The Software Architecture Demon
-
That's just coding. Just last night I had to:
1. Determine why a driver was randomly dropping characters from strings printed to a screen, but only for small text. Turns out the driver wasn't tested very well with my hardware and I had to modify its timing.
2. Determine why text wasn't displaying after I put a solid white background behind it. I have to draw it twice. I still don't know why. See also: dodgy driver.
3. Implement my own HTTP chunked encoding scheme just so I could bulk upload some JSON from a machine with a total of just over 500K of RAM. Worse, I had to timestamp my uploads with a valid date-time, but my machine has no clock. I was ... creative.
#1 and #2 sent me to some forums to post questions for which I got no answers. #3 simply took hours. That's just development, so when I say you've got a good handle on things, I still think you do. Your machine isn't on fire, you're not going bald from stress, and you're not seriously contemplating a career in pizza delivery if you make it out of this project alive. You're fine. :thumbsup:
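For the curious, the upload in #3 comes down to standard HTTP/1.1 chunked transfer framing: each chunk is a hex length, CRLF, the bytes, CRLF, and a zero-length chunk ends the body. This is a hedged sketch of that framing in Python, not the code from the post, and the JSON payload is made up:

```python
# Sketch of HTTP/1.1 chunked transfer framing (RFC 7230, section 4.1).
# Illustrative only; the device in the post presumably did this in C.

def encode_chunk(payload: bytes) -> bytes:
    """Frame one chunk: hex length, CRLF, payload bytes, CRLF."""
    return b"%X\r\n%s\r\n" % (len(payload), payload)

def chunked_body(chunks):
    """Yield each framed chunk, then the zero-length terminator."""
    for c in chunks:
        if c:                      # an empty chunk would end the body early
            yield encode_chunk(c)
    yield b"0\r\n\r\n"             # last-chunk plus trailing CRLF

body = b"".join(chunked_body([b'{"temp":', b"21.5}"]))
# body == b'8\r\n{"temp":\r\n5\r\n21.5}\r\n0\r\n\r\n'
```

The win on a 500K-RAM device is that you can stream pieces as you produce them, with no Content-Length header and so no need to buffer the whole JSON document first.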
Real programmers use butterflies
honey the codewitch wrote:
Your machine isn't on fire, you're not going bald from stress, and you're not seriously contemplating a career in pizza delivery
How do you know all these things? Are you watching me!? :~ There's actually quite a lot of stuff stressing me out at the moment, that stupid API being one of them :laugh: What I fear most right now is that when Christmas comes I won't be able to take my well-deserved two weeks off because my work isn't done yet :omg:
Best, Sander Azure DevOps Succinctly (free eBook) Azure Serverless Succinctly (free eBook) Migrating Apps to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
-
Sander wrote:
How do you know all these things? Are you watching me!? :~
You've downloaded my code before, silly. So of course I am always watching you. Through your computer. What the hell did you think most of those "parsers" actually were anyway? Why else would I write 20 incomprehensible but nevertheless popular projects for people to download? Spyware, my good man. The money is great. By the way, reset your passwords.
Real programmers use butterflies
-
I used to be a software architect. I think that's part of why I employ such a jaundiced eye when it comes to layered service architectures, sweeping design patterns "just because", and drowning in UML "because reasons". It's true that when you're dealing with million-dollar implementations, multiple deployment points, and disparate teams, a lot of this abstraction can be useful. But how common is that in most people's development? I know it is for some of you, sure, but I think you're in the minority, or at least projects like those are in the minority. Not everyone is Plum Creek or Alcoa.

It seems like the field of software architecture has taken on a life of its own, and coupled with CPU cores to waste and infinite scaling out, it has - and I'll just say it - poisoned software development. Just because you know how to do something doesn't mean you should. Most software application architectures do not survive contact with clients plus the erosion of time. They have a shelf life of significantly less than 10 years without some major portion of them being retooled. There are exceptions to this, but designing every solution to be that exception is a waste of time, money, and creative energy.

I'm also going to come out and say it makes things harder to maintain. When you're working with 20 different classes and interfaces where 3 would do, it just increases the learning curve. There are definitely diminishing returns when it comes to decoupling software from itself, and you run into the cost/benefit wall pretty fast. It can only take you so far. It's best not to overdo it.

Every fancy little UML entity you drop into your project increases its cognitive load for other developers. Personally, I wouldn't care about that, because "cognitive load" is fun as far as I'm concerned, but most people just want to do their work and go home, not spend odd hours studying someone else's work just so they can use it. Keep It Simple, Stupid. Whatever happened to that? :sigh:
Real programmers use butterflies
My first recollection of "software life" was 5 years. The "architecture" issue I see is that most are doing "piece" work, without caring / knowing / wondering how it fits into a bigger picture. A million (code) monkeys at a million keyboards, and eventually we get some useful algorithms / patterns.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it. ― Confucian Analects: Rules of Confucius about his food
-
honey the codewitch wrote:
You've downloaded my code before, silly. So of course I am always watching you. Through your computer.
:laugh: I know for a fact that's not true. The horrors you'd have seen on my computer would've left you blind and unable to type that message :D Many a ransomware criminal has paid me to let them unlock my computer ;p On the other hand, you use braceless if-statements, and I can't think of a more unspeakable abomination than that :~
Best, Sander Azure DevOps Succinctly (free eBook) Azure Serverless Succinctly (free eBook) Migrating Apps to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
-
Sander wrote:
The horrors you'd have seen on my computer would've left you blind and unable to type that message :D
I have spent some time spelunking the depths of coding depravity, it's true, but just look at these gems I've found! My precious! Your computer is tame. I don't even see a dodgy and outdated copy of GRUB in your bootloader code. Where is your sense of adventure?
Real programmers use butterflies
-
honey the codewitch wrote:
Your computer is tame. I don't even see a dodgy and outdated copy of GRUB in your bootloader code. Where is your sense of adventure?
Ok, you've convinced me, changing my passwords now :laugh:
Best, Sander Azure DevOps Succinctly (free eBook) Azure Serverless Succinctly (free eBook) Migrating Apps to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript
-
honey the codewitch wrote:
Every fancy little UML entity you drop into your project increases the cognitive load of your project for other developers.
OK, I had to look up UML... heard of it long ago and found it useless. :laugh: I'd rather keep these diagrams in my head... much easier to update/maintain! IMHO, a good database design tells the whole story. Most abstraction can be handled in views/procs. (Talking LOB apps here.) BTW, in 20+ years I've never seen a specification document/plan. The closest thing might be an occasional UI mockup in a screen grab or, worse, scribbled on a notepad. (Or even worse, a screen grab of an image of scribbling on a notepad! :| )
"Go forth into the source" - Neal Morse "Hope is contagious"
-
Quote:
IMHO, a good database design tells the whole story. Most abstraction can be handled in views/procs.
Yeah, it's a little different when you're not doing business apps. IoT devices, developer tools, that sort of thing: you don't necessarily have a database to go by. Although I'd also argue that any validation those procedures are doing should be done on the front end as well, to avoid bad/spurious network traffic. If they're well designed, hopefully they add to the story. :)
Real programmers use butterflies
-
honey the codewitch wrote:
There are definitely diminishing returns when it comes to decoupling software from itself, and you run into the cost/benefit wall pretty fast.
Quote:
decoupling software from itself
I like that phrase! Though I wish, when it comes to people, some people were more decoupled from themselves, and others less decoupled. :laugh:
Latest Articles:
Thread Safe Quantized Temporal Frame Ring Buffer
-
Quote:
I like that phrase! Though I wish, when it comes to people, some people were more decoupled from themselves, and others less decoupled. :laugh:
Yes to this. Glad I have some support here. Everyone but you and Sander is side-eyeing me now. :laugh:
Real programmers use butterflies
honey the codewitch wrote:
Everyone but you and Sander are all sideeying me now.
Nah, we're not. I've thought similarly for a while now. Sticky-tape solutions are appropriate in all sorts of places. Slapping a newsletter on the fridge? Sticky-tape. Putting up a car-port? Bolts. But how much of the software world handles slapping a newsletter on the fridge like this: measure the thickness of the door's steel, weigh the newsletter, calculate the load-bearing ability of the door skin, add reinforcement to handle larger photos in the future, drill and countersink holes, punch holes in the corner of the picture, and use the supplied Allen key to fasten the bolts that secure the pic.
-
honey the codewitch wrote:
not spend odd hours studying someone else's work just so they can use it
Part of the architecture is to structure the system, or the code, exactly so that people who want to do this can do it, and are not bothered with higher level topics.
honey the codewitch wrote:
I'm also going to come out and say it makes things harder to maintain
No. Over-engineered code or undocumented code is hard to maintain, whether it was created from highly sophisticated design patterns and architecture principles or "by hand", but you cannot say that using architecture design always makes code harder to maintain. Fifteen-year-old multi-threaded spaghetti code resulting from one guy's fifteen years of company-time developer show is hard to maintain. Always. Actually, UML and SysML are tools, and like every tool, they should be used appropriately, to fulfil a specific purpose, to make sense. I agree that using a tool just because you can is not a good strategy, but on the other hand, like any tool, they can come in very handy if well used.
Thing is, I hardly ever see a development task that you could do and "not be bothered with higher level topics". In my experience, you have vertical integration in the system from front end to database, and to implement a feature that is useful to a user you have to have insight into all those layers. Of course there are some local fixes, but usually you affect some other part anyway. For most other work you have to have insight into what the user will do, what the business wants to achieve, and the general direction of the system architecture.
-
honey the codewitch wrote:
Just because you know how to do something doesn't mean you should.
My take on this is that people feel that if they don't "foresee and prevent" issues like duplication of code, they are lesser coders. I try to explain that we should use the rule of three: if something is in 2 places, that is still not duplication. But what I get in code reviews and discussions is "you violate DRY", as if it were some holy grail and you were a lesser human for having 2 lines that look alike. The reality is that this is an emotional problem, not a technical one. Usually people want to do a good job, or to be better than others at their work. I can point out 10 logical reasons why those 2 lines should not be merged into a single line, but that is still not going to convince someone's pride.
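To make the rule-of-three point concrete, here is a small hypothetical sketch (the order fields and messages are invented): two checks that have the same shape today but encode rules that can change independently, which is exactly why merging them into one "generic" helper on the second occurrence couples things prematurely:

```python
# Hypothetical order validation. The two checks below look alike, but
# "quantity" and "discount" are governed by different business rules that
# can diverge at any time; by the rule of three, two look-alike lines are
# not yet duplication worth abstracting.

def validate_order(order: dict) -> list:
    errors = []
    if order["quantity"] <= 0:        # rule: must order at least one item
        errors.append("quantity must be positive")
    if order["discount"] < 0:         # rule: discounts may be zero, never negative
        errors.append("discount must not be negative")
    return errors

print(validate_order({"quantity": 0, "discount": -1}))
# A premature DRY refactor would force both through one shared helper and
# break the moment one rule changes (say, a maximum discount appears).
```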
-
Quote:
Thing is I hardly ever see development task that you could do and "not be bothered with higher level topics".
Ever worked in the embedded world (with multiple layers of SW from different companies), or for the DoD (where SW developer A does not know what the guy sitting next to him is coding for)?
-
I agree to a point.
Rage wrote:
Part of the architecture is to structure the system, or the code, exactly so that people who want to do this can do it, and are not bothered with higher level topics.
This is how it should be. In my professional experience, a software project would sometimes be designed appropriately for its size and the team situation. In many cases, it simply wasn't. People would endlessly decouple things that only one person was ever going to work on, and this kind of thing happens all the time. The design would end up taking up the majority of the bandwidth even well past the design phase, after the project was supposed to be nailed down. I've even seen projects deathmarch over it. Basically, the project was thought to death. Is it as common as badly designed or simply undesigned software? No. Is it destructive and harmful to projects? Yes! I guess, to sound cliche, it's about moderation. You have to make the design appropriate for the project. I'm not dismissing UML entirely, either. But it's one of those things that strikes me as having the perception of being far more useful than it actually is.
Real programmers use butterflies
honey the codewitch wrote:
People would endlessly decouple things that only one person was ever going to work on, and this kind of thing happens all the time.
There are also people who religiously follow a template procedure for coding, irrespective of how the project is currently organised. For example, in a project which uses OOP practices - so normally, if you have a Widget id and want the Widget object, you'd call the static method Widget.Find(id) - I've worked with people who write an IWidgetFinder interface, then a WidgetFinder class with a constructor which takes a delegate function to handle errors; so to call it, you first instantiate the WidgetFinder with the error handler, then you can call WidgetFinder.Find(id)! All this repeated for dozens of trivial functions, with interfaces which are only ever going to be used by one class, classes that are only used from one place in the project, and the same error handling that's used everywhere! And, in this project, much of the time the end result comes down to an EF call like...
DBcontext.Widgets.Where(w => w.id == id).First();
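For contrast, here is a deliberately exaggerated sketch of that ceremony in Python; Widget/WidgetFinder follow the names in the post, the dict stands in for the database table, and none of this is real project code:

```python
# The ceremony described above, caricatured: an interface, a class, and an
# injected error handler wrapping what is ultimately a one-line lookup.
from abc import ABC, abstractmethod

WIDGETS = {42: "sprocket"}              # stand-in for the Widgets table

class IWidgetFinder(ABC):
    @abstractmethod
    def find(self, widget_id: int): ...

class WidgetFinder(IWidgetFinder):
    def __init__(self, on_error):
        self.on_error = on_error        # the same handler used everywhere

    def find(self, widget_id: int):
        try:
            return WIDGETS[widget_id]
        except KeyError as exc:
            return self.on_error(exc)

# The ceremonial call site...
finder = WidgetFinder(on_error=lambda exc: None)
print(finder.find(42))

# ...versus the direct equivalent of the one-line EF query:
print(WIDGETS.get(42))
```

Both paths return the same value; the first just adds an interface, a class, and a constructor argument that every caller must repeat.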
-
Quote:
All this repeated for dozens of trivial functions with interfaces which are only ever going to be used by one class and classes that are only used from one place in the project...
YES! This kind of thing. It's unnecessary. Code should be as simple as it can be and no simpler.
Real programmers use butterflies
-
honey the codewitch wrote:
Keep It Simple Stupid. Whatever happened to that?
After 5 years of academic research on the subject and 6 years of commercial R&D, primarily as a software architect, my conclusions are very similar to yours. A low learning curve and a straightforward structure that only slowly gains complexity over time is the best possible outcome.

Now, after the last 2 years of working in an almost-enterprise-level company, I also notice that almost no one correctly values that conclusion. Some scoff at the simplicity and take it as a personal challenge of sorts, because they've been openly passionate about more intricate solutions in the past. Some drastically undervalue the effort involved, as they equate "simple" with "not a lot of work", and immediately try to displace it with, somewhat ironically, an off-the-cuff idea that's incomplete and more complex to execute.

Let me give you some advice on how to deal with these people. Instead of explaining why a low learning curve is an integral part of a good design, let them present a small practical example of their own proposal, and give them an audience that judges them on how easy it is to understand and how easy it will be to maintain. In my experience, most people take the bait in a heartbeat. About half quit once they realize their mistake (they tend to recognize the 1+ pages of "having to explain basic stuff first" as a failure and a lesson in humility), and the ones that do present their solution often feel embarrassed about the whole thing once they realize no one really understands the words they are saying. Lessons will be learned and egos will be bruised. Keep a respectful tone throughout and you'll manage just fine.
-
Quote:
Instead of explaining why a low learning curve is an integral part of a good design, let them present a small practical example of their own proposal, and give them an audience that judges them on how easy it is to understand, and how easy it will be to maintain.
That's some great advice. I'm freelance now doing IoT stuff, so there's no room for GoF patterns and UML in my code - simple is king, and I love it - but I'll definitely give what you said a try should I find myself in that role again.
Real programmers use butterflies
-
Quote:
Ever worked for embedded world (with multiple layer of SW from different companies) or for DoD (where SW developer A does not know what the guy sitting next to him is coding for)?
I have worked in a couple of places, but maybe I'm biased because I would not fit into a "just code that, no questions asked" approach.