Honestly, the main problem is people not knowing when to use OO. Objects are great, but they aren't the solution to every problem. It also depends on your goals and the platform you're working on.

To give an example: if you're writing code for a small embedded system, you have to be careful. OO typically costs more memory than other approaches (per-object overhead like vtable pointers, plus the pull toward heap allocation), and don't even think about garbage collection. Now you might say "just use a microcontroller with more memory". The additional cost on a single device is indeed small, but step into a production environment where you make thousands of units and a few cents more makes a large difference. And don't claim it's a good idea just because Arduino sort of gets away with it. We also live in a world where power efficiency is becoming important as more and more devices are battery powered, and more powerful microcontrollers and processors don't perform all that well in that respect. The practical answer is usually to program in C, or go all the way to assembly, which leads to hard-to-manage code; there doesn't seem to be much of a middle ground.

Because of that, I've grown rather fond of VHDL (and other similar languages, for that matter) lately. It's still quite different from OO programming, but I find it has a lot of similarities as well. Sadly it's not well suited for writing computer programs, and most programmers are too bogged down in conventional programming paradigms to become any good at it. But this probably isn't very relevant to most people here.

To sum up my argument: use OO when it gives you an advantage; don't when it overcomplicates things.
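To make the memory point concrete, here's a minimal sketch in C (a hypothetical sensor driver; all the names are made up): emulating OO-style dynamic dispatch with a function pointer per object costs extra bytes of RAM for every object, which is exactly the kind of overhead that hurts when RAM is measured in hundreds of bytes.

```c
#include <stdint.h>
#include <stdio.h>

/* "OO-style": each object carries a pointer for dynamic dispatch. */
struct sensor_oo {
    uint16_t raw;
    int (*read)(struct sensor_oo *self);  /* 2-4 extra bytes of RAM per object */
};

static int read_adc_oo(struct sensor_oo *self) { return self->raw; }

/* Plain C: data only; the call target is fixed at compile time. */
struct sensor_plain {
    uint16_t raw;
};

static int read_adc(const struct sensor_plain *s) { return s->raw; }

int main(void) {
    struct sensor_oo a = { .raw = 42, .read = read_adc_oo };
    struct sensor_plain b = { .raw = 42 };

    /* On a typical 32-bit MCU this prints something like 8 vs 2 bytes. */
    printf("OO-style object: %zu bytes, plain struct: %zu bytes\n",
           sizeof a, sizeof b);
    printf("readings: %d %d\n", a.read(&a), read_adc(&b));
    return 0;
}
```

Multiply that per-object difference by a few hundred objects (or add a real vtable and heap allocator) and it starts to matter on a part with 2 KB of RAM.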
User 4389162