OO is not all that and a bag of chips
-
I think I probably muddied my point with my lament at the end about coding being worse off for OO. It was intended as a kind of wry way of saying it's been overused so much that maybe it has been harmful overall. I use OO myself where I find it appropriate. My post shouldn't be read as a universal condemnation of it. It's more about how it's often used.
Real programmers use butterflies
TL;DR: Most excesses are bad.
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
I know what aspects are. To be clear: where they tie in here is that you create "services" for your classes, implemented as templates that take your own class as a template argument. Those let you encapsulate orthogonal class services to a degree. It doesn't always allow for complex cross-cutting scenarios, and arbitrarily wrapping methods with pre- and post-code takes some vtbl foolery (Microsoft takes this approach with certain ATL internals), but you get your aspects this way. It's kind of primitive and just as "kludgy but powerful" as templates are. Between the above and using type traits and such, you can get it to do a lot of what you would do with aspect-oriented built-ins in a higher-level language.
Real programmers use butterflies
I think I see what you're getting at. Is there an example that you can point to? If I'm guessing correctly, it would probably fit in well with what Sutter suggests in Virtuality[^]. If every virtual function is private and invoked by a non-virtual function that is public, that provides a place to add pre- and post-code. But it would certainly have some limitations.
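If it helps, here's the shape I have in mind - just a minimal sketch, with made-up names:

#include <iostream>

class Task {
public:
    void Run()                    // public, non-virtual entry point
    {
        std::cout << "pre: validate/trace/lock\n";
        DoRun();                  // the customizable step
        std::cout << "post: verify/trace/unlock\n";
    }
    virtual ~Task() = default;
private:
    virtual void DoRun() = 0;     // private virtual: derived classes override it,
                                  // but only Run() can invoke it
};

class HelloTask : public Task {
private:
    void DoRun() override { std::cout << "hello\n"; }
};

int main() {
    HelloTask task;
    task.Run();                   // prints pre, hello, post
}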
Robust Services Core | Software Techniques for Lemmings | Articles
-
> I'm curious as to what you meant by C++ changing your attitude towards objects.

Maybe he meant that C++ is a multi-paradigm programming language, as opposed to Java, whose selling point was that it's the one true OOP?

"I actually never said that [C++ is an object oriented language]. It's always quoted, but I never did. I said that C++ supports object oriented programming and other techniques." Bjarne Stroustrup, Artificial Intelligence Podcast 42:20

> What if it had to be big?

Do it in OOP and you will make it big.

"Once you reach a particular size, anything beyond that is no longer a reflection of functionality." Kevlin Henney, GOTO 2016 • Small Is Beautiful 55:40

The Facebook iOS app has over 18000 classes. How do you compare it to Quake 3, which can render a 3D world at 30 FPS on a Pentium 3? My guess is that the Quake team could have developed that FB app with 5 or no classes in 10x less time, with the app being 10x less buggy and running at 10x the speed of the current iOS FB app. At the same time, the iOS FB team could hardly construct a PAC-MAN type game. (No disrespect to pac-man coderz.)

OOP supporters always assume that the only pure, moral way to write code is OOP and that elite developers do only OOP.
sickfile wrote:
Do it in OOP and you will make it big.
You might have a lot of boilerplate, which is a PITA but not "big". It would be big if, say, functional programming was a much better fit for the problem.
Quote from Kevlin Henney:
"Once you reach a particular size, anything beyond that is no longer a reflection of functionality."
It's often true that inside a big system, there's a small system struggling to get out. But as a blanket statement, this quote is just a platitude.
sickfile wrote:
The Facebook iOS app has over 18000 classes. How do you compare it to Quake 3, which can render a 3D world at 30 FPS on a Pentium 3?
18000 classes is a joke. But you can't compare it to the portion of a game that renders graphics, which is highly algorithmic and doesn't require much OO, although your point might be that this wouldn't stop some people from trying to do it that way.
Robust Services Core | Software Techniques for Lemmings | Articles
-
But also, with the failure rate of software I'm glad we don't build bridges and skyscrapers. :laugh:
Real programmers use butterflies
Software failure isn't caused by the tools or paradigms used to develop it - it's caused by the programmers not doing it right.
".45 ACP - because shooting twice is just silly" - JSOP, 2010
-----
You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010
-----
When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
-
Software failure isn't caused by the tools or paradigms used to develop it - it's caused by the programmers not doing it right.
".45 ACP - because shooting twice is just silly" - JSOP, 2010
-----
You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010
-----
When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
#realJSOP wrote:
it's caused by the programmers not doing it right.
exactly.
-
sickfile wrote:
Do it in OOP and you will make it big.
You might have a lot of boilerplate, which is a PITA but not "big". It would be big if, say, functional programming was a much better fit for the problem.
Quote from Kevlin Henney:
"Once you reach a particular size, anything beyond that is no longer a reflection of functionality."
It's often true that inside a big system, there's a small system struggling to get out. But as a blanket statement, this quote is just a platitude.
sickfile wrote:
The Facebook iOS app has over 18000 classes. How do you compare it to Quake 3, which can render a 3D world at 30 FPS on a Pentium 3?
18000 classes is a joke. But you can't compare it to the portion of a game that renders graphics, which is highly algorithmic and doesn't require much OO, although your point might be that this wouldn't stop some people from trying to do it that way.
Robust Services Core | Software Techniques for Lemmings | Articles
Greg Utas wrote:
although your point might be that this wouldn't stop some people from trying to do it that way.
as in IoT... just because you can doesn't mean you should :rolleyes: :-D
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
-
Disclaimer: Unpopular opinion

A lot of coders spend a lot of lines of code dividing things into tiny steps, which they then make whole classes for, and abstract everything to the Nth degree, often even when the abstraction is not helpful. Back when I was a green coder, I used to write OO code somewhat like this. Then C++ changed me. I stopped relying on objects so much. This bled over into other languages.

Now my code is about expedience. For example, I created a little HTTP server that does the request/response cycle in a single method, with two support structs instead of a dozen classes (toy sketch at the end of this post). My code is smaller, faster, easy enough to understand if you aren't a beginner, and overall better for it.

It's getting to the point where I think OO is an ill-conceived paradigm - and not even because it's Broken As Designed (it's not) but because it gets so overused that the dev world may have been better off with something else.
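The sketch - a toy illustration of the shape I mean, not the actual code; it assumes plain POSIX sockets and skips all error handling:

// one routine drives the whole request/response cycle,
// and the only "types" are two plain structs
#include <cstdio>
#include <string>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

struct http_request  { std::string method, path; };
struct http_response { int status; std::string body; };

void serve(unsigned short port) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(port);
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, 8);
    for (;;) {
        int client = accept(listener, nullptr, nullptr);
        char buf[4096];
        ssize_t n = recv(client, buf, sizeof(buf) - 1, 0);
        if (n <= 0) { close(client); continue; }
        buf[n] = 0;

        // parse just the request line: "GET /path HTTP/1.1"
        char method[16] = {0}, path[1024] = {0};
        std::sscanf(buf, "%15s %1023s", method, path);
        http_request req{method, path};

        http_response res{200, "hello from " + req.path + "\n"};
        std::string out =
            "HTTP/1.1 " + std::to_string(res.status) + " OK\r\n"
            "Content-Length: " + std::to_string(res.body.size()) + "\r\n"
            "Connection: close\r\n\r\n" + res.body;
        send(client, out.data(), out.size(), 0);
        close(client);
    }
}

int main() { serve(8080); }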
Real programmers use butterflies
I have a Console application that disagrees :D
using System;
namespace ConsoleApp1
{
class Program
{
static void Main()
{
IMessageGetter messageGetter = new BoohCodewitchMessageGetter();
IMessagePrinter messagePrinter = new ConsoleMessagePrinter();
IInputAwaiter inputAwaiter = new ConsoleInputAwaiter();
string message = messageGetter.GetMessage();
messagePrinter.PrintMessage(message);
inputAwaiter.AwaitInput();
}
}

public interface IMessageGetter { string GetMessage(); }
public interface IMessagePrinter { void PrintMessage(string message); }
public interface IInputAwaiter { void AwaitInput(); }

public abstract class BaseMessageGetter : IMessageGetter { public abstract string GetMessage(); }
public abstract class BaseMessagePrinter : IMessagePrinter { public abstract void PrintMessage(string message); }
public abstract class BaseInputAwaiter : IInputAwaiter { public abstract void AwaitInput(); }

public class BoohCodewitchMessageGetter : BaseMessageGetter { public override string GetMessage() => "Booh codewitch, your opinion sucks!"; }
public class ConsoleMessagePrinter : BaseMessagePrinter { public override void PrintMessage(string message) => Console.WriteLine(message); }
public class ConsoleInputAwaiter : BaseInputAwaiter { public override void AwaitInput() => Console.ReadKey(); }
}
Best, Sander sanderrossel.com Migrating Applications to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript Object-Oriented Programming in C# Succinctly
-
I have a Console application that disagrees :D
using System;
namespace ConsoleApp1
{
class Program
{
static void Main()
{
IMessageGetter messageGetter = new BoohCodewitchMessageGetter();
IMessagePrinter messagePrinter = new ConsoleMessagePrinter();
IInputAwaiter inputAwaiter = new ConsoleInputAwaiter();
string message = messageGetter.GetMessage();
messagePrinter.PrintMessage(message);
inputAwaiter.AwaitInput();
}
}

public interface IMessageGetter { string GetMessage(); }
public interface IMessagePrinter { void PrintMessage(string message); }
public interface IInputAwaiter { void AwaitInput(); }

public abstract class BaseMessageGetter : IMessageGetter { public abstract string GetMessage(); }
public abstract class BaseMessagePrinter : IMessagePrinter { public abstract void PrintMessage(string message); }
public abstract class BaseInputAwaiter : IInputAwaiter { public abstract void AwaitInput(); }

public class BoohCodewitchMessageGetter : BaseMessageGetter { public override string GetMessage() => "Booh codewitch, your opinion sucks!"; }
public class ConsoleMessagePrinter : BaseMessagePrinter { public override void PrintMessage(string message) => Console.WriteLine(message); }
public class ConsoleInputAwaiter : BaseInputAwaiter { public override void AwaitInput() => Console.ReadKey(); }
}
Best, Sander sanderrossel.com Migrating Applications to the Cloud with Azure arrgh.js - Bringing LINQ to JavaScript Object-Oriented Programming in C# Succinctly
*headdesk*
Real programmers use butterflies
-
Disclaimer: The Big Brother is watching you!

There was a time when such claims could, at best, cost you your job; at worst, you could have gotten killed by an angry mob of mostly rookie developers who want to show off. I remember how impressed I was with multiple inheritance, assignment overloading and copy constructors... One day I realized what I had always known as a kid: programming is data processing.

"in C++ as in Simula a class is a user defined type."
"Every language that uses the word class, for type, is a descendent of Simula"
Bjarne Stroustrup

They should have called OOP "class-oriented development", because it appeals to class-obsessed chauvinists. Contrary to popular belief, objects are only data. You could have a pointer to an array of pointers to functions here and there, or a reference to a function, but that's data too. No matter what language you use, it all comes down to the same assembly language. Even before that, in the compilation process, programs are translated to a common, language-neutral data representation. So, for EVERY program in Java you could write a program in C that gets translated into the same assembly code the CPU will execute. But you could hardly write a Java program for ANY C program that will be translated into the same assembly code.

"The very first Java compiler was developed by Sun Microsystems and was written in C using some libraries from C++. Today, the Java compiler is written in Java, while the JRE is written in C."
"The Sun JVM is written in C"
Provided as-is from Stack Overflow. C implements Java, but Java cannot implement C.

Back to the topic, this is what I find most appealing:

"We don’t have a mathematical model for OOP. We have Turing machines for imperative (procedural) programming, lambda-calculus for functional programming and even pi-calculus (and CSP by C.A.R. Hoare again and other variations) for event-based and distributed programming, but nothing for OOP. So the question of “what is a ‘correct’ OO program?”, cannot even be defined; (much less, the answer to that question.)"

It was given as an answer on Quora to the question "Why did Dijkstra say that 'Object-oriented programming is an exceptionally bad idea which could only have originated in California'?"

Greetings.
sickfile wrote:
We don’t have a mathematical model for OOP
That's an extremely good point. To be fair, as I've said elsewhere in the thread, I use OO in places - like if I expose an API for whatever I'm writing, that will often be OO. And I tend to use OO here and there for other reasons when I'm stuck in a hard OO environment like Java or C#. I limit its use, though:
1. Does it help explain the code?
2. Does it work with the rest of the code rather than against it?
3. Does it encapsulate an abstraction such that it makes it simpler to employ?
There are so many times when the answers to those questions are no, and I see people using objects anyway. See @SanderRossel's console app upthread - he was ribbing me, but it's a good example of class misuse.
Real programmers use butterflies
-
Greg Utas wrote:
although your point might be that this wouldn't stop some people from trying to do it that way.
as in IoT... just because you can doesn't mean you should :rolleyes: :-D
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
Nelek wrote:
as in IoT
You mean my wifi enabled AI toaster is overkill? :-D
Real programmers use butterflies
-
Software failure isn't caused by the tools or paradigms used to develop it - it's caused by the programmers not doing it right.
".45 ACP - because shooting twice is just silly" - JSOP, 2010
-----
You can never have too much ammo - unless you're swimming, or on fire. - JSOP, 2010
-----
When you pry the gun from my cold dead hands, be careful - the barrel will be very hot. - JSOP, 2013
I didn't intend to imply otherwise, despite the overarching topic. I switched gears.
Real programmers use butterflies
-
I think I see what you're getting at. Is there an example that you can point to? If I'm guessing correctly, it would probably fit in well with what Sutter suggests in Virtuality[^]. If every virtual function is private and invoked by a non-virtual function that is public, that provides a place to add pre- and post-code. But it would certainly have some limitations.
Robust Services Core | Software Techniques for Lemmings | Articles
I'd have to dig up some old code off of my OneDrive, if it's even there - I wasn't using GitHub until more recently. People usually don't describe it the way I do. It's a technique I picked up while doing research for a rather ambitious business integration system with COM+-like features. Someone demonstrated cross-cutting functionality using the Curiously recurring template pattern - Wikipedia[^]. It gave me one of those aha moments, and ever since, whenever I see CRTP like that, I half expect it. Dr. Dobb's has an article about doing cross-cutting, but they don't use generic programming to do it: Aspect-Oriented Programming & C++ | Dr Dobb's[^]. But if you poke at it you can see there are opportunities for factoring it into a template.
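From memory, the shape is roughly this - a from-scratch sketch with made-up names, not that old code. It only wraps one call with logging, but the same trick works for any orthogonal pre/post behavior:

// the "service" is a template that takes your own class as its argument (CRTP)
// and wraps designated calls with cross-cutting pre/post behavior
#include <iostream>
#include <utility>

template <typename Derived>
struct logged {                          // the cross-cutting "service"
    template <typename F, typename... Args>
    auto invoke(const char* name, F f, Args&&... args) {
        std::cout << "enter " << name << "\n";
        auto& self = static_cast<Derived&>(*this);
        auto result = (self.*f)(std::forward<Args>(args)...);
        std::cout << "leave " << name << "\n";
        return result;
    }
};

struct account : logged<account> {       // the class passes itself to the service
    int balance = 0;
    int deposit_impl(int amount) { return balance += amount; }
    int deposit(int amount) { return invoke("deposit", &account::deposit_impl, amount); }
};

int main() {
    account a;
    a.deposit(42);                        // prints enter/leave around the call
    std::cout << a.balance << "\n";       // 42
}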
Real programmers use butterflies
-
TL;DR: Most excesses are bad.
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
Fair enough. It just seems like a popular thing to do in this case. I see it all over in .NET projects.
Real programmers use butterflies
-
honey the codewitch wrote:
But also, with the failure rate of software I'm glad we don't build bridges and skyscrapers.
The % of "so-called" programmers is much, much, much bigger than the % of architects.
M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you Rating helpful answers is nice, but saying thanks can be even nicer.
Also, CS is only somewhat engineering. It's half voodoo. :laugh:
Real programmers use butterflies
-
Nelek wrote:
as in IoT
You mean my wifi enabled AI toaster is overkill? :-D
Real programmers use butterflies
It will be when someone hacks into it to burn your house down. :laugh:
Robust Services Core | Software Techniques for Lemmings | Articles
-
sickfile wrote:
We don’t have a mathematical model for OOP
That's an extremely good point. To be fair, as I've said elsewhere in the thread, I use OO in places - like if I expose an API for whatever I'm writing, that will often be OO. And I tend to use OO here and there for other reasons when I'm stuck in a hard OO environment like Java or C#. I limit its use, though:
1. Does it help explain the code?
2. Does it work with the rest of the code rather than against it?
3. Does it encapsulate an abstraction such that it makes it simpler to employ?
There are so many times when the answers to those questions are no, and I see people using objects anyway. See @SanderRossel's console app upthread - he was ribbing me, but it's a good example of class misuse.
Real programmers use butterflies
"We don't have a mathematical model for OOP" sounds like a lament from a formal methods fanboi, in which case the objection can be ignored.
Robust Services Core | Software Techniques for Lemmings | Articles
-
"We don't have a mathematical model for OOP" sounds like a lament from a formal methods fanboi, in which case the objection can be ignored.
Robust Services Core | Software Techniques for Lemmings | Articles
Heh. I look at it this way: if we don't have a mathematical model for it, then we're limited in the sorts of transformations we can do to the code. Why would anyone want to transform code? A compiler does just that. A mathematical model lends itself to rigorous checking as well. I'm not a purist about it, but I certainly see the advantages, and it's one of the reasons I'm fond of functional programming.
Real programmers use butterflies
-
*headdesk*
Real programmers use butterflies
That code is uber 1337! :D But usually... TL;DR: I agree with your post.

The long version: I tend to write a bunch of interfaces (as necessary) that explain the function of the code. Take, for example, an IUserRepository. When I see an (ASP.NET Core) Controller being injected with an IUserRepository, I know this Controller does something with users. I don't know (or care) where the users come from, but I know I need them. If you look at the specific code that uses the IUserRepository, you'll find stuff like userRepository.GetUser(id), which is way more descriptive than some code that accesses a database. So in that sense, I often use classes and methods to describe what my code is doing. That, for me, and to a lesser extent re-use of code, are the biggest pros of OOP.

I'm not a big fan of re-use anymore. Back in the day I re-used all the things, but just because two pieces of code incidentally need the same results doesn't mean they do the same thing. I now make a clear split between functional re-use and technical re-use. Functional re-use is rare, because it would mean a user has two ways to do the exact same thing. It happens, but not all that often.

I think I write my code less "OOP" than seven or even five years ago. The OOP I still write is more architectural in nature (I now make heavy use of DI and interfaces, but not so much of base classes and such). I've written some simple programs in Haskell, a purely functional language, but I think that doesn't work all that well. It comes naturally to think in objects and to have side effects at some point. Nevertheless, I've started to write my OOP code in a more functional style, mostly without side effects. I'm pretty sure my bug-to-code ratio went down since I adopted the no-side-effects approach. A function just does its thing and produces a result, but it won't affect the overall flow or state of the program. All the results come together in the calling function, mostly a controller, and then I do all the side effects in one spot. Makes the code a lot easier to read and you have a lot less to think about. It's still OOP, so it doesn't always work like that, but I try when I can.

Another change in my code is the use of delegates instead of one-function interfaces. Makes for less abstraction, fewer classes, and it's still easy to read. The biggest game changer for me, and this saved me a lot of bugs, was when I started to use curly braces for one-line if and loop statements though :D
Best, Sander
-
But also, with the failure rate of software I'm glad we don't build bridges and skyscrapers. :laugh:
Real programmers use butterflies
honey the codewitch wrote:
I'm glad we don't build bridges and skyscrapers.
We've been building physical structures for thousands of years, and writing software for less than 80. Architecture and civil engineering are obviously more mature disciplines than software engineering. Assuming civilization survives, I am certain that our software development efforts will be viewed by future engineers in the same manner that the builders of mud huts are viewed by modern civil engineers. (But we do build some very impressive mud huts! :-\ )
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
-
honey the codewitch wrote:
I'm glad we don't build bridges and skyscrapers.
We've been building physical structures for thousands of years, and writing software for less than 80. Architecture and civil engineering are obviously more mature disciplines than software engineering. Assuming civilization survives, I am certain that our software development efforts will be viewed by future engineers in the same manner that the builders of mud huts are viewed by modern civil engineers. (But we do build some very impressive mud huts! :-\ )
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows. -- 6079 Smith W.
Software techniques will undoubtedly improve, but there are things from 45 years ago that I don't find primitive. I think software will always be difficult because, unlike engineers, we continually evolve existing products and, unlike mechanics, we repair them while they are up and running.
Robust Services Core | Software Techniques for Lemmings | Articles