It's an OOP world... is it? ...still?
-
Sounds like good advice to me... ...and I learned another buzzword ;)
Anyone who wants to go back to spaghetti code needs to be whacked over the head with a bat to improve their brain function. Yeah, neither you nor anyone else has suggested it; just putting in my two cents. I don't know anything about functional programming, but breaking up code into smaller units is a good idea in any language.

I had one case where a fellow student calculated it would take 6 months to run the computer through the possible combinations to solve the problem presented. I wrote code that found solutions and ran in 5 minutes. I had enough time to verify my code had problems, and I left class and the problem behind. I rewrote it later, and that version should run in 8 minutes on the same computers. (Runs in 2 minutes on my laptop.) I broke the problem into pieces and solved each one in order. I calculated that if you just looped without any intelligence, the loops would take quintillions of years (7 to the 40th power loops).
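For the curious, here's a quick back-of-the-envelope check of that estimate. This is only a minimal C# sketch; the rate of a billion loop iterations per second is my own assumption, not part of the original story.

using System;

class BruteForceEstimate
{
    static void Main()
    {
        // 7 choices per slot, 40 slots, enumerated naively.
        double combinations = Math.Pow(7, 40);   // roughly 6.4e33 states
        double loopsPerSecond = 1e9;             // assumed loop rate
        double seconds = combinations / loopsPerSecond;
        double years = seconds / (365.25 * 24 * 3600);

        Console.WriteLine("States to enumerate: {0:E2}", combinations);
        Console.WriteLine("Years at that rate:  {0:E2}", years);
        // Prints something on the order of 1e17 years - hopeless unless
        // the problem is first broken into pieces, as described above.
    }
}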
-
Take a look at this: http://www.schloerconsulting.com/quantum-computer-q-lisp-programming-language[^] And this: http://en.wikipedia.org/wiki/Quantum_programming[^] Do you know why functional programming is the first choice for the next generation of computers? The answer is simple: because it's declarative and it's all about PROBABILITY (quantum computing is about probability), and functional programming is very well suited to calculating probabilities: http://www.schloerconsulting.com/ibm-speech-schloer-slovenia[^] When the hardware changes that much, C, C++, or whatever languages stay close to the machine will die. Everything is now moving to declarative languages. Finally, you may want to contact the author of this blog: http://axisofeval.blogspot.com/2011/01/why-lisp-is-big-hack-and-haskell-is.html[^] He may have loads of information for you.
Hey! Yo!
Thanks for the links, it is very much appreciated :)
-
Wow, that's almost my sig right there in the title :D Although my sig isn't necessarily about programming...
It's an OO world.
public class Naerling : Lazy<Person>{
public void DoWork(){ throw new NotImplementedException(); }
}
-
Honestly, I came across your sig several times already, must have stayed in my head somehow ;)
-
Don't hold your breath; I remember that was a common belief with a number of 4GLs and app-builders in the 1980s.
What you say is right, but there is a big difference between these days and the old ones. Back then, programming languages were not as mature as the ones we have today. In today's languages, when you want to sort an array, all you have to do is call the Sort method without knowing how it does it. I think that with the technologies available these days we can make the jump to the next technology. And to be a little bit biased, we are already building such a technology at our company, and it is good, with a lot of features.
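To illustrate that point, a minimal sketch (nothing company-specific, just the standard library hiding the sorting algorithm from the caller):

using System;

class SortExample
{
    static void Main()
    {
        int[] values = { 42, 7, 19, 3, 25 };

        // The caller asks for a sorted array; the algorithm behind
        // Array.Sort stays an implementation detail of the framework.
        Array.Sort(values);

        Console.WriteLine(string.Join(", ", values)); // 3, 7, 19, 25, 42
    }
}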
-- Hasan Al-Halabi Chief Operation Officer "COO" What's Next! for Business Solutions Queen Rania Str. Building 313, 4th Floor, Office 409 P.O.Box: 143882 Amman 11814, Jordan Mob: 962 7 97958819 Tel: 962 6 5334478 hasanhalabi@whats-nxt.com http://www.whats-nxt.com
-
hoernchenmeister wrote:
Into what direction should I turn my head to maybe fall in love with another paradigm?
My advice is not to fall in love with any paradigm - just use whatever is best for the problem you are trying to solve. I tried to make everything pure OOP when I was young and silly, but then I learned it didn't make much sense. Nobody even agrees on what "pure OOP" is. If you need a class hierarchy, go and make one; if a problem is better solved with a simple function, write one and don't feel guilty about it. There is even a buzzword for the approach I suggest: Multiparadigm Programming[^]
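A tiny made-up C# example of that mix-and-match attitude (the names are hypothetical, of course): a plain static function where no state is involved, and a small class where an object actually earns its keep.

using System;
using System.Collections.Generic;
using System.Linq;

static class Geometry
{
    // Stateless calculation: a simple function, no hierarchy required.
    public static double CircleArea(double radius)
    {
        return Math.PI * radius * radius;
    }
}

class ShoppingCart
{
    // Here an object makes sense: it owns mutable state.
    private readonly List<decimal> prices = new List<decimal>();

    public void Add(decimal price) { prices.Add(price); }
    public decimal Total() { return prices.Sum(); }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(Geometry.CircleArea(2.0)); // 12.566...

        var cart = new ShoppingCart();
        cart.Add(9.99m);
        cart.Add(4.50m);
        Console.WriteLine(cart.Total());             // 14.49
    }
}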
Honestly, the main problem is people not knowing when to use OO. Objects are great, but not the solution to every problem. It also depends on your goals and the platform you're working on.

To give an example: if you're writing code for a small embedded system, you should be careful. Using OO is less effective than other approaches in terms of memory use; that's a well-known fact. And don't even think about garbage collection. Now you might say "just use a microcontroller with more memory". The additional cost on a single device is indeed small, but step into a production environment where you make thousands of units and a few cents more is going to make a large difference. And don't start claiming it's a good idea because Arduino sort of gets away with it.

We also live in a world where power efficiency is becoming important as more and more devices are battery powered, and more powerful microcontrollers and processors don't perform all too well in that respect. The only solution is to program in C or go all the way to assembly. This leads to hard-to-manage code, but there doesn't seem to be much of a middle ground.

Because of that I've grown rather fond of VHDL (and other similar languages, for that matter) lately. It's still quite different from OO programming, but I find it has a lot of similarities as well. Sadly it's not well suited for writing computer programs. Another issue is that most programmers are too bogged down in conventional programming paradigms to become any good at it. But this probably isn't very relevant to most people here.

To sum up my argument: use OO when it gives you an advantage. Don't when it overcomplicates things.
-
Honestly, I came across your sig several times already, must have stayed in my head somehow ;)
Glad I could make a small difference :)
It's an OO world.
public class Naerling : Lazy<Person>{
public void DoWork(){ throw new NotImplementedException(); }
}