Ok, what are the rules?
-
Shog9 wrote:
The Grand Negus wrote:
Polymorphism is built into the current Plain English compiler.
And polymorphism is one of the tenets of OO as well. Indeed, while it is quite useful in a purely procedural language, it is nearly essential in an OO one, as without it you quickly end up with:
The Grand Negus wrote:
"number.converttostring" or "string.converttonumber" or "abstract.convert(number,string)"
string var = number
There's no reason why the assignment operator shouldn't act differently based on context.
Yes, yes. But you've hidden the problem, not solved it. To do the conversion, does the assignment operator call "number.converttostring" or "string.converttonumber" or "abstract.convert(number,string)"? In other words, where is the conversion function defined? (And please don't say "under the assignment operator object" because there are clearly two different operations here: assignment of a value to a compatible container is not the same thing as the conversion of a value from one representation to another.)
The Grand Negus wrote:
To do the conversion, does the assignment operator call "number.converttostring" or "string.converttonumber" or "abstract.convert(number,string)"? In other words, where is the conversion function defined?
Does it matter? This is a concern for the implementation, but why should the programmer wishing to convert a number to a string care about such details? Personally, i wouldn't implement it as a member of either string or number - such status should be reserved for operations clearly in one domain or the other, which a conversion is not. I also feel it's important to distinguish between a conversion (which should be reversible with no loss of information if at all possible) and string formatting (which should offer much more control over the output). I strongly feel conversion operators to be a weak spot in OO languages such as Java or C#, and much prefer the C++ design (which offers a rich set of options, including the ability to define a conversion procedure external to any class).
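To make that concrete, here's a rough sketch of those options in C++ (my own illustration, with a made-up Number type - not code from any real library):

    #include <sstream>
    #include <string>

    struct Number {
        double value;

        // Option 1: a member function - the "number.converttostring" choice.
        std::string to_string() const {
            std::ostringstream out;
            out << value;
            return out.str();
        }

        // Option 2: a conversion operator, so a plain assignment can
        // trigger the conversion: std::string s = Number{3.0};
        operator std::string() const { return to_string(); }
    };

    // Option 3: a free function, external to any class - the design i prefer.
    std::string to_string(const Number& n) { return n.to_string(); }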
---- I just want you to be happy; That's my only little wish...
-
The Grand Negus wrote:
But the first rule given us by the Father of Object Oriented Programming, Alan Kay, is this: "Everything is an object".
As alluded to in my previous post, you don't have to follow the "rules" slavishly or see the problem filtered through only one paradigm. The fact that you need not be a slave to a particular paradigm doesn't diminish its usefulness, however.
Steve
Stephen Hewitt wrote:
The fact that you need not be a slave to a particular paradigm doesn't diminish its usefulness, however.
It does if the language you're using, say C#, demands that paradigm. And when the other road is taken - support all paradigms - you end up with a kludge like C++, which its own creator admits is "too large for most programmers to master".
-
There were a few discussions about rules for programming a few days ago. I am working in a newly started company where there are only two programmers and no one to guide us except CP. So what are the rules which you follow, and which you think I should also follow? :):)
I'll throw in my two cents. First, I don't like the book Code Complete, though I do suggest reading it once in case it has a good idea you haven't thought of. (Like all books on software methodology, it is excessively dogmatic. It's also a little outdated and, unfortunately, does contain some really bad advice; the problem is it's all so jumbled together it's not clear which advice is good and which is bad.) Some of my rules:
* Know your tools, especially your debugger and how to search MSDN.
* Know your APIs. If you're doing C#.NET, spend an hour each day actually reading through the .NET documentation cover-to-cover. The time will come when you remember that .NET already does something, so you don't have to write it yourself - you can just look it up.
* Know your customer. Treat them as if they are always right, even if they aren't.
* Hire the best UI designers and do truly objective usability testing. (Rigged usability testing is worse than no testing at all.)
* Be pragmatic and flexible. No matter how good your design, how well you have implemented your code, or how efficient your process has been, you will never have enough time or resources. The day will come when you spend seemingly endless days hacking your code just to get it to work. You may even find that your design is so fundamentally flawed you have to throw all your hard work away and start over.
* Most "Software Engineers" aren't. Most also overestimate their own capabilities and underestimate just how hard software development is.
* Commercial software takes significantly longer to develop than software destined for in-house use.
* Hire a technical writer early and involve them in all engineering and planning meetings. Have them summarize the results and ensure everyone signs off on them. (Okay, I've had this happen precisely once, but the results were very positive.)
* Testing should never answer to the same manager and/or technical leads as the developers.
* If you have less than five years of solid experience in something, you aren't an expert. Don't claim you are.
* In conjunction with this, if someone who IS a recognized expert gives you advice in response to a direct question, listen to them. (Don't spend days or even weeks trying to write code for which a single API is already provided in Win32. Yup, I had a developer, the same developer, do this to me three times, and yes, he once really did spend two weeks before meekly asking me again what the API was.)
* Have one person in charge who has a vision of what the product will be, and hold them accountable for it.
-
The Grand Negus wrote:
It does if the language you're using, say C#, demands that paradigm. And when the other road is taken - support all paradigms - you end up with a kludge like C++, which its own creator admits is "too large for most programmers to master".
I agree with your statement about C#. I'm fond of C++ however. I'll admit I'm biased (as I'm a C++ programmer), but a multi-paradigm language like C++ is just what is needed if you're going to use multiple programming paradigms. I don't agree with your statement that "you end up with a kludge like C++". It is a hard language to master, but flexibility has a price, and you can make a mess in any language; Plain English included. You don't have to use or understand every language feature to use the language. Many people use C++ without writing template code, for example. The longer you use a language the more features you tend to use.
Steve
-
Jeremy Falcon wrote:
I challenge you to flex your brain power and show all of us real (not abstract) reasons as to why you think this is the case. And I even double challenge you to do it without talking about PEP. Keep in mind, C is my favorite language, and I believe procedural code can be very organized. But I also believe OOP has many merits and don't hesitate to use it if the project calls for it. So, you have your challenge. Whether you take it or leave it is up to you, but since you act like an expert in this field I'd wager this would be like falling off a log. Remember, abstract ideas don't count; those are too easily manipulated to serve an agenda.
How about this argument. We wrote an exceptionally efficient native-code-generating compiler/linker with interface, file manager, dumper, text editor, and wysiwyg page editor using exclusively procedural code and not once during the development did the project suffer from disorganization, unreliability, or unnatural expression and not once during the development were we ever even tempted to think in an object-oriented paradigm. If that doesn't do it for you, I don't think a handful of posts here will help.

When I first started teaching database design many years ago, I wrote into my materials an appendix explaining why the hierarchical and network approaches to database management were less desirable. I soon found, however, that once students mastered the simple and obvious relational approach taught in the course, the appendix ceased to be of interest; so I deleted it. I think the same thing applies here. If you have a particular example where you think object-oriented thinking works better than a procedural approach, however, I'll be happy to dissect it.
The Grand Negus wrote:
How about this argument. We wrote an exceptionally efficient native-code-generating compiler/linker with interface, file manager, dumper, text editor, and wysiwyg page editor using exclusively procedural code and not once during the development did the project suffer from disorganization, unreliability, or unnatural expression and not once during the development were we ever even tempted to think in an object-oriented paradigm. If that doesn't do it for you, I don't think a handful of posts here will help.
You counter Jeremy with this argument, but can't you accept his challenge?
Some people have a memory and an attention span, you should try them out one day. - Jeremy Falcon
-
Amar Chaudhary wrote:
I did program in FoxPro using a procedural approach, and I know that I missed OOP so much at that time. Yes, I built fairly complex programs using it, but if I had had the support of OOP it would have taken much less time. So my points are:
1) OOP saves time.
2) It is easy to debug.
3) It reduces complexity.
4) It makes code easily understandable.
5) In the process of evolution, OOP is winning.
6) Why else is it that more people are using OOP concepts?
7) When everything is an object, how can you escape OOP?
And the big thing: do you know why the dinosaurs went extinct?
If your birthdate here is correct, you are about half my age. Which means I remember things - lived through things - that you haven't.

I remember, for example, when General Motors was the clear winner in the evolution of the automobile industry, and the thought of a Japanese car on American highways was nothing but a joke. More to the point, however, I remember when the hierarchical/network approach to database was almost universally accepted as the best. In the "process of evolution", as you call it, this approach was not only winning, but had virtually won; it was backed by IBM and every other major player at the time and no one else stood a chance. But then along came Dr. Codd with a five-page paper describing the "spartan simplicity" of his relational approach, and things changed. But not right away.

I quote the dedication found in his final book, written some 25 years later: "To fellow pilots and aircrew in the Royal Air Force during World War II and the dons at Oxford. These people were the source of my determination to fight for what I believed was right during the ten or more years in which government, industry, and commerce were strongly opposed to the relational approach to database management." I suspect I'll be writing a similar dedication to my final work 25 years from now.

Now regarding the dinosaurs, let me be blunt. Clearly, you're not old enough, nor have you studied enough, to give me an accurate history of trends and events in data processing just 50 years past. So don't go pretending you know what happened thousands of years ago. For all you know, the dinosaurs might have been destroyed in a cataclysmic flood, and evolution wasn't even a factor.
Yes, it is true that I didn't read or see as many things as you did, but I don't blindly believe what is said and don't just stick to one thing. I am open to change and not rigid about what I believe; I can see things from a different perspective and change myself with time.
The Grand Negus wrote:
when General Motors was the clear winner in the evolution of the automobile industry, and the thought of a Japanese car on American highways was nothing but a joke. More to the point, however, I remember when the hierarchical/network approach to database was almost universally accepted as the best. In the "process of evolution", as you call it, this approach was not only winning, but had virtually won; it was backed by IBM and every other major player at the time and no one else stood a chance. But then along came Dr. Codd with a five-page paper describing the "spartan simplicity" of his relational approach, and things changed.
Yes, that happened. But you see, the fact is that something more flexible and new has taken over. Procedural coding had its own golden age, but it got a fair competitor in design, and users got a new tool to work with. A few users just refused to use the new tool and stuck to the older one, saying that it would not work (OOP is the new tool here, whatever you say), and some people used them both. The difference can be seen in the market: the purely procedural languages changed (implementing the concepts of OOP) to survive. Now, the dinosaurs: do you believe a few floods killed them all, or was it their inability to change? As you are more experienced than me, have you heard of the ice age? Humans survived it, and many more disasters. By the way, we were talking about OOP and procedural. OK, this time my point is the price difference between the two (supporting languages) :)
It is good to be important, but it is more important to be good.
-
Shog9 wrote:
Does it matter? This is a concern for the implementation, but why should the programmer wishing to convert a number to a string care about such details?
Because, I thought, we were talking about the guy who was implementing the function as well as the guy who uses it. It's easy to make a case for something if you leave out one whole side of the story.
Shog9 wrote:
Personally, i wouldn't implement it as a member of either string or number - such status should be reserved for operations clearly in one domain or the other, which a conversion is not.
Agreed, but the "personally" at the beginning of your sentence supports my point - with objects there's a choice and the "right" answer isn't clear (to all). With Plain English, the question never arises. Furthermore, there are many such operations that are not "clearly in one domain or the other". Is "Write a string on the console" part of the string domain, or the console domain, or both, or neither? How about "Write a string to the printer"? In a true object-oriented language, such operations, if not placed under something, require the addition of "abstract" constructs and additional keywords, etc. But Plain English handles all of these cases, naturally and efficiently, with no additional parts. And the guy with the fewest parts wins, right, Occam?
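In C++ terms, the point reads as a free procedure that belongs to neither domain (a hypothetical sketch; Console and Printer are made-up types):

    #include <iostream>
    #include <string>

    struct Console {};
    struct Printer {};

    // The operation takes a string and a device; it is owned by neither.
    void write(const std::string& s, Console&) { std::cout << s << '\n'; }
    void write(const std::string& s, Printer&) { /* spool to the printer here */ }

    int main() {
        Console c;
        write("Hello there.", c); // resolved by the device argument, not by class membership
    }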
Shog9 wrote:
I also feel it's important to distinguish between a conversion (which should be reversible with no loss of information if at all possible) and string formatting (which should offer much more control over the output).
Okay with us. In the current version of Plain English, we typically use "put" for assignments (with any necessary, implied, reversible conversions); we use the word "convert" otherwise, and underneath the "puts". So "put 3 into a string" will include an automatic call to the appropriate "convert" function; "convert a number to pdf em units given an emsquare number and a font" does the conversion directly. Both statements compile and run as you see them.
Shog9 wrote:
I strongly feel conversion operators to be a weak spot in OO languages such as Java or C#, and much prefer the C++ design (which offers a rich set of options, including the ability to define a conversion procedure external to any class).
As I said in another post...
-
By the way: you might get a kick out of this interview with Stroustrup. Lots of grumbling about people who insist on putting everything in a class hierarchy. The C++ Style Sweet Spot[^]:
I've been preaching this song for the better part of 20 years. But people got very keen on putting everything in classes and hierarchies. I've seen the Date problem solved by having a base class Date with some operations on it and the data protected, with utility functions provided by deriving a new class and adding the utility functions. You get really messy systems like that, and there's no reason for having the utility functions in derived classes. You want the utility functions to the side so you can combine them freely. How else do I get your utility functions and my utility functions also? The utility functions you wrote are independent from the ones I wrote, and so they should be independent in the code. If I derive from class Date, and you derive from class Date, a third person won't be able to easily use both of our utility functions, because we have built dependencies in that didn't need to be there. So you can overdo this class hierarchy stuff.
:)
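In code, the shape he's arguing for looks something like this (a toy Date of my own, not Stroustrup's):

    class Date {
    public:
        Date(int y, int m, int d) : y_(y), m_(m), d_(d) {}
        int year() const { return y_; }
        int month() const { return m_; }
        int day() const { return d_; }
    private:
        int y_, m_, d_;
    };

    // Utility functions "to the side", as free functions: yours and mine stay
    // independent, and a third person can use both without merging two hierarchies.
    bool is_first_of_month(const Date& d) { return d.day() == 1; }
    Date next_year(const Date& d) { return Date(d.year() + 1, d.month(), d.day()); }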
---- I just want you to be happy; That's my only little wish...
-
The Grand Negus wrote:
Because, I thought, we were talking about the guy who was implementing the function as well as the guy who uses it. It's easy to make a case for something if you leave out one whole side of the story.
If we can accept that the conversion be made implicit based on context, then it doesn't matter what the guy implementing it does. He might put the conversion under a class or namespace hierarchy, standalone, or even build it into the compiler as a block of anonymous machine code spit out wherever such a conversion is required. It shouldn't make a bit of difference to the user.
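A sketch of what i mean (made-up types again, assuming an implicit-conversion design): the caller writes an ordinary assignment and never sees where the conversion is defined:

    #include <sstream>
    #include <string>

    struct Number {
        double value;
        // This could just as well be a free function or compiler-generated
        // glue; nothing at the call site below would change.
        operator std::string() const {
            std::ostringstream out;
            out << value;
            return out.str();
        }
    };

    int main() {
        Number n{3.0};
        std::string s = n; // "string var = number" - context picks the conversion
    }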
The Grand Negus wrote:
In a true object-oriented language
Ah, well - i've no use for a pure OO language. I'm sure such things are of academic interest, but such constraints do little for me. OO is great in certain areas, for certain tasks... but i've no interest in trying to make everything an object.
---- I just want you to be happy; That's my only little wish...
-
The Grand Negus wrote:
I think it's the term "utility function" that gives the lie to the object approach.
At the end of the day, you still need something to get the work done. It's at the point where it stops being a useful organizing technique and starts to intrude upon my efforts to actually accomplish anything that i abandon OO.
---- I just want you to be happy; That's my only little wish...
-
The Grand Negus wrote:
But think a moment. English can be used to write anything from a love letter, to a post on CodeProject, to a native-code generating compiler. Why bother with anything else?
For the same reason mathematicians don't: for some purposes English is either too verbose, too vague (open to many interpretations), too hard to manipulate, or all three. In a mathematical proof, for example, there'll be both English and formal symbolic notation. It's not a matter of one being better than the other: just that they both have their strengths and weaknesses, and you have to know when to use which. It's similar to the multi-paradigm discussion we were having before; when all you've got is a hammer, everything looks like a nail.
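For instance (a toy example of mine, in LaTeX), a typical proof frames the symbolic steps in English sentences:

    % English carries the argument; the notation does the precise work.
    \begin{proof}
    Suppose $n$ is even, so $n = 2k$ for some integer $k$. Then
    \[ n^2 = (2k)^2 = 4k^2 = 2(2k^2), \]
    which is even. Hence the square of an even number is itself even.
    \end{proof}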
Steve
-
Amar Chaudhary wrote:
Now, the dinosaurs: do you believe a few floods killed them all, or was it their inability to change? As you are more experienced than me, have you heard of the ice age? Humans survived it, and many more disasters.
There's a lot of evidence that the dinosaurs were unable to recover after a watery cataclysm. But it's hard to get good data from so far back. The problem with cataclysms is that organisms perfectly adapted to one environment are often not at all suited to another - like the environment that emerges following a cataclysm. It's like training yourself to be a chess champion and then having to deal with a bully in the park who kicks the board over. As Solomon said, "I have seen under the sun that the race is not always to the swift, nor the battle to the strong, nor bread to the wise, nor riches to men of understanding... but time and chance happens to them all".
Amar Chaudhary wrote:
OK, this time my point is the price difference between the two (supporting languages)
I'm not sure what you're asking here. But if you're asking if we can write a program better, faster and cheaper in Plain English than in any other language, the answer is a definite "yes".
-
Shog9 wrote:
but i've no interest in trying to make everything an object.
Good. But how about making everything Plain English? It's the language millions use every day to program their dogs!
But dogs have intelligence whereas computers don't. If you tell a computer to do something stupid it will go off and do the wrong thing at 3 GHz and possibly make a hell of a mess before you can stop it. A dog on the other hand will use his intelligence to read between the lines of your incomplete description (a dog probably wouldn’t understand a more rigid description anyway) and figure out what you actually want as opposed to what you said.
Steve
-
The Grand Negus wrote:
we can write a program better, faster and cheaper in Plain English than in any other language, the answer is a definite "yes"
Uh huh, sure :rolleyes:
If you try to write that in English, I might be able to understand more than a fraction of it. - Guffa
-
Stephen Hewitt wrote:
In a mathematical proof, for example, there'll be both English and formal symbolic notation. It's not a matter of one being better than the other: just that they both have their strengths and weaknesses, and you have to know when to use which.
Agreed. But note something important here. The framework of such a proof is almost always a natural language, like English. The formulae are written in a specialized sub-language of the natural language. In other words, English is "bigger" than mathematical notation. Not better, bigger. It's easy, for example, to think of American English including the way Americans typically write numbers or simple equations - it's hard to imagine the reverse. And that's what we're proposing regarding Plain English (and which we've spelled out in other places). Our Plain English Machine, the PAL 3000, will understand not only English, but various forms of formulae and other programming languages as well. But the machine's native tongue will be English. And we're emphasizing this part of the problem because, frankly, the other parts (how to parse equations and compile C#) have already been solved.
-
Stephen Hewitt wrote:
If you tell a computer to do something stupid it will go off and do the wrong thing at 3 GHz and possibly make a hell of a mess before you can stop it. A dog on the other hand will use his intelligence to read between the lines of your incomplete description (a dog probably wouldn’t understand a more rigid description anyway) and figure out what you actually want as opposed to what you said.
Not always. When I was a kid, the drummer in our band liked to put a speaker at one end of a room, grab a microphone, and stand at the other end of the room: then call his dog. The poor beast would run in circles (at 3 Hz) in the middle of the room until one of the other band members would take pity and turn off the amplifier. What you say is a matter of degree, not kind. Our compiler, in many situations, can figure out what you actually want as opposed to what you said even in its current incarnation. For example, if you say "Draw a circle at the screen" instead of "on the screen", it will figure it out. If you tell it to draw a "frame", it will reduce "frame" to "rectangle" and call the appropriate routine. If you fail to specify a color, it will pick its favorite - not unlike a kid.