Hungarian notation
-
I've noticed that the C# folks at Microsoft have promoted a different naming convention that uses no variable type prefix. At the same time, I've observed that it's now trendy for people to dislike Hungarian notation. When I first started Windows programming, Hungarian was indeed strange to get used to. But then, so was the Windows API. However, these days when I look at variable names without it and am left to either guess or search through the code to determine what the variable type is, I find myself thinking that these variable names are only one step removed from the old Basic days of names such as A, B, etc. Why would a straightforward and easy to grasp system of conveying crucial information to the programmer at a glance suddenly become so unpopular? Is there technical reasoning behind it, or is it just a new generation who feels that they must do things differently than those who came before in order to proclaim their identity?
Author of The Career Programmer and Unite the Tribes www.PracticalStrategyConsulting.com
Lately, the only prefixes I favor are ones implying the SCOPE of a variable: m_ means it is a member variable of an object, s_ means it is static and local to the source file, and g_ means it is global - probably to the entire application. That lets you know what you are dealing with if you go to make changes. Other than that, give it a good name.
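For example, a quick sketch of what those scope prefixes look like in practice (names invented for illustration):

int g_openWindowCount = 0;        // g_ : global - visible to the entire application

static int s_parseErrors = 0;     // s_ : static, local to this source file

class Widget
{
public:
    void Reset() { m_isDirty = false; }
private:
    bool m_isDirty = false;        // m_ : member variable of an object
};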
-
Slow day at the office Chris? Looking for a quick flamewar? :)
Christopher Duncan wrote:
Why would a straightforward and easy to grasp system of conveying crucial information to the programmer at a glance suddenly become so unpopular?
That of course is most certainly a matter of opinion. My opinion is that:
a) it's neither straightforward nor easy
b) it can easily get out of date, i.e. the int changes to a double, the char* changes to a string object, etc. (see the sketch below)
c) it makes variable names look ugly - it's just not aesthetically pleasing to the eye
d) everyone seems to have their own variations on it
My rule is that if you can't figure out what the variable is from its name, then it's probably got a bad name, and Hungarian notation won't help much here, other than to add more letters to the name :)
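To illustrate (b), a purely hypothetical pair of declarations of the kind that shows up after a few rounds of maintenance:

#include <string>

// Day one: int iTimeout; char* szName; - the prefixes matched the types.
// A year later the types have changed, but nobody renamed the variables:
double iTimeout = 2.5;              // "i" still claims it's an int
std::string szName = "unknown";     // "sz" still claims it's a zero-terminated char*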
The devil is in my pants! Look, look! Real Mentats use only 100% pure, unfooled around with Sapho Juice(tm)! SELECT * FROM User WHERE Clue > 0 0 rows returned Save an Orange - Use the VCF! Techno Silliness
Jim Crafton wrote:
Looking for a quick flamewar?
Not at all! Just something I've been wondering about for quite some time now, figured this would be the most qualified group to ask. I probably should have added that just like the choice of languages and programmers' editors, naming conventions are also a religious issue with no "right" or "wrong" to them. That said, I'm willing to bet that if the MS C# team had come up with a variable naming convention that required each variable to start with the number 42, it would soon be the prevalent method used, and anything else would be considered old fashioned and uncool. :)
Author of The Career Programmer and Unite the Tribes www.PracticalStrategyConsulting.com
-
I like Hungarian notation, and I use it in my C++ code. However, when I began to program in C#, I realized that it was OK to use HN for numbers and strings, yet I found that using the "obj" prefix was redundant... since everything is an object. So I think that HN is kinda useless in C# except for numeric variables, since I can either begin to create new prefixes for many classes (e.g. strm, xml, evnt) and give birth to a monster, or I can simply avoid HN. The solution for me was obvious though: I stopped programming in C# and returned to good old C++ :D
A polar bear is a bear whose coordinates have been changed in terms of sine and cosine. Personal Site
-
I guess since you can now just hover your mouse over a variable and a tooltip will show you the info you need, Hungarian notation becomes a little redundant.
Michael CP Blog [^] Development Blog [^]
A reasonable point. However, not everyone writes code in the IDE. In fact, I'm continually surprised that anyone does. :)
Author of The Career Programmer and Unite the Tribes www.PracticalStrategyConsulting.com
-
Christopher Duncan wrote:
I'm willing to bet that if the MS C# team had come up with a variable naming convention that required each variable to start with the number 42, it would soon be the prevalent method used, and anything else would be considered old fashioned and uncool. :)
Not at all; unless the compiler enforces it (in which case it's not a "convention" anyway), you can do as you wish.
-
In my opinion Hungarian notation was a bad idea to begin with. If I make a class called Address, how do I define a variable of that type? adMyAddress? addrMyAddress? It is totally arbitrary anyway. For built-in types such as int, char, etc. it is annoying too. Suppose you code up your application using a variable "total" and decide to declare it as an int. So you have a variable called iTotal. Suppose later a requirement comes to make it a float. Now every occurrence in the code needs to be changed to fTotal. It adds unnecessary overhead in my opinion, and makes your variable names ugly.
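A quick sketch of both problems (all names invented):

class Address { };

// Which prefix is "right"? Every coder picks a different one:
Address adMyAddress;
Address addrMyAddress;

// And the rename problem:
int   iTotal = 0;   // today's requirement
float fTotal = 0;   // tomorrow's requirement - every use of iTotal in the
                    // code now has to be edited to read fTotal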
zoid ! wrote:
Suppose you code up your application using a variable "total" and decide to declare it as an int. So you have a variable called iTotal. Suppose later a requirement comes to make it a float. Now every occurrence in the code needs to be changed to fTotal.
You're going to have to change all those references to a float anyway, so changing the name of the variable does not add any more work.
"Approved Workmen Are Not Ashamed" - 2 Timothy 2:15
"Judge not by the eye but by the heart." - Native American Proverb
-
Christopher Duncan wrote:
Why would a straightforward and easy to grasp system of conveying crucial information to the programmer at a glance suddenly become so unpopular?
Because it was unnatural to begin with. It's the compiler's job - not the programmer's - to keep track of internal data types and to perform appropriate conversions as required. "Crucial information" at a low level is nothing but distracting noise at a high level. Which of these statements, for example, is simpler and clearer: "Put the pen in the drawer." or "vPut aThe nPen pIn aThe nDrawer."? Obviously, the first. And this technique - allowing the lower level systems to handle the details of data type recognition and conversion - obviously works or you wouldn't be able to understand this very post.
-
Christopher Duncan wrote:
Why would a straightforward and easy to grasp system of conveying crucial information to the programmer at a glance suddenly become so unpopular?
Suddenly? I've hated it since I first saw it - any excuse to ditch it is fine by me... FWIW: the way I heard it explained, The Mad Hungarian originally came up with The Notation as a way to convey meaning as to how the variable would be used. So integers that store coordinates get a different prefix than integers storing measurements which are different than loop counters... This actually makes a bit of sense, if you can be consistent. But the number of times I've seen that done correctly and consistently... well, I could probably count it on the fingers of one foot. Add in all the shitty code out there using incorrect or misleading prefixes, and it becomes an active hindrance. Also, it isn't really IntelliSense-friendly.
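Roughly what I mean - the prefixes record use, not type (an invented sketch using a few of the classic prefixes):

// All four are plain ints; only the prefix says how each is used:
int xButton   = 10;   // x   : a horizontal screen coordinate
int dxButton  = 64;   // dx  : a width (difference of two x values)
int cchName   = 32;   // cch : a count of characters
int iEmployee = 0;    // i   : an index into a table

// Done consistently, an expression like xButton + cchName looks suspicious
// at a glance, even though the compiler is perfectly happy with it.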
---- Scripts i’ve known... CPhog 1.8.2 - make CP better. Forum Bookmark 0.2.5 - bookmark forum posts on Pensieve Print forum 0.1.2 - printer-friendly forums Expand all 1.0 - Expand all messages In-place Delete 1.0 - AJAX-style post delete Syntax 0.1 - Syntax highlighting for code blocks in the forums
Shog9 wrote:
The Mad Hungarian originally came up with The Notation as a way to convey meaning as to how the variable would be used. So integers that store coordinates get a different prefix than integers storing measurements which are different than loop counters...
Yes, I'd read that too. Then everyone, including MS, got the wrong end of the stick and started using it to denote type.
-
The Grand Negus wrote:
Put the pen in the drawer. vPut aThe nPen pIn aThe nDrawer.
Drawer->PutPen()
would be better than both of them :) Because this doesn't really require a full understanding of English grammar and sentence semantics.
Regards, Nish
Nish’s thoughts on MFC, C++/CLI and .NET (my blog)
Currently working on C++/CLI in Action for Manning Publications. (*Sample chapter available online*)
-
You don't follow our weekly polls, do you? :) It is funny how people get passionate about such insignificant things as code styles. I tend to use whatever is "standard" for the technology I work on at any moment: for platform-independent or Unix-only C++ it is K&R style, for Windows system programming it is Hungarian, for .NET I follow the MS recommendations (also enforced by FxCop, IIRC), etc, etc...
-
Nishant Sivakumar wrote:
Drawer->PutPen() would be better than both of them
Not really - it is not the drawer that performs the operation :)
-
Nemanja Trifunovic wrote:
Not really - it is not the drawer that performs the operation
Maybe :
Pen->PutInDrawer()
then :-)
Regards, Nish
Nish’s thoughts on MFC, C++/CLI and .NET (my blog)
Currently working on C++/CLI in Action for Manning Publications. (*Sample chapter available online*)
-
Nishant Sivakumar wrote:
Drawer->PutPen() would be better than both of them. Because this doesn't really require a full understanding of English grammar and sentence semantics.
I hope you're kidding. First of all, show that statement to any non-programmer and see if they don't think the arrow is backwards. Secondly, remember that millions of English speakers who don't have a "full understanding of English grammar and sentence semantics" communicate quite effectively, in English, every day. Natural languages work, even when they're poorly used and/or not fully understood by the speakers. That's why everybody uses them. Even you.
-
Christopher Duncan wrote:
However, not everyone writes code in the IDE.
Especially when reading article code
Engaged in learning of English grammar ;)
For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life. (John 3:16) :badger:
-
I hated Hungarian notation from day one. The one thing I was already doing was prefixing all pointers with a lowercase "p", since this distinction is critical in C and C++ programming. Again, I was already doing this! One of the first things I do when going through a template-generated MFC project is to remove all the Hungarian notation. Have been doing it for years. (I also remove all the dumb comments littering the files and fix the indenting.) In C++, one of the problems with Hungarian notation is that a type may change. I can't count the number of times the notation makes no sense: like a variable named nSomething being a bool, or szName being a CString, or bCaptured being an int, and so forth. (Yes, this could be fixed by search and replace, but had the variable been named plainly to begin with, it wouldn't matter.) (Besides, as the Wikipedia article pointed out, the original main idea was to convey the purpose of a variable with notation, not its type.)
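The sort of declarations I mean - reconstructed from memory rather than from any one project, and assuming MFC for CString:

#include <afx.h>   // CString (MFC)

bool    nSomething = false;   // "n" suggests a numeric count, but it's a bool
CString szName;               // "sz" suggests a zero-terminated char array, but it's a CString
int     bCaptured = 0;        // "b" suggests a bool, but it's an int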
Anyone who thinks he has a better idea of what's good for people than people do is a swine. - P.J. O'Rourke
-
zoid ! wrote:
Suppose later a requirement comes to make it a float. Now every occurrence in the code needs to be changed to fTotal.
And how often do you do that without then going to every place you use the variable, to make sure you're not losing precision, generating overflows, truncating, etc.? Or do you just change the type and hope for the best? Point is: changing the type is not a one-spot change. You should be revisiting all the code that uses that variable, which gives you a chance to change the name while you're at it. And when you get right down to it, changing the name is going to make it very easy for you to find all those places, because the compiler is going to angrily point out each and every one of them for you.
image processing toolkits | batch image processing | blogging
You do it quite often while prototyping / designing a class. As someone pointed out earlier, if you choose proper variable names you don't need to indicate their type. "userName" should never be a float. It is obvious that you need to go and inspect the code, but now in addition to inspection you need to perform modification as well.
-
Nishant Sivakumar wrote:
Maybe : Pen->PutInDrawer() then
Hehehe, but the pen is not performing the operation either. I would go with something like:
Nish->Put(pen, drawer);
-
You're going to have to change all those references to a float anyway, so changing the name of the variable does not add any more work.
Really? There are many cases where it is a lot more work. A (very contrived) example:
void PerformSomeAlgorithm(int* piValue, int count)
{
    for (int i = 1; i < count; ++i)
    {
        piValue[i] = piValue[i - 1];
        if (piValue[i] > someConstant)   // someConstant: some threshold defined elsewhere
            piValue[i] *= 2;
        // ...
    }
}
Now let's change the function to PerformSomeAlgorithm(float* pfValue)...
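With Hungarian notation the rename ripples through every line of the body (a sketch, with the same made-up someConstant):

void PerformSomeAlgorithm(float* pfValue, int count)
{
    for (int i = 1; i < count; ++i)
    {
        pfValue[i] = pfValue[i - 1];      // was piValue[i] = piValue[i - 1];
        if (pfValue[i] > someConstant)    // was piValue[i]
            pfValue[i] *= 2;              // was piValue[i]
        // ...
    }
}

Had the parameter simply been called "value", only the signature line would have needed to change.
-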
In some cases Hungarian notation is still used to name controls on forms and webforms, like "textboxName" or "checkboxIsFlameWar".
Christopher Duncan wrote:
I find myself thinking that these variable names are only one step removed from the old Basic days of names such as A, B, etc.
It's not a common situation. Usually you can determine what the variable type is with IntelliSense or by reading the context (not true for JavaScript, of course). For example
Engaged in learning of English grammar ;)
For God so loved the world, that he gave his only begotten Son, that whosoever believeth in him should not perish, but have everlasting life. (John 3:16) :badger:
-
MS said Jump Left, they all jumped left. MS said Jump Right, they all jumped right.
image processing toolkits | batch image processing | blogging
I think we were all waiting to jump away from that awful Hungarian notation, anyways. Microsoft just gave us an excuse to ditch it for good.
Tech, life, family, faith: Give me a visit. I'm currently blogging about: Check out this cutie The apostle Paul, modernly speaking: Epistles of Paul Judah Himango