Hungarian Notation in .Net - Yes or No?
-
jhwurmbach wrote:
With any recent IDE, just holding the cursor over the variable name shows the type. I can't see the point in prefixing rubbish to the name.
Try doing that when someone's pasted some code into a question in one of the programming forums. That's when Hungarian is really useful.
In that case, the code snippet would be so short that the declaration would be immediately above it. All in all, both your and James R. Twine's arguments seem a little far-fetched to me.
Failure is not an option - it's built right in.
-
I was never into Hungarian Notation to begin with, but... Look at C# 3.0's "var" keyword. Examples here[^] about 1/2 down. I think the var keyword might be a reason to bring back Hungarian Notation, because you have no idea what type the var is unless you look at the initializer. Now, from what I've heard from others, the var keyword is really just a convenience for not typing in the complete type (which I disapprove of), but I suspect there are better uses. Marc
The biggest reason I've seen for it has to do with generic methods, the best example being the C++ STL. Every container class defines a set of typedefs for the types it returns, so you can create variables of those types with just the container as a parameter. If the developer of the original class leaves off the typedef, then you can't use the return type in a generic method.
// Each class exposes a typedef ("number") for the type its f() returns,
// so generic code can declare variables of that type.
class A
{
public:
    typedef int number;
    int f() { return 1; }
};

class B
{
public:
    typedef double number;
    double f() { return 1.0; }
};

template <typename X>
void f(X x)
{
    typename X::number n = x.f();  // impossible if the author left out the typedef
}

Instead of typename X::number, var would be much more useful.
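A rough C# 3.0 analogue of that point (a toy Dump helper, names made up): with generics there is no member typedef to fall back on, and var fills the gap.

using System;
using System.Collections.Generic;

class TypedefAnalogue
{
    // var spares generic code from spelling out KeyValuePair<TKey, TValue>;
    // the inferred type depends on the method's type parameters.
    static void Dump<TKey, TValue>(Dictionary<TKey, TValue> map)
    {
        foreach (var pair in map)   // pair is KeyValuePair<TKey, TValue>
        {
            Console.WriteLine("{0} = {1}", pair.Key, pair.Value);
        }
    }

    static void Main()
    {
        Dictionary<string, int> ages = new Dictionary<string, int>();
        ages.Add("Ann", 30);
        ages.Add("Bob", 25);
        Dump(ages);
    }
}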
This blanket smells like ham
-
I recently posted a C# article and, as a C++ programmer is wont to do, I used Hungarian notation on all my variables. This has generated a bit of discussion in the comments for the article, and, not wanting to be completely closed-minded about it, I decided to google it. Here's what I've found so far:
0) It seems that Microsoft thinks we should abandon it in favor of "more natural English-like" variable names. The best response to that statement was this little gem: "If Microsoft said I shouldn't comment my code, it wouldn't stop me from doing that, either." I could not have said it better myself. Maybe this outlook by Microsoft is why Vista is such garbage, or why the Orcas Beta 2 is so transiently reliable. Just because some self-important evangelist from Microsoft says it doesn't make it gold. Translation: this claim is pretty weak. This is from Microsoft's coding guidelines:
Use names that describe a parameter's meaning rather than names that describe a parameter's type. Development tools should provide meaningful information about a parameter's type. Therefore, a parameter's name can be put to better use by describing meaning. Use type-based parameter names sparingly and only where it is appropriate.
It looks to me like they're putting the emphasis on reading code squarely on the end user instead of the developer. Hello!? We're programmers, and we can't be bothered by trying to figure out what type a variable is supposed to be. Sure, code should be easy to read, but that trait is introduced with meaningful variable and function names, not by removing ancillary information about the variables being used.
1) If you change the variable's type, it all of a sudden invalidates the name of the variable. Ever heard of Find/Replace (with case matching and whole word turned on)? Besides, I can count on one hand how many times I've changed the type of a variable in the last 18 years of C++ work.
2) It puts the emphasis on the type instead of the descriptive identifier name and encourages poor variable names. Ummm, how can a single lowercase character move the emphasis from the following variable name to the type itself? Further, Hungarian notation in no way promotes the creation of "poor variable names". I can't recall ever hearing a programmer say, "Yep, I'm using Hungarian notation, so that means I can skimp on the rest of the variable name." There are other equally invalid reasons put forth by all manner of know-it-alls, but I got bored typing this stuff.
------
I don't mind Hungarian notation (the false kind that's practiced now, or the original kind[^]), which, ironically for you, was invented by a Microsoft programmer. Personally, once I started coding in C#, I abandoned all Hungarian notation and found it quite liberating. It now seems superfluous to put a character at the beginning of each variable just to define the type. Simply put, I've never once thought to myself, "Gee, is it a boolean? Or a string?"
Tech, life, family, faith: Give me a visit. I'm currently blogging about: Orthodox Jews are persecuting Messianic Jews in Israel (video) The apostle Paul, modernly speaking: Epistles of Paul Judah Himango
-
In that case, the code snippet would be so short that the declaration would be immediately above it. All in all, both your and James R. Twine's arguments seem a little far-fetched to me.
Failure is not an option - it's built right in.
What? Are you new here? When did you last see someone asking questions in one of the programming forums who bothered to include declarations in their code snippets? My point is that yes, Hungarian notation is pretty unnecessary in an IDE. That's not the only place you ever see code, however.
-
I've been at my job (my first since graduating college) for about a year now, and I don't know what I'd do without the notation. I'm mostly dealing with code that has somewhere along the lines of 10,000-line functions. Having to scroll up to check the types of variables only used once would be beyond a pain in the a**. So hurray for Hungarian Notation for saving me precious time and facilitating my laziness to scroll up.
[Insert Witty Sig Here]
VonHagNDaz wrote:
has somewhere along the lines of 10,000-line functions
Don't plan around abuses. Hungarian notation is superfluous in any sane-sized method.
Tech, life, family, faith: Give me a visit. I'm currently blogging about: Orthodox Jews are persecuting Messianic Jews in Israel (video) The apostle Paul, modernly speaking: Epistles of Paul Judah Himango
-
I think you just hit the issue on its head. If Hungarian has value, use it; if not, don't. 10,000-line methods (just breathe easy and relax, just breathe easy and relax, jus ..) give Hungarian notation value, whereas for 20-line methods it provides none, since the variable declarations are still visible. Add to that the lack of pointer indirection in .NET (one of the more egregious uses of the notation in C++, it seems), an advanced IDE and the overwhelming profusion of objects, and Hungarian notation seems to have little application. Still, I have worked in places where it was not just used but patently abused. I should really post a wtf on that memory :)
I'm largely language agnostic
After a while they all bug me :doh:
MidwestLimey wrote:
If Hungarian has value, use it
I remember a blog or article on Hungarian notation in which the author decried the common use of it to denote primitive types, in contrast to its inventor's original intention of indicating type and purpose, so proper use is (excuse my C/C++) lpcStr or something like that. I still use the concept in UI, but to break with my grimy VB6 past, I now prefix TextBox control names with 'text', e.g. textSurname, and so on: optionYes, optionNo, etc.
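In C#, that control-naming habit looks something like this (a hypothetical WinForms form, names made up for illustration):

using System;
using System.Windows.Forms;

// Controls prefixed by role rather than by type abbreviation
// (textSurname instead of txtSurname, optionYes instead of radYes).
public class MemberForm : Form
{
    private TextBox textSurname = new TextBox();
    private RadioButton optionYes = new RadioButton();
    private RadioButton optionNo = new RadioButton();

    public MemberForm()
    {
        optionYes.Text = "Yes";
        optionNo.Text = "No";
        textSurname.Top = 10;
        optionYes.Top = 40;
        optionNo.Top = 70;
        Controls.Add(textSurname);
        Controls.Add(optionYes);
        Controls.Add(optionNo);
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new MemberForm());
    }
}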
-
Craster wrote:
What? Are you new here?
Sort of - only 1.2K posts. Oh my God!:omg: *runs off to the "Anonymous CodeProjecters" meeting* The kind of people who post incorrect code snippets to the forum are those who would not even know there is such a thing as Hungarian notation, much less how to use it correctly. So in that case, I think the whole procedure is of little value.
Failure is not an option - it's built right in.
-
Marc Clifton wrote:
I think the var keyword might be a reason to bring back Hungarian Notation, because you have no idea what type the var is unless you look at the initializer.
Why use var if you know the type so quickly?
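To make the var point concrete, here is a minimal C# 3.0 sketch (the data is made up): with an ordinary query the type is only visible on the right-hand side, and with an anonymous type there is no name to write at all.

using System;
using System.Collections.Generic;
using System.Linq;

class VarDemo
{
    static void Main()
    {
        int[] scores = { 90, 75, 60 };

        // Explicitly typed: the declaration tells the reader the type.
        IEnumerable<int> passingExplicit = scores.Where(s => s >= 70);

        // Implicitly typed: the compiler infers IEnumerable<int> from the
        // initializer; a reader has to look right (or hover in the IDE).
        var passing = scores.Where(s => s >= 70);

        // Here var is required, because the anonymous type has no name.
        var summary = new { Count = passing.Count(), Max = scores.Max() };

        Console.WriteLine(passingExplicit.Count());
        Console.WriteLine(summary.Count + " " + summary.Max);
    }
}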
-
:) I too went to .NET from a Win32/MFC background, and found it strange - userName instead of strUserName felt awkward at first, but now it's the other way round. Besides, if you were to continue using Hungarian, your variable names would clash savagely with the framework's names (and others' ;) ). I'll tell you what I do not like about the .NET naming conventions, though - I think camelCase is way better for method names. [ducks]
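For what it's worth, a tiny side-by-side sketch of the two habits just described (names invented for illustration):

using System;

class NamingStyles
{
    // Hypothetical helper, just to have something to call.
    static string GetUserName() { return Environment.UserName; }

    static void Main()
    {
        // Win32/MFC habit: type prefix on the variable.
        string strUserName = GetUserName();

        // Typical .NET style: camelCase locals, PascalCase methods, no prefix.
        string userName = GetUserName();

        Console.WriteLine(strUserName == userName);
    }
}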
John Simmons / outlaw programmer wrote:
the ORCAS Beta 2 is so transiently reliable
I'd never install MS (or for that matter, most) betas. Unless the client wanted something to be oh-so-ready for the future.
Cheers, Vıkram.
After all is said and done, much is said and little is done.
Vikram A Punathambekar wrote:
I'll tell you what I do not like about the .NET naming conventions, though - I think camelCase is way better for method names. [ducks]
You better duck! :suss:
-
John Simmons / outlaw programmer wrote:
It seems that Microsoft thinks we should abandon it in favor of "more natural English-like" variable names.
Here's a blog post from one of the early C# team members. http://blogs.msdn.com/ericgu/archive/2007/02/01/properties-vs-public-fields-redux.aspx[^] If you are writing a library that will get used by a number of different languages, all the Hungarian notation is ugly and confusing. If your code isn't going to be public at some level, just do what makes everyone happy.
This blanket smells like ham
-
IMHO, it is sad that people waste so much energy on trivialities. Hungarian or not - who cares, as long as you use it consistently. The use of brain-damaged .NET 1.x collections is a much graver sin than any style issue one could imagine.
Nemanja Trifunovic wrote:
brain-damaged .NET 1.x collections
What do you mean by that? The non-generic ones?
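If that means the non-generic ones, a minimal sketch of the difference (values made up): everything goes in and out as object, with boxing and run-time casts.

using System;
using System.Collections;
using System.Collections.Generic;

class CollectionsDemo
{
    static void Main()
    {
        // .NET 1.x style: untyped, value types get boxed,
        // and a wrong cast only fails at run time.
        ArrayList oldList = new ArrayList();
        oldList.Add(42);
        int a = (int)oldList[0];

        // .NET 2.0 generics: typed, no boxing, checked at compile time.
        List<int> newList = new List<int>();
        newList.Add(42);
        int b = newList[0];

        Console.WriteLine(a + b);
    }
}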
-
John Simmons / outlaw programmer wrote:
more natural english-like
Looks like the Negus has infiltrated Microsoft.
Gary Kirkham Forever Forgiven and Alive in the Spirit He is no fool who gives what he cannot keep to gain what he cannot lose. - Jim Elliot Me blog, You read
That's where he's been, under cover!
-
In that case, the code snippet would be so short that the declaration would be immediately above it. All in all, both your and James R. Twine's arguments seem a little far-fetched to me.
Failure is not an option - it's built right in.
You have to consider that different people work in different situations and have different experiences. Some people go through their lives never needing a gun, some have never had to wade through 100KB+ of source code (a single file!) trying to resolve merge conflicts, and some have never had to code using vi over a 1200-baud serial terminal server connection. Remember, just because you cannot see a reason for something does not mean that others do not know differently (or better). Tools are great, do not get me wrong, but they do not take the place of well-written code. Done correctly, I believe that notation (prefixing or just "good naming") goes a long way here. Peace!
-=- James
Please rate this message - let me know if I helped or not! * * *
If you think it costs a lot to do it right, just wait until you find out how much it costs to do it wrong!
Avoid driving a vehicle taller than you and remember that Professional Driver on Closed Course does not mean your Dumb Ass on a Public Road!
See DeleteFXPFiles
-
John Simmons / outlaw programmer wrote:
I recently posted a C# article and, as a C++ programmer is wont to do, I used Hungarian notation on all my variables.
I hate Hungarian notation. Hate, hate, hate, hate... The fact that seven years of coding on a team where it was The Standard have made it almost a habit for me just makes me hate it even more. The original idea was OK: tag variables with codes that indicate what sort of data they will be used for - not a basic compiler-defined type, but something that actually makes sense in the app. But that was decades ago, and in the meantime we gained compilers that deal nicely with concise functions, and editors that can make code pretty and easy to read. And so it hangs around, extra baggage for those skilled enough to actually use it right, and yet another opportunity for the lazy to trip us up by throwing any old prefix onto variables. With .NET, Microsoft finally told the Hungarian to hit the road, hired some real programmers, and ditched the madness. It is, quite possibly, my second-favorite .NET feature.
You must be careful in the forest Broken glass and rusty nails If you're to bring back something for us I have bullets for sale...
-
It takes a bit of getting used to, but I far prefer non-Hungarian notation now. I thought there would be problems with getting types mixed up, but by using sensible names (e.g. startDate, memberId, typeName) it's obvious what sort of type you would expect to find in the variable.
cheers, Chris Maunder
CodeProject.com : C++ MVP
Chris Maunder wrote:
hungarian notation
Now that's a notation I have not used in a long, long, long time :)
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
-
John Simmons / outlaw programmer wrote:
There are other equally invalid reasons put forth by all manner of know-it-alls, but I got bored typing this stuff.
I just stick to whatever I feel like.
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
-
Chris Maunder wrote:
memberId
I use ID instead of Id. FxCop bugs the hell out of me on that one.
Cheers, Vıkram.
After all is said and done, much is said and little is done.
I used to use ID but, because of FxCop, I changed my habit to Id now.
Luis Alonso Ramos Intelectix Chihuahua, Mexico
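The rule in question treats "Id" as an abbreviation of "identifier" rather than as a two-letter acronym, so the casing it asks for looks roughly like this (a hypothetical class for illustration):

// What the naming rule prefers: Id rather than ID; longer acronyms Pascal-cased.
public class Member
{
    public int Id { get; set; }             // rather than ID
    public string HtmlSummary { get; set; } // rather than HTMLSummary
}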
-
yay for camelCase!
I'm largely language agnostic
After a while they all bug me :doh:
Whereabouts in the Midwest are you, Limey? I've been a Limey in Central California for almost 9 years now...
Sunrise Wallpaper Project | The StartPage Randomizer | A Random Web Page
-
John Simmons / outlaw programmer wrote:
Sure, code should be easy to read, but that trait is introduced with meaningful variable and function names, not by removing ancillary information about the variables being used.
I might be in the minority, but I don't really care. If you are using VS2005, Hungarian notation adds little value. Just put the cursor over the object and the type is displayed. Honestly, I'm not consistent with my naming. I might have a text box named txtBox1, txtName, txt or just "x" (if I'm using it in a loop). Since I do ASP.NET, I don't have too many 10,000-line methods, because all the code has to run during a postback. I'm the same way with my database field names: I just don't care. You may call it laziness, but I think too many programmers are anal about this stuff. Like I said before, if I want to know what type something is, I just put the cursor on it.
I didn't get any requirements for the signature
-
John Simmons / outlaw programmer wrote:
I recently posted a C# article and, as a C++ programmer is wont to do, I used Hungarian notation on all my variables.
As others have pointed out, I try to do as the framework/language does. For C++ (which I rarely do these days), I use Hungarian notation. For C# I try to use the .NET notation (to be consistent with the framework). I've got used to it, and it works fine for me. I remember when I started with C#, I used Hungarian notation, but it just didn't feel right mixing it with .NET's variable names. I think that's what made me change. And with well-chosen variable names, the type can be deduced from the name. The conclusion here is: do it however you want, but try to be consistent (worse than any standard is no standard at all).
Luis Alonso Ramos Intelectix Chihuahua, Mexico