Why does the world hate Hungarian notation?
-
Walt Fair, Jr. wrote:
When someone calls and I ask for their Account Number, there's never any confusion.
In that case, since everyone knows it's a string, why would you want to prefix it with s or str or lpsz?
Regards, Nish
My technology blog: voidnish.wordpress.com
Because you and the devs you work with didn't know and might need to review the code? :laugh: ;P
CQ de W5ALT
Walt Fair, Jr., P. E. Comport Computing Specializing in Technical Engineering Software
-
Walt Fair, Jr. wrote:
Because you and the devs you work with didn't know and might need to review the code? :laugh: ;P
You forget that you are talking to a super programmer here. ;P
Regards, Nish
My technology blog: voidnish.wordpress.com
-
:cool:
CQ de W5ALT
Walt Fair, Jr., P. E. Comport Computing Specializing in Technical Engineering Software
-
I'm about to write the coding standards doc for a team. I've been using Hungarian notation ever since I started coding. The blogs I read online rant against using the Hungarian system in OO languages, but I have a few questions. Even though it's C++ or C#, we have primitive data types everywhere; in fact, for smaller projects, primitive types would account for 90% of the variables. Now I'm dealing with a lot of numbers, flags and so on. How do I know what datatype each one is? For example, the code base is 100K lines and I cannot copy the entire project to my disk to review at home, so I copy a 300-line code block with multiple functions, open it in Notepad, and try to figure out what datatypes all these variables are. There's nowhere I can figure this out. So why the heck does everybody rant against this convention? I'm going ahead and insisting on sticking with Hungarian notation. If anybody has a valid reason against it, I'm all ears. (If you don't like Hungarian notation, please don't express it with a 1 vote here :sigh: )
Starting to think people post kid pics in their profiles because that was the last time they were cute - Jeremy.
As pointed out above, Hungarian notation was meant to signify the purpose of a variable, not its underlying type. The only notation I use is "p" for pointer, "m" for member data and "h" for handle, since they indicate a fundamental difference in how those variables are used. My biggest problems with Hungarian notation are:
1) The abbreviations frequently make no sense.
2) If code is ported or variables are changed, the notation is often wrong. In the current code I'm working on, it's common to find sz in front of pointers and dw in front of long longs. Is "b" a bool, a BOOL or an int? I just looked at some code with an 'h' prefix, but the original programmer had turned it into a pointer (he argues that it's a pseudo-handle--great, then encapsulate it).
3) In connection with the above, if a data type becomes an abstract type or is extended, it can cause serious confusion. I've seen way too much code where something like bCanReplace actually holds an enumerated value. (Last year, I ported a bunch of code to Unicode in preparation for a CE port. I removed what Hungarian I could, but the code is littered with "sz" prefixes, among other things--is the sz an ANSI or a Unicode string?)
4) It is often accompanied by short variable names. Once you make variable names longer, the meaning becomes clear and the Hungarian notation becomes redundant (and, as above, often misleading).
Any developer who insists on using Hungarian should be forced to write a style guide on their personal notation. (On par with my hatred of Hungarian is excessive typedef'ing. Some of it is nice, even necessary, but I've seen code where they redefine EVERY type for no reason.)
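To make that stale-prefix problem concrete, here is a hypothetical C++ fragment (all names invented for illustration) of the kind that survives an ANSI-to-Unicode port:

// After a Unicode port, every old prefix now lies about its type.
const wchar_t* szName = L"example";  // "sz" promised a char*, but this is now a wchar_t*
long long dwElapsed = 0;             // "dw" promised a 32-bit DWORD, but this is 64-bit
int bCanReplace = 2;                 // "b" promised a bool, but it holds an enumerated value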
-
1) Hungarian notation does not work well with generics or templates, or with code refactoring (changing an int to a long, for example--a sketch follows below).
2) With modern editors, you will know the type simply by mousing over a variable.
3) With properly written code (small functions, single-responsibility classes...), it should be easy to understand the code anyway.
4) You should avoid casts, and the compiler will show most incompatibilities anyway. And if you use int consistently when it makes sense, type information won't be that useful.
5) Having stopped using that notation more than 5 years ago, I can say it is much better this way. By using sensible variable and function names, it is generally quite easy to figure out what the code does.
6) If you are using primitive data types 90% of the time, then you are probably reinventing the wheel, and your classes probably have way too many responsibilities. Either with the BCL in .NET or the STL (or boost) in C++, you should be able to reuse a lot of existing stuff.
7) If you want to review code, then you should also print the header.
Philippe Mori
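A minimal sketch of point 1, with invented identifiers (not from any real code base): inside a template there is no honest type prefix to choose, and a later int-to-long refactoring silently invalidates any prefixed name at the call sites.

#include <vector>

template <typename T>
T Sum(const std::vector<T>& values)
{
    T total = T();               // no honest Hungarian prefix exists: T is unknown here
    for (const T& v : values)
        total += v;
    return total;
}

// If nCounts is later refactored to std::vector<long>, the "n means int" convention
// is silently wrong everywhere until someone renames the variable by hand.
std::vector<int> nCounts;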
-
I worked on a project where one of the data types changed. Then you have to go through the code and change the notation everywhere that variable is used. Pretty easy with search and replace, but it doesn't need to be done at all if you just use proper naming.
That also has a pretty negative impact on source code control, resulting in changes across multiple files that would otherwise be unnecessary.
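A tiny sketch of that churn, using a hypothetical variable: one type change forces a rename at every use site just to keep the prefix honest, and every rename shows up as a diff in source control.

// Before the type change:  DWORD dwRetryCount;
// After widening it to 64 bits:
unsigned long long qwRetryCount = 0;  // every file that mentioned dwRetryCount must now
                                      // be edited and checked in, purely for the prefix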
-
1. It's pug-ugly.
2. It encodes the data type directly in the variable name; if the variable's type changes, all references must be updated.
3. If you have a 300-line code block with no variable declarations, I suspect you have worse problems than the use of Hungarian notation.
4. Most people who claim to use HN actually use the HN exemplified by MS in the 1990s. This is misuse: the original usage had, for example, "i" for index, not for "integer". The prefix was meant to give the usage, not the data type (see the sketch below).
5. Insistence on HN in your coding conventions is likely to scare off talented developers.
Nowadays, with long variable names, I (along with most other developers) prefer to give variables descriptive, readable names rather than obscure names relying on "conventions" which change with the code base.
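A small sketch of what point 4 means by usage prefixes (identifiers invented for illustration): in the original Apps Hungarian scheme all of these are plain ints, the prefix records the role, and role mixups stand out visually.

// Apps Hungarian: "i" = index, "c" = count, "d" = difference; all plain ints.
int iRow = 0;            // an index into the rows
int cRow = 25;           // a count of rows
int dRow = cRow - iRow;  // a difference between two row positions

// The compiler sees identical types, but the names make this look suspicious:
// iRow = cRow;          // assigning a count to an index is probably a bug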
-
I'm finding it difficult to review the code in Notepad. The purpose of the review is to shoot down all the unreasonable uses of static variables, but unless I copy the entire project and trace the roots of all these variables, I won't be able to proceed with the code review. It's bad.
Starting to think people post kid pics in their profiles because that was the last time they were cute - Jeremy.
If your variable declarations are over 300 lines away from their usage, you've probably got much worse problems than naming conventions.
-
VuNic wrote:
in notepad
Yep, there's your problem. Coding in text editors is what we did in the early eighties. Since the concept of an IDE came along, I never looked back. But... I never used Hungarian notation back then either, as the C compiler would tell me of any (well, most - pointers are a bit troublesome) mistakes with variable types. Conceptual Hungarian notation (Apps Hungarian, as linked to in an answer above), however, I've done at times. But nowadays I tend to write longer variable names instead, such as: double RelativePosition; double AbsolutePosition; ...or something similar. Opening my code in Notepad? Not likely, except in VERY extraordinary situations. I tend to see my source code as binary files that can only be interpreted by the IDE; if I don't have IntelliSense or don't get red squigglies under type mixups, I'm pretty much handicapped. And seriously, checking a project out from CVS takes seconds these days (even from home). If not, the project is overdue for being split into smaller subprojects. Also, if I don't have the IDE, I can always hop onto a terminal server at work that has it. No problem. Have a nice weekend!
-
As an aside, the .NET world is vehemently opposed to Hungarian notation yet the prefix _ is ok. WTF. I would rather use m than _ any day of the week. At least m stands for something. Of course, the reason the _ is even necessary is because of VB.NET.
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. I also do Android Programming as I find it a refreshing break from the MS. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost
Exactly what does m mean? I've seen it for years and never understood the point - I thought it was just some kind of typing mistake that spread. For private members? (Oh - maybe it means "member" - but no, people never use it on public members, so it can't be that.) I used to use the "p" prefix... but ended up with "_" instead (after a short period of camel casing - but then it was too hard to spot unintentional recursions, and besides, camel casing is reserved for parameters). But... back to the original question: what does the "m" or "m_" prefix on private fields in a class really mean? Ten years since I first saw it (and barfed at it), still no clue... :)
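For what it's worth, a minimal sketch of how the prefix is usually justified, assuming it really does stand for "member" (the class below is invented): it separates a field from a same-named parameter inside method bodies.

#include <string>

class Account
{
public:
    void Rename(const std::string& name)
    {
        m_name = name;   // without the prefix, "name = name;" would silently self-assign
    }

private:
    std::string m_name;  // "m_" marks the member, so the assignment above is unambiguous
};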
-
Actually, I intentionally put that as Notepad even though I would be using an IDE for the code review, because to me both the IDE and Notepad amount to the same thing unless I copy the entire project for the review.
Starting to think people post kid pics in their profiles because that was the last time they were cute - Jeremy.
Just a thought: maybe the project is too big then? And by project I mean project (in the VS sense), not solution. Even if you have a huge solution, it's usually sufficient to check out a single project, since it will include the output (DLLs, for example) from the other projects it needs. That way, you get full IntelliSense, mouse-over type information and so on (for instance, you can view metadata and declarations of classes from the other projects) even without checking out the entire solution. Viable?
-
I work for a company (which shall remain nameless) which has locked-down PCs where downloading software is a disciplinary offence (even if you could break the security model to do it). Therefore, I do not have Visual Studio; Notepad (on Windows) and vi (on Unix) are my daily tools. And as for using a terminal server from home to access a work machine: that would break national security rules. So, enjoy your insecure environment - you are one of the lucky ones; at least until you accidentally download a product that bypasses your malware alerts.
-
Haha, that's why I like working for a small company... :) But seriously, our environment isn't really that insecure. I don't download any software that isn't licensed or free and approved (although I do the approving here - and I can't even see where I mentioned downloading software), and I don't run any of the tools on any computer that isn't owned by the company (usually my work laptop). Terminal Services access is over SSL and we have a reasonably secure password policy. It doesn't violate any security policy at all (since I write the security policies, I know). But... I mostly write customer-specific back-end software for web sites - code that is useful in one and exactly one place - so if someone DID get at it, it wouldn't be much of a problem anyway. But I can understand your situation. It's been a while since I used vi... almost 30 years now... some things never die... :)
-
Agreed, Hungarian notation helps us identify the variable type. But in the modern world of OO programming we mostly work with custom classes and entities, so you will mostly be using custom types for your variables. In that case people tend to use objFirstCustomer, where obj stands in for every custom type, which doesn't show what data type it is. Or you name it curManFirstCustomer, where curMan doesn't denote what type of object it is either. So it's fairly meaningless to use Hungarian notation where it doesn't fully serve its purpose. Also, the normal .NET naming convention keeps your code neat and presentable and shows more uniformity.
-
A more interesting question is: why do you need to know the type in the first place? My take against it is that code without prefixes is more readable. If the variable name tells me nothing, then the name is the problem, not the type or the lack of a prefix. Another thing is that I find it impractical. It's easy to find prefixes for primitive types, but what about custom types? Should we invent prefixes for all of them? And if we do, doesn't accAccount look pretty stupid?
-
Well, I am a firm believer in Hungarian, and here is why. I believe that the name of a variable should instantly convey three vital pieces of information to the dev:
- its type, and I do mean type as in the compiler's typeof, not semantics (regardless of what Charles did or did not intend)
- its scope - is it local, a parameter, a passed-in reference, a class-level variable?
- a clear indication of its function or purpose
Different devs have their own views on how to write code; however, I believe it's better to have a good logical reason for a belief rather than just a prejudice. I believe one of the reasons Hungarian is considered bad is the way many devs implement it: each group uses very short abbreviations that are not clear to someone who is not part of the team, so when you look at another team's code, you have to learn their abbreviations. Different teams have different rules and different ways of implementing Hungarian, which makes it a hassle to read another team's code. It is not Hungarian that is the problem but rather the way it is implemented.
First, we do very little abbreviation, and only if the abbreviation is so common that a noob would clearly understand it. We use longer names - yes, we believe in self-documenting code, which means my manager should be able to read my code and understand what is being implemented in general terms (we don't, however, go to extremes). If a passed-in parameter is a reference that will live on after a function is finished, then it is important to know that altering it can have long-term consequences. If a variable is a class-level variable, that is extremely important to know. It should also be obvious that the more accurately a variable name indicates its purpose and function, the less potential there is for another dev to misunderstand it (or the original dev, when returning to old code).
Saying that IntelliSense will solve the problem or that we can mouse over a variable is not a solution, because when we read code and think we understand it, we rarely mouse over or use IntelliSense on what we think we already understand. Saying that Hungarian is bad because the type might change is a really weak argument: in the first place, types don't usually change that much; second, if a type does change, I certainly want to know; and lastly, with today's IDE refactoring capability, is it really that hard to change a variable name? (It's certainly easy for me.)
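As a rough illustration of that type/scope/purpose scheme (the exact prefixes below are a guess at such a convention, not the poster's actual rules):

#include <string>

class Invoice
{
public:
    // "r" marks a reference the caller will see modified after we return.
    void AppendSummary(std::string& rstrOutput) const
    {
        std::string strLine = "Invoice total: ";  // local scope, a std::string
        rstrOutput += strLine + std::to_string(m_nTotalCents);
    }

private:
    int m_nTotalCents = 0;  // "m_" = class scope, "n" = integer, "TotalCents" = purpose
};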
-
I don't care for Hungarian notation because it doesn't add any useful information. Consider the following two variable names: CustomerName and strCustomerName. There is no added value to the 'str' prefix in that example, because both variables are obviously strings. All my variable names tend to make their type obvious; there is no reason to use names so short and confusing that a particular notation is required to decode them. The tool tip will tell one the type anyway.