Why does the world hate Hungarian notation?
-
A more interesting question is: why do you need to know the type in the first place? My take is that code without prefixes is more readable. If the variable name tells me nothing, then the name is the problem, not the type or the lack of a prefix. Another thing: I find it impractical. It's easy to find prefixes for primitive types, but what about custom types? Should we invent prefixes for all of them? And if we do, doesn't accAccount look pretty stupid?
-
I'm about to write the coding standards doc for a team. I've been using Hungarian notation ever since I started coding. The blogs I read online rant against the use of the Hu system in OO languages, but I have a few questions. Whether it's C++ or C#, we have primitive data types everywhere; in fact, for smaller projects, primitive data types would account for 90% of the variables. Now I'm dealing with a lot of numbers, flags and so on. How do I know what datatype each one is? For example, the codebase is 100K lines and I cannot copy the entire project to my disk to review it at home, so I copy a 300-line block with multiple functions to review at home. I just open it in Notepad and try to figure out what datatypes all these variables are. Nowhere can I figure this out. Then why the heck does everybody rant against this convention? I'm going ahead and insisting on sticking with Hu notation. If anybody has a valid reason against it, I'm all ears. (If you don't like Hu notation, please don't express it with a 1-vote here :sigh:)
Starting to think people post kid pics in their profiles because that was the last time they were cute - Jeremy.
I don't care for Hungarian notation because it doesn't add any useful information. Consider the following two variable names: CustomerName and strCustomerName. There is no added value to the 'str' prefix in that example because both are obviously string variables. All my variable names have a tendency to make their type obvious, so there is no reason to use short, confusing names that require a particular notation. The tool tip will tell you the type anyway.
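To put that side by side, here is a minimal, hypothetical C# sketch (the names and values are invented for illustration); the prefix only restates what the declaration and the name already say:

// Hypothetical illustration: both declarations give the reader the same information.
class Greeting
{
    public static void PrintCustomer()
    {
        string strCustomerName = "Ada Lovelace"; // "str" merely repeats the declared type
        string customerName = "Ada Lovelace";    // the name alone already reads as text
        System.Console.WriteLine(customerName + " / " + strCustomerName);
        // If the type ever changes (say, to a CustomerName value object), the "str"
        // prefix quietly becomes a lie, while the unprefixed name stays accurate.
    }
}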
-
Agreed that Hungarian notation helps us identify the variable type. But in the modern world of OO programming we mostly work with custom classes and entities, so most of your variables will be of custom types. In that case people end up using objFirstCustomer, where obj stands in for every custom type and doesn't show what data type it is. Or you name it curManFirstCustomer, where curMan still doesn't tell you what kind of object it is. So Hungarian notation becomes meaningless when it can't serve its purpose fully. Also, the normal .NET notation keeps your code neat and presentable, and gives it more uniformity.
-
1. It's pug-ugly.
2. It encodes the data type directly in the variable name; if the variable's type changes, all references must be updated.
3. If you have a 300-line code block with no variable declarations, I suspect you have worse problems than the use of Hungarian notation.
4. Most people who claim to use HN actually use the HN exemplified by MS in the 1990s. This is misuse. The original usage used, for example, "i" for index, not for "integer"; the prefix was meant to give the usage, not the data type (see the sketch below).
5. Insistence on HN in your coding conventions is likely to scare off talented developers.
Nowadays, with long variable names available, I (along with most other developers) prefer to give variables descriptive, readable names rather than obscure names relying on "conventions" which change with the code base.
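To illustrate point 4, here is a hedged C# sketch (the class, method and variable names are my own invention): the original, usage-style Hungarian encodes how a value may be used, which the compiler cannot check, while the 1990s type-prefix style merely repeats what the compiler already knows.

// Hypothetical sketch: usage-prefixes (original Hungarian) vs. type-prefixes (the common mis-use).
class CommentRenderer
{
    // "us" = unsafe text, straight from the user; a type-prefix would have said strComment and added nothing.
    public static void Render(string usComment)
    {
        string sComment = System.Net.WebUtility.HtmlEncode(usComment); // "s" = safe to emit
        int cchComment = sComment.Length;  // "cch" = count of characters (usage, not type)
        System.Console.WriteLine(cchComment + " chars: " + sComment);
        // The payoff: an "us" value must never flow into an "s" name without passing
        // through the encoder -- a rule a reviewer can check at a glance.
    }
}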
-
DavidCrow wrote:
What type is it: numeric (long, int, double) or alphanumeric?
If I see accountNumber, I'd automatically assume an Int32 (.NET).
Regards, Nish
My technology blog: voidnish.wordpress.com
-
Hungarian Notation expresses data type, but it does nothing for, and can even obscure, data semantics. I'm sure that was no part of Charles Simonyi's intent, but it does seem to work out that way quite often.
The old guideline that one should keep one's procedures / functions / methods short enough to be completely viewed on the monitor screen tends to produce the best legibility. If you can see everything at once, it's more difficult to go astray about either type or semantics.
My personal practice is to keep procedures as short as possible -- it's been a long time since I last wrote a procedure that can't be displayed in its entirety on the screen -- and to adhere to an inside / outside convention regarding variable names:
- Variables declared inside the procedure will have short names, with the exception of static variables whose significance extends beyond individual invocations of the procedure.
- Variables declared outside all procedures will have long, maximally descriptive names, since this is the space in which most problems of coupling and timing arise.
- Of course, those "outside-all-procedures" variables will be minimized in number, and protected from thread collisions with mutexes as appropriate.
Now, I'm a real-timer; my applications are always heavily multi-threaded, and I'm always intensely concerned with attaining a reliably predictable response time to any imaginable event. If you do other sorts of programs, you're likely to have different desiderata...but I can't imagine that the conventions described above would harm you, even so.
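Here is a rough C# translation of that inside/outside convention; the class, the queue and the lock (standing in for a mutex) are all invented for illustration:

// Hypothetical sketch: short names inside a short procedure,
// long descriptive names (plus locking) for state shared outside all procedures.
class TelemetryBuffer
{
    private static readonly object pendingSampleQueueLock = new object();
    private static readonly System.Collections.Generic.Queue<double> pendingSampleQueueSharedAcrossThreads =
        new System.Collections.Generic.Queue<double>();

    public static void Enqueue(double v) // fits on one screen; locals can stay terse
    {
        lock (pendingSampleQueueLock)
        {
            var q = pendingSampleQueueSharedAcrossThreads;
            q.Enqueue(v);
        }
    }
}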
(This message is programming you in ways you cannot detect. Be afraid.)
-
Exactly what does m mean? I've seen it for years and never understood the point - I thought it was just some kind of typing mistake that spread? For private members (oh - maybe it means member - but no, people never use it on public members, so it can't be that) I used to use the "p" prefix... but ended up with "_" instead (after a short period of camel casing - but it ended up being too hard to spot unintentional recursions then - and also, camel casing is reserved for parameters). But... back to the original question. What does the "m" or "m_" prefix on private fields in a class really mean? Ten years since I first saw it (and barfed on it), still no clue... :)
You are not supposed to have "public" members. You are supposed to have non-public members with exposed setters and accessors. It is actually possible to program without exposing any data, although it is tedious. Properties muddy this in .NET. When I did Java, I could have a member customerId, a setCustomerId and a getCustomerId, and the world was perfect. In .NET, if I don't use automatic properties, I need some way to differentiate between customerId and CustomerId for the VB crowd; so I use m for member: mCustomerId.
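A minimal C# sketch of that pattern, using an invented CustomerId example: the "m" keeps the backing field visibly distinct from the property and from the camelCased parameter, which matters when the case-only difference between customerId and CustomerId won't survive VB.

// Hypothetical illustration of "m" as "member" on a backing field.
class Customer
{
    private int mCustomerId;            // the member field -- Java's customerId

    public Customer(int customerId)     // camelCased constructor parameter
    {
        mCustomerId = customerId;       // no collision between parameter and field
    }

    public int CustomerId               // the .NET property -- get/setCustomerId rolled into one
    {
        get { return mCustomerId; }
        set { mCustomerId = value; }
    }
}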
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. I also do Android Programming as I find it a refreshing break from the MS. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost
-
Nish Sivakumar wrote:
accountNumber, the latter clearly indicates what the type is
You've obviously not worked on some of the databases that I've had to support.
...
,AccountNumber VARCHAR(30) NOT NULL
...
Though to your credit, Hungarian notation probably wouldn't help things out much. :)
Chris Meech I am Canadian. [heard in a local bar] In theory there is no difference between theory and practice. In practice there is. [Yogi Berra] posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]
-
I used Hungarian notation in C++ all the time, as C++ is close to the machine, so distinctions like pAddress, ppAddress, dwAddress, lpdwAddress and wAddress, or sStructure and tTypedef, are very useful. With higher-level languages I tend not to annotate strings, and booleans are usually posed as a question, like hasLoaded or isUserFemale, though I still find annotations that suggest private/protected variables, like the underscore (_), useful. We don't often have to consider whether something is 16, 32 or 64 bits long, and we rarely access raw pointers, so pAddress and ppAddress have less use, and sStructure and tTypedef are generally full classes anyway, at some cost to performance. We also use var a lot and let the compiler determine the type, which is another factor. The next factor is the IDE: if the solution is fully integrated, as in MS Visual Studio, you can easily go to the definition or hover over the variable, so the prefix is less necessary. Though, as you say, when you copy and paste snippets elsewhere you won't always know what they are, you can usually guess the primitives, and for anything complex you will need the class/structure/typedef anyway. So I would say: carry on for C/C++, ASM etc., but for higher-level languages, where types are often determined at compile time or even runtime, it is sometimes fine to leave it out.
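A small C# sketch of those higher-level habits (all names invented): booleans posed as questions, an underscore marking the private field, and var where the compiler infers the type anyway.

// Hypothetical illustration -- no type prefixes, but the usage still reads clearly.
class ProfileLoader
{
    private bool _hasLoaded;                // underscore marks the private field

    public bool IsUserFemale { get; set; }  // boolean phrased as a question

    public void Load()
    {
        var profile = new System.Collections.Generic.Dictionary<string, string>(); // type inferred
        profile["name"] = "example";
        _hasLoaded = profile.Count > 0;     // reads as an answer to "has it loaded?"
    }
}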
-
You are not supposed to have "public" members. You are supposed to have non-public members with exposed setters and accessors. It is actually possible to program without exposing any data, although it is tedious. Properties muddy this in .NET. When I did Java, I could have a member customerId, a setCustomerId and a getCustomerId, and the world was perfect. In .NET, if I don't use automatic properties, I need some way to differentiate between customerId and CustomerId for the VB crowd; so I use m for member: mCustomerId.
Need custom software developed? I do custom programming based primarily on MS tools with an emphasis on C# development and consulting. I also do Android Programming as I find it a refreshing break from the MS. "And they, since they Were not the one dead, turned to their affairs" -- Robert Frost
Hi, thanks for the enlightenment! I was almost sure that this was the case, although most of the examples I've seen say "m_CustomerId" instead of "mCustomerId". My confusion probably comes from interpreting the word "member" as meaning "property, method or field", which is probably incorrect. I write _customerId myself when I need a shadow field for a property called CustomerId. That also works well. Incidentally, that's how I wrote struct field names meant to be private (which wasn't possible to enforce back then) in C back in the early 80s. I think most people did; it became a habit to see it as a bug to touch anything starting with _ unless you owned (had defined) it. So using that convention now feels quite familiar and natural. When I work with other people's code I generally don't have any problem with different styles, as long as ALL public identifiers (including parameters, which are camelCased) strictly follow the .NET guidelines. When you've chosen your environment, stick to its conventions; otherwise you're just confusing the people who use your stuff. Most people seem to follow them, though. I occasionally see camelCasing of classes, methods and properties, but that usually only happens when a web designer/JavaScript coder makes an occasional visit to .NET (and that code comes to me for refactoring anyway, as a matter of procedure)... Later,
-
DavidCrow wrote:
What type is it: numeric (long, int, double) or alphanumeric?
If I see accountNumber, I'd automatically assume an Int32 (.NET).
Regards, Nish
My technology blog: voidnish.wordpress.com
Then you would be wrong if you were using data from my Database. I only use numeric data types for data on which you can do math. You would never want to "add" two account numbers - maybe some sort of concatenation, but never add. "You can't do today's job with yesterday's methods and be in business tomorrow." -- Anonymous So, "Never interrupt someone doing what you said couldn't be done." -- Amelia Earhart
-
Roger Wright wrote:
In every accounting system I've ever used, your assumption would be wrong. ;-P
Then perhaps I'd rename that to accountIdentifier or accountId.
Regards, Nish
My technology blog: voidnish.wordpress.com
So far, no matter what, a great potential for confusion has arisen even between CP members; imagine among non-hamsters.
"To alcohol! The cause of, and solution to, all of life's problems" - Homer Simpson "Our heads are round so our thoughts can change direction." ― Francis Picabia
-
Yeah, account number was probably a bad choice for my example :-)
Regards, Nish
My technology blog: voidnish.wordpress.com
-
This has always been my point of view on Hungarian. If you're using Hungarian to show type info (i.e. compiler types, not semantic types), then you're using Hungarian wrong. Wrong Hungarian = bad. Right Hungarian = good. I currently don't trust anyone on my team to write "right Hungarian", though, so we outlaw it altogether at my workplace.
Right vs. Wrong: it's just not that simple. A lot of the things in the world of development are personal preferences. The starter of this thread likes Hungarian; you guys don't. No big deal. Personally, I find Hungarian to be very handy *a lot of the time*. Not always. I maintain that it's a bad idea to be hard and fast on topics that are really just "programmer prerogative". (I don't use notepad to read code any more but, jeez, if you're stuck with that ... Hungarian would be great! :)
-
VuNic wrote:
I just open it in notepad and try to figure out what datatypes all these variables are.
Well, I may not know why you're using Notepad instead of VS, but couldn't you at least use Notepad++? I mean, you can't be doing this for nostalgia purposes only. Or could you?
"To alcohol! The cause of, and solution to, all of life's problems" - Homer Simpson "Our heads are round so our thoughts can change direction." ― Francis Picabia
-
Before writing the coding standard, I would recommend reading "Code Complete", which gives some very specific arguments about naming conventions and other things that would be in the standard. The good news is that, no matter what, just HAVING the standard, even if it's not perfect, will go a long way. For our company, which is largely focused on embedded projects using C, we adopted one that was already well developed (Michael Barr's Netrino embedded standard), bought a few hard copies for reference, and made a one- or two-page document that says that's what we're using and what changes to it we're implementing (very few tweaks). Taking this route (adopting an already written standard) saved us a TON of time (read: money) and took out some of the "personal preference" discussions. If you're using C#, why not adopt Microsoft's standard and call it a day? (I realize this isn't exactly what you were asking about.)
-
Back in my embedded C and MFC days I used to be big on Hungarian notation. No more. These days I work mostly with C# and stay away from prefixes as much as possible, for a few reasons.
1. Readability. As programmers we spend most of our time reading code, not writing it. What's easier to read:
sAccountNumber = CreateAccountNumber(nId, sLastName, wUniqueId);
or
accountNumber = CreateAccountNumber(id, lastName, uniqueId);
2. Maintenance. Let's say I no longer want to use a string to store my account number; I want to encapsulate it inside a class. If you used an accountNumber variable to begin with, you don't need to worry about renaming it: it's still just an accountNumber (see the sketch below).
With modern IDEs you really don't need to encode the variable's type in its name. You can hover over it with the mouse and find out immediately whether it's an int, a string, or some other user-defined type. In fact, you should try to avoid thinking about storage types as much as possible and program at a higher level of abstraction; only when you get down to the low-level code should you take care of the types. I highly recommend Robert C. Martin's "Clean Code"[^]; Chapter 2, Meaningful Names, applies particularly well to this discussion.
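To make point 2 concrete, here is a hedged sketch (the AccountNumber class and CreateAccountNumber helper are invented): the storage changes from a plain string to a small class, yet every call site that said accountNumber still reads truthfully, where sAccountNumber would now mislead.

// Hypothetical sketch: the type changes, the well-named variable does not.
class AccountNumber
{
    private readonly string mValue;                        // was once just a raw string everywhere
    public AccountNumber(string value) { mValue = value; }
    public override string ToString() { return mValue; }
}

class Billing
{
    static AccountNumber CreateAccountNumber(int id, string lastName, int uniqueId)
    {
        return new AccountNumber(id + "-" + lastName + "-" + uniqueId);
    }

    static void Main()
    {
        var accountNumber = CreateAccountNumber(42, "Smith", 7); // no rename needed after the change
        System.Console.WriteLine(accountNumber);
    }
}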
"There are only 10 types of people in the world - those who know binary and those who don't."
-
I understand #2, but it's a wee bit silly. How often do you change a variable's type? And even if you did, the IDE offers to change all of them for you. It takes three seconds.
It occurs more often than you might be aware.
"If your actions inspire others to dream more, learn more, do more and become more, you are a leader." - John Quincy Adams
"You must accept one of two basic premises: Either we are alone in the universe, or we are not alone in the universe. And either way, the implications are staggering." - Wernher von Braun
-
DavidCrow wrote:
What type is it: numeric (long, int, double) or alphanumeric?
If I see accountNumber, I'd automatically assume an Int32 (.NET).
Regards, Nish
My technology blog: voidnish.wordpress.com
Ah, yes, the assumption factor. When often engaged upon, oft required is the element of REfactoring! These might be more specific:
thisVarIsASignedIntegerBuddyBoy
dontUseThisVarToStoreNumbersBuddyBoy
noSpacesOrUnderlinesAllowedBuddyBoy
:wtf: oh_AndHereIsAnExceptionColumnForYouBuddetteGals_JustInCase :wtf: ;P
The best way to improve Windows is run it on a Mac. The best way to bring a Mac to its knees is to run Windows on it. ~ my brother Jeff
-
I prefer Hungarian notation for the same reasons: we don't always have an IDE handy when reviewing code. I use "m_" for module-level scoped variables, and no prefix for variables local to a given method or property. So, what prefixes would you use for the following (and their array counterparts, where arrays make sense)?
Int16
Int32
Int64
UInt16
UInt32
UInt64
String
DateTime
Single
Double
Boolean
Dictionary/Dictionary<>
List/List<>
...or a variety of other objects? Your choices could be helpful to us Neanderthals still using Hungarian notation.