Whose idea was this in C?
-
Awesome to see such productive back-and-forth comments. It seemed to border on getting off track, but was rescued at the last moment. Great work, guys!
-
I'm unaware of any way that auto helps to produce more efficient code. If you don't want a deep copy, you have to write auto& id = .... And if it's const, you have to write const auto id = .... Yes, Herb Sutter wrote an article, "Almost Always Auto (AAA)." As far as someone reading the code having to look up the type returned by a function goes, my take is that knowing the type is only the first step to understanding. The reader also needs to know the function's purpose, which means reading its interface documentation. And given that we sometimes fall short on that front, it can also mean reading its implementation. Providing the type can therefore be detrimental by giving a false sense of security. Although I haven't used C#, I'd be surprised if best practices for when to use the heap versus the stack weren't the same in both languages.
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing. -
Greg, thank you for the reply. It looks like the reason I was recalling for auto making this more efficient was wrong, although my conclusion is somewhat the same. Rereading the AAA article, I picked out this point:
Quote:
It is efficient by default and guarantees that no implicit conversions (including narrowing conversions), temporary objects, or wrapper indirections will occur. In particular, prefer using auto instead of function<> to name lambdas unless you need the type erasure and indirection.
The same is true of C# if one uses implicit conversions as can be seen in the following code:
public class T1 {
    int m_count;

    public T1(int cnt) { m_count = cnt; }

    public static implicit operator T2(T1 t1) { return new T2(t1.m_count.ToString()); }
}

public class T2 {
    string m_count;

    public T2(string count) {
        m_count = count;
    }
}

public class Class1
{
    T1 GenerateValue() {
        return new T1(22);
    }

    void temp() {
        // The following creates an instance of T1, converts it to T2, and
        // leaves the T1 for the GC to clean up.
        // Is it clear to a developer that they just created this overhead?
        T2 t2_implicit = GenerateValue();

        // The following creates and holds an instance of T1.
        T1 t1_explicit = GenerateValue();

        // The following also creates and holds an instance of T1.
        var t1_implicit = GenerateValue();
    }
}
This means my original statement was in error. The reason we "benefit" from not using var has little to do with us not using structs; it is because we don't use implicit conversions on the vast majority of our classes/structs. So the potential "mistake" of accidentally forcing an unnecessary type conversion is minimal, and it is outweighed by putting type information at the developer's fingertips.
Quote:
Although I haven't used C#, I'd be surprised if best practices for when to use the heap versus the stack weren't the same in both languages.
The thing that makes C# different from C++ in this case is that memory for instances of a class can never be placed on the stack. Class instance memory is always placed on the heap and managed by the garbage collector (GC). On the other hand, a struct is always considered a value type, and its storage follows the same rules as it would for any other value type.
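To make that concrete, here's a rough sketch; the Point types are invented purely for illustration:

public class PointClass  { public int X; public int Y; }    // reference type: instances live on the GC heap
public struct PointStruct { public int X; public int Y; }   // value type: stored inline / on the stack for locals

public static class AllocationDemo
{
    public static void Run()
    {
        // Heap allocation: the object lives on the GC heap; 'c' is just a reference to it.
        PointClass c = new PointClass { X = 1, Y = 2 };

        // Value type: for a local variable the data lives on the stack,
        // and assignment copies the whole value.
        PointStruct s = new PointStruct { X = 1, Y = 2 };
        PointStruct copy = s;    // independent copy
        copy.X = 99;             // does not affect s

        // Assigning a class instance copies only the reference.
        PointClass alias = c;
        alias.X = 99;            // c.X is now 99 as well

        System.Console.WriteLine($"{s.X} {c.X}");   // prints "1 99"
    }
}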
-
That's interesting how C# treats classes and structs differently. As far as implicit type conversion in C++ goes, a constructor that takes a single argument can be tagged explicit to avoid the unintended creation of an object. I've rarely used implicit construction because it can make the code opaque.
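The closest C# counterpart, for what it's worth, is marking a user-defined conversion explicit rather than implicit. A minimal sketch, reusing the hypothetical T1/T2 types from the example above:

public class T2
{
    string m_count;
    public T2(string count) { m_count = count; }
}

public class T1
{
    int m_count;
    public T1(int cnt) { m_count = cnt; }

    // 'explicit' instead of 'implicit': the compiler no longer converts silently.
    public static explicit operator T2(T1 t1)
    {
        return new T2(t1.m_count.ToString());
    }
}

public class Demo
{
    static T1 GenerateValue() { return new T1(22); }

    static void Use()
    {
        // T2 t2 = GenerateValue();       // no longer compiles
        T2 t2 = (T2)GenerateValue();      // the conversion (and its cost) is now visible
        var t1 = GenerateValue();         // still just a T1
    }
}

With the operator declared explicit, the conversion and the extra allocation it causes have to be written out at the call site, which speaks to the "is it clear to a developer?" question raised earlier.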
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing. -
I'm probably missing a joke or two here, but I'm having a hard time seeing the benefit of auto-typing. Since you 100%, always, with zero possible exceptions, know the complete type when you're writing the code, why would you want to obscure it? It's like having both a pig and some lipstick, and feeling compelled to apply the latter to the former just because you can. :rose:
When using templates in C++, the type can be a mess, so it looks like a pig. Using auto also lets me keep lines to 80 characters while rarely having to spill them. So the pig disappears, and all that's left is the lipstick. Call it the Cheshire Pig!
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing. -
Did I ever mention why I hated MSFT and most of their products? THIS. Exactly this! The last straw was when they stopped adding the changes to the 16-bit C/C++ compiler that they were putting into the 32-bit version. Our lead dev made a 32-bit library that we were forced to write a thunking layer to use. He used almost every new feature he could. In the end, I forcibly recompiled the code using a Borland 16-bit compiler. The ONE thing I LIKED about Oracle was that for DECADES we could simply DUMP our DB and code, import it into a newer version, and it worked. Hundreds of upgrades, and we barely ever ran into something that no longer compiled. Something I can honestly say NEVER happened with MSFT stuff. From VB breaking every version, to the above, to MSSQL T-SQL changes. (Heck, SqlCmd has a :Connect command. Try to use it in Azure hosting! Because it does NOT support choosing the database, it fails. It's as if ONE HAND has no idea what the other is doing.) Good ideas are great... but going in to make a small change to a system and finding out you cannot even begin to recompile it because of the new compiler? Imagine if Linux was built on those precepts! I feel your pain!
I used to criticize MSFT. Then I built a few projects with NPM libraries and felt the true pain of uncoordinated, independently maintained software tools and libraries from everyone wanting to contribute a weekend project and then move on. Suddenly MSFT seemed like the best thing ever. One company addressing security and ensuring that all of your dependencies are upgraded at the same time and work together is heaven compared to the past several years of "free" libraries. Who am I kidding, I still criticize MSFT. But a lot less now.
-
var should be banished to the depths of time! If you can't explicitly type the variable then you obviously don't understand the problem.
var should be used whenever the type is obvious from the RHS. Most of the time, nobody cares about the specific type of a variable, yet explicitly declaring the type forces maintainers to read it 100% of the time. It's also less DRY: it would be silly to say, "Today I washed my car today." The counterpart is a newer feature, target-typed new, which lets us instantiate without repeating the explicit type and which arguably should only be used when the type is obvious: private Dictionary<string, List<int>> _lookupTable = new();
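For example, a rough sketch of both points (the dictionary's type arguments and the Lookup method are made up, since the original declaration lost its generic arguments):

using System.Collections.Generic;

public class VarExamples
{
    // Target-typed 'new' (C# 9): the type appears once, on the left.
    private Dictionary<string, List<int>> _lookupTable = new();

    void Demo()
    {
        // Type is obvious from the right-hand side: var repeats nothing.
        var customers = new List<string>();

        // Type is not obvious from the call: spelling it out helps the reader.
        // var result = Lookup("foo");
        IReadOnlyList<int> result = Lookup("foo");
    }

    IReadOnlyList<int> Lookup(string key) =>
        _lookupTable.TryGetValue(key, out var values) ? values : new List<int>();
}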
-
I'm always willing to play the straight man! The main problem with auto is that the type ends up being something that you need to consider more carefully: for (auto i = container.size() - 1; i >= 0; --i) ... and you've got an infinite loop because i is unsigned. I've been burned by this a few times, so have learned to write int instead of auto here. But that overrides the type; "correctly" saying size_t would cause the same problem.
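The same trap is mostly, but not entirely, absent in C#. A small sketch (the array here is made up) of where var stays safe and where it wouldn't:

using System;

static class ReverseLoopDemo
{
    static void Run()
    {
        var items = new[] { "a", "b", "c" };

        // Safe in C#: Length is an int, so var infers a signed i.
        for (var i = items.Length - 1; i >= 0; --i)
            Console.WriteLine(items[i]);

        // The C++ trap reappears if the bound happens to be unsigned,
        // because var then infers uint and i >= 0 is always true.
        uint count = (uint)items.Length;
        // for (var i = count - 1; i >= 0; --i) { }   // never terminates: i wraps at 0
        Console.WriteLine(count);
    }
}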
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing. -
I'd be more likely to use an iterator for that case. Besides, doesn't the compiler generate a warning or error? It should know that testing whether an unsigned value is >= 0 always returns true, and that seems common enough for compilers to catch. Where auto/var shines for me is that it's more readable in the more typical SomeObject object = new SomeObject(Parameters) case.
-
I usually use an iterator for that too but wanted to come up with an example quickly. And you're right that the compiler will probably give a warning.
Robust Services Core | Software Techniques for Lemmings | Articles
The fox knows many things, but the hedgehog knows one big thing. -
You can tell it not to check, but that's just delaying the problem - and potentially causing more confusion if someone reuses the code for a similar purpose and it then throws up (pretty much incomprehensible) errors on identical code... And I suspect it'll be worse for "newer coders", since they all seem to use var exclusively instead of explicit typing, so the actual type of a variable will change and throw up yet more errors later on...
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
In what way would the underlying type change? I believe it is a change solely to the warnings/errors produced by the compiler, although I guess some code copied from Stack Overflow may break ;) I've wanted this since C# 1.0. Any run-time error you can prevent at compile time is something I'm for, although with all the semantic sugar they're adding, I'm worried about diabetes. I agree it's best for greenfield; I'll bet most older projects aren't updated. I wish they had made it optional via a postfix ! instead, the way they did way back when with a preprocessor, but I can see that acting a lot like the const poisoning that happens in C++ when you make something const, so I understand why they just yanked off the band-aid. Ralph
-
It looks like they put some thought into this. You may need to make some settings changes, etc. Nullable reference types | Microsoft Docs
Yep, as he said, you can set an option. Yep, it's a breaking (unbreaking by config) change. Their heart is certainly in the right place in trying to catch issues as you code. I can't think of a better way to go about it, and they DID give us notice, starting a couple of releases back, that it was coming. I know that doesn't help your brief WTF shock. :-) As someone further on said, it's a reasonable idea and leads to less breakable code, and the fix isn't tough in the singular instance, but how big a task the changes are depends on how many instances you have. Good luck!
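For anyone who wants to see what the option actually changes, here is a minimal sketch; the Customer type is made up, and the switch can be the <Nullable>enable</Nullable> project property or a per-file #nullable directive:

#nullable enable

public class Customer
{
    // Non-nullable reference: the compiler warns if it could be left null.
    public string Name { get; set; } = string.Empty;

    // Nullable reference: callers get a warning if they dereference it without a check.
    public string? Nickname { get; set; }
}

public static class NullableDemo
{
    public static int NicknameLength(Customer c)
    {
        // return c.Nickname.Length;       // warning CS8602: possible null dereference
        return c.Nickname?.Length ?? 0;    // fine: null is handled explicitly
    }
}

Because #nullable enable also works file by file, existing projects can opt in gradually rather than fixing every warning in one big-bang change.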