Okay, real C++ question - who makes use of the auto keyword?
-
As already mentioned in several answers, the auto keyword doesn't make C++ code less strongly typed or less type-safe, so using auto is a matter of individual preference. In some cases (such as templates and lambdas) there is no other choice. Where auto is optional, I always use it for complicated types, such as container iterators. I also like auto in container enumeration code:

for (const auto& x : my_container)
{
    // ...
}

As for local variables, it depends. Sometimes we want the variable to have a different type. If the variable must have the same type as the expression, auto can help when the code is changed:

std::vector<short> v;
// ...
short n = v[0];

Consider the situation where we change the container type to int:

std::vector<int> v;
// ...
short n1 = v[0];                   // oops, the variable type should be changed as well
auto n2 = v[0];                    // this is OK
decltype(v)::value_type n3 = v[0]; // this is also OK, though I don't like it

I find myself using auto more and more. Sometimes, when I want to see exactly what I am doing, I prefer to use an explicit type.
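To make the iterator point concrete, here is a minimal sketch (the map and key are placeholders, not from this thread) of how auto shortens a verbose iterator declaration without losing any type checking:

#include <map>
#include <string>

void lookup(const std::map<std::string, int>& scores)
{
    // Spelled out, the iterator type is long and adds little information:
    std::map<std::string, int>::const_iterator it1 = scores.find("alice");

    // With auto the type is exactly the same, just deduced by the compiler:
    auto it2 = scores.find("alice");

    if (it2 != scores.end())
    {
        int value = it2->second;   // element access is still fully type-checked
        (void)value;
    }
    (void)it1;
}

int main()
{
    std::map<std::string, int> scores{{"alice", 10}, {"bob", 7}};
    lookup(scores);
    return 0;
}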
-
I clearly have a limited understanding of C++. I admittedly come from a C background, and I have embraced the general concepts of C++ (most of the 4 pillars). But I'm going to be honest here :) It seems to me that auto is fixing, or making easier to use, some of the more spurious features of C++. Just a general thought, but it gets back to my original post/question. For example, your line "decltype(v)::value_type n3 = v[0];" means absolutely nothing to me. I'm at the level of wtf? So I went out to the internet and read this description of decltype: "Inspects the declared type of an entity or the type and value category of an expression." I still don't know what that means. Are we off in the top 0.01% land of coding? It's okay, I found my niche long ago, but seriously, it feels like so many special features have been added that only apply to the religious fanatics of code land.
Charlie Gilley “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759 Has never been more appropriate.
-
I also prefer C over C++, and the decltype example was kind of a joke. A bad joke, I guess. In any case, decltype(v) means "the type of the variable v", std::vector<int> in this case. The vector type has a value_type typedef, defined as T; see here: std::vector - cppreference.com[^] So this ridiculous (for anyone except C++ snobs) line is translated by the compiler to int, i.e. the vector's template parameter.
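For the curious, a minimal sketch of that translation (the variable names are just placeholders):

#include <type_traits>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};

    // decltype(v) is std::vector<int>, so its value_type is int.
    decltype(v)::value_type n = v[0];

    static_assert(std::is_same<decltype(n), int>::value,
                  "the 'ridiculous' spelling really does boil down to int");
    return n;
}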
-
(for anyone, except C++ snobs) Now I need to clean my screen - I just spit all over it laughing. Honestly, I did make the comment that there are people out there who code at a level I cannot even comprehend. I've come to call them "code witches" <--- I'm waiting to see if anyone follows the reference ;) I read your description of what decltype does and I think, "hmm, I need to pass gas." It's almost like some of the new "features" - and auto is not new, it's 2010-ish - raise areas of C++ to a metaprogramming language of its own. Macros on steroids or something.
Charlie Gilley “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759 Has never been more appropriate.
-
That does not change anything, it's a cast. It merely tells the compiler "even though s is a char*, for this statement only, pretend it points to an int."

I believe that, in terms of the semantic functionality, the type is now changed. If you have a method that takes the second type, the compiler will complain if you pass the first but not the second. I have in fact used a char array as an integer before. At least in that case there was no definable difference between the two. So exactly, in terms of the language, how does the cast not make it into a different type?
-
Well, think of it this way: What is a type? What do we mean when we declare the type of a variable? We're declaring how we want the compiler to treat the data value. It's not an existential property of the variable, it's the way that we interpret the value. So:
char* b = "ABCD"; // means treat b as a char pointer
And:
int* a = (int*)b; // means please treat b as an int pointer just this one time
We're declaring an action, not a property of the variable.
The difficult we do right away... ...the impossible takes slightly longer.
-
A char* is in reality just an index into a portion of memory. So at the machine level it has no type-ness; it can be used to address anything from a byte to a quadword. But as far as the language is concerned it only ever points to a character. When you use a cast the compiler does what can be done at machine level, but the object itself is still a char*, and any attempt to use it in any other way will raise a compiler error. If you have something like the following:

int somefunction(int* pi, int count)
{
    int sum = 0;
    for (int i = 0; i < count; ++i)
    {
        sum += *pi;
    }
    return sum;
}

// and you call it like this
char* ci = "Foobar";
int total = somefunction((int*)ci, strlen(ci));

The type of ci does not change at all; it is just that its value is passed to somefunction, as the cast allows you to break or ignore the rules of the language. And the result of calling that function may, or may not, make sense.
-
In your example, it should be noted that if the target CPU requires that an int have, for example, an even byte alignment, you may get an exception when trying to dereference the int pointer. I also wondered if you meant to increment the int pointer inside the loop, in which case, at some point, you would invoke undefined behavior. Unless, of course, sizeof(int) == sizeof(char), which isn't impossible, but I don't know of any system where that might be true. Maybe a 6502 or another 8-bit system?

"A little song, a little dance, a little seltzer down your pants" Chuckles the clown
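As a side note, a minimal sketch (not from the thread) of the usual way to read an int out of a char buffer without tripping over alignment: copy the bytes instead of dereferencing a cast pointer.

#include <cstring>

// Reads an int from an arbitrarily aligned byte buffer.
// std::memcpy is well-defined regardless of the buffer's alignment,
// whereas dereferencing (int*)p may trap on strict-alignment CPUs.
int read_int(const char* p)
{
    int value;
    std::memcpy(&value, p, sizeof value);
    return value;
}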
-
Richard MacCutchan wrote:
If you have something like the following:
For background I have 10 years of C and 15 of C++ after that so I do understand a bit of how it works. Not to mention wild forays into assembler, interpreters, compilers, compiler theory and spelunking through compiler libraries. I have written my own heaps (memory management), my own virtual memory driver, device drivers and hardware interfaces. So I do understand quite a bit about how computer languages work and how the language is processed. I have used char arrays as ints. I have used char arrays as functions. I have used void* to hide underlying data types. I have used void* in C to simulate C++ functionality.
Richard MacCutchan wrote:
When you use a cast the compiler does what can be done at machine level, but the object itself is still a char*, and any attempt to use it in any other way will raise a compiler error.
That specifically is not true. Once a char* is cast to an int (or int*) then the compiler specifically and exclusively treats it as that new type. The question is not how it is used but rather how it is defined to the compiler.
Richard MacCutchan wrote:
And the result of calling that function may, or may not, make sense.
All of those worked because the compiler did what it was told. The cast changed the type. The compiler respected the type, and it did not and does not maintain information about previous types. A human understands that the underlying data originated from a character array; however, the compiler does what it is told, and once the data is cast to a different type it is, by definition, a different type to the compiler. You, the human, can use it incorrectly, but you (again, the human) can use the original type incorrectly as well. That has nothing to do with the cast but rather with how the human uses it. The easiest way, perhaps the only way, for a language to preserve type is to not allow the type to be changed at all. Java and C# do that.

Going back to what was originally said by you: "pretend it points to an int". The compiler is not doing that. To the compiler, once the cast occurs, the data is now the new type. Whether that is a problem or not is a human problem, not a compiler problem. For the compiler to be involved in this at all, the underlying data would need
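A minimal sketch of the point being argued (the function names are placeholders, not from the thread): after the cast, overload resolution sees only the new pointer type.

#include <cstdio>

void describe(const char*) { std::printf("char* overload\n"); }
void describe(const int*)  { std::printf("int* overload\n"); }

int main()
{
    const char* s = "ABCD";

    describe(s);                               // the declared type picks the char* overload
    describe(reinterpret_cast<const int*>(s)); // the cast expression has type const int*,
                                               // so the compiler picks the int* overload
    return 0;
}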
-
What you have is a set of bits, commonly called a byte/octet, a halfword, a word, ... You declare an interpretation of the bit pattern as a character. You declare an alternative interpretation of the same bit pattern as a small integer. You might declare a third interpretation of it as, say, an enumeration variable. You can declare as many alternate interpretations of the bit pattern as you like. The various interpretations are independent and coexistent. It is the same bit pattern all the time; nothing changes.

Unless, of course, you call a function that interprets the bit pattern in one specific way and then creates another bit pattern that can be interpreted as something resembling the first interpretation made by the function. Say the function interprets the bit pattern as an integer and forms a bit pattern that, if interpreted as a floating point value, has an integer part equal to the integer interpretation of the first bit pattern. Yet even the constructed bit pattern is nothing more than a bit pattern that can be given arbitrary other interpretations.

When you declare a typed variable / pointer / parameter, you are just telling the compiler: when I use this symbol to refer to the bit pattern, it should be interpreted so-and-so. The compiler will see to that, without making any modifications to the bit pattern and - at least in some languages - without placing restrictions on other interpretations.

A problem with some languages is that in some cases a cast will just declare another interpretation of the bit pattern, while in other cases (other interpretations) it will create a new bit pattern. If you want full control - always interpreting the same bit pattern, never creating a new one - a union is a good alternative to casting. Besides, by declaring a union, you signal to all readers of the code: beware - this bit pattern is interpreted in multiple ways! Casting can be done behind your back, risking e.g. that a variable with limited range (e.g. an enumeration) is given an illegal value. With a union, you will be aware of this risk.
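A minimal sketch of the union approach described above (the names are placeholders; note that reading a union member other than the one last written is well-defined in C but technically undefined behaviour in standard C++, where memcpy or std::bit_cast is the sanctioned route):

#include <cstdint>
#include <cstdio>

// One bit pattern, two declared interpretations: a 32-bit word and four bytes.
union Word32
{
    std::uint32_t word;
    std::uint8_t  bytes[4];
};

int main()
{
    Word32 w;
    w.word = 0x41424344u;  // the bit pattern for 'A','B','C','D' in ASCII

    // Same bits, alternate interpretation (C-style punning; byte order depends on endianness).
    for (int i = 0; i < 4; ++i)
        std::printf("%02X ", w.bytes[i]);
    std::printf("\n");
    return 0;
}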
Religious freedom is the freedom to say that two plus two make five.
-
trønderen wrote:
while in other cases (other interpretations), it will create a new bit pattern
Keeping in mind, of course, that at least here the discussion is about C/C++ (the forum). C++ can do that, since it supports operator overloading - but not, as far as I know, for native types. Even with operator overloading, though, once the cast operation happens the compiler does consider that a new type is in play.
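A minimal sketch of the kind of user-defined conversion alluded to here (the class is a made-up placeholder): this "cast" manufactures a new value of a new type rather than reinterpreting existing bits.

#include <cassert>

// A toy fixed-point value: stores hundredths in an int.
struct Fixed
{
    int hundredths;

    // User-defined conversion: casting to int creates a brand new bit pattern
    // (the truncated whole part); it does not reinterpret the stored one.
    explicit operator int() const { return hundredths / 100; }
};

int main()
{
    Fixed f{250};
    int whole = static_cast<int>(f); // invokes operator int(), yielding 2
    assert(whole == 2);
    return whole;
}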
-
I didn't consider syntactic sugar such as operator overloading. Alternate interpretations of a given bit pattern can be made with old-style simple operators, overloaded operators, method argument preparation, explicit casting, ... The essential point is not the wordiness of the syntax, but that the bit pattern is not changed. We have just added another interpretation, regardless of which coding syntax we use for making that interpretation. (It isn't limited to C/C++ - rather, C/C++ is limited in alternate interpretations.)

I really hate it when people 'explaining' computers say that 'inside the computer, everything is numbers - say, the letter 'A' is 65 inside the computer'. Noooo!!! Inside the computer is a bit pattern that has no predefined, "natural" interpretation as a number! Sure, you can interpret it numerically and divide it by two to show that half an 'A' is a space - but that is plain BS. It is like uppercasing the value 123456789!

Sometimes it is useful to make alternate interpretations. E.g. for a 24-bit value intended to be interpreted as a color, human eyes cannot determine whether two colors are identical (maybe the screen isn't capable of resolving 16 million colors, either). In an alternate interpretation, as a three-part RGB value 0-255, we can very easily see whether the colors are identical or not. But that doesn't mean the color 'really' is numbers - no more from the screen than from the rose in the vase next to it. Both the reds are red - not 255,0,0! RGB values are 'false' interpretations (i.e. deviating from the interpretation assumed by the photo editor) to help us humans with limited color-interpreting abilities.
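A minimal sketch of that alternate RGB interpretation (the 0xRRGGBB packing order is an assumption; real pixel formats vary): the same 24-bit pattern read once as a single number and once as three channels.

#include <cstdint>
#include <cstdio>

int main()
{
    // One 24-bit pattern, stored in a 32-bit integer for convenience.
    std::uint32_t color = 0xCC3366;          // "a number"

    // Alternate interpretation: three 8-bit channels (assuming 0xRRGGBB packing).
    std::uint8_t r = (color >> 16) & 0xFF;   // 0xCC
    std::uint8_t g = (color >> 8)  & 0xFF;   // 0x33
    std::uint8_t b =  color        & 0xFF;   // 0x66

    std::printf("R=%u G=%u B=%u\n", (unsigned)r, (unsigned)g, (unsigned)b);
    return 0;
}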
Religious freedom is the freedom to say that two plus two make five.
-
I'm trying to straighten out some pointer "arithmetic" in some existing code. The expressions themselves are overly ambiguous, to say the least. In part of my reading, I've come across the auto keyword, where the compiler deduces what type I need. At least that's what I got from all of the verbiage. This seems a) dangerous and b) to add another level of mental indirection to what you are trying to accomplish. To me, software needs to be very clear and explicit about what data you are working with and what you intend to do with it. A lot of the "here is how auto will help you" descriptions justify it by the typing it saves when making use of other classes, templates and the like. It feels like the C++ committee came up with feature A, then added feature B to make using A easier. I'm now doing battle with lambda expressions - another story. So, in your code - do you make extensive use of auto, and how does it help you?
Charlie Gilley “They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759 Has never been more appropriate.
-
There is a time and a place for it, and it's sometimes useful when doing some heavy generic programming. For example, in theory, if you had to design your own tuple type (I know std already has one, but ignoring that), the function to access a tuple's value might return auto, because it's difficult to even type out the template instantiation necessary for the return type, much less come up with it. Another place I use it: in my graphics library you can define pixels with an arbitrary memory footprint - different numbers of bits for different channels, like RGB565 or YUV888, etc. Because of the arbitrary nature of it, the integer value for each channel may be a different type. For example, while a channel probably won't hold more than a uint8_t can (8 bits), it might (12 bits? then uint16_t would be necessary). Because of that, when I go to assign values from arbitrary pixel formats, I don't actually *know* what type it is, other than some kind of integer of 64 bits or less (based on static_assert constraints). So I could always promote it to a uint64_t, but that creates other problems when you have to cast down again. So auto is what's for dinner.
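A rough sketch of the kind of thing described here (the trait and accessor below are made-up placeholders, not the actual gfx API): the channel's integer type is computed from its bit depth, so auto is the natural way to hold the result.

#include <cstdint>
#include <type_traits>

// Hypothetical trait: pick the smallest unsigned type that can hold 'Bits' bits.
template <unsigned Bits>
using channel_int_t =
    std::conditional_t<(Bits <= 8),  std::uint8_t,
    std::conditional_t<(Bits <= 16), std::uint16_t,
    std::conditional_t<(Bits <= 32), std::uint32_t, std::uint64_t>>>;

// Hypothetical accessor whose return type depends on the channel's bit depth.
template <unsigned Bits>
channel_int_t<Bits> read_channel(std::uint64_t raw)
{
    return static_cast<channel_int_t<Bits>>(raw & ((1ull << Bits) - 1));
}

int main()
{
    auto g5  = read_channel<5>(0x1F);   // deduced as uint8_t  (RGB565-style channel)
    auto y12 = read_channel<12>(0xFFF); // deduced as uint16_t (12-bit channel)
    return (g5 + y12) > 0 ? 0 : 1;
}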
Check out my IoT graphics library here: https://honeythecodewitch.com/gfx And my IoT UI/User Experience library here: https://honeythecodewitch.com/uix