Question to ask an interviewee
-
http://www.newinterviewquestions.com/list.htm[^] This may help, depending on the job category.
Regards, Mushq
If the "approval" for answers and the english correction for the whole site is the same as in this question[^] I wouldn't trust on it. What is a "arithematic progression" and a "gemotric mean". I know about "arithmetic progression" and "geometric mean". BTW... even the "typos" how can be the therms in an arithmetic progression bigger than the result as they are in the "approved" answer? And another jewel... see[^]
Greetings. -------- M.D.V. ;) If something has a solution... why do we have to worry about it? If it has no solution... for what reason do we have to worry? Help me to understand what I'm saying, and I'll explain it better to you. “The First Rule of Program Optimization: Don't do it. The Second Rule of Program Optimization (for experts only!): Don't do it yet.” - Michael A. Jackson Rating helpful answers is nice, but saying thanks can be even nicer.
modified on Friday, May 23, 2008 12:41 PM
-
The RHS expression is evaluated to 0, then i is incremented, but the RHS expression's value is assigned back to i.
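A minimal, self-contained C# sketch of the behaviour being described (the class and variable names are only illustrative):

using System;

class PostIncrementDemo
{
    static void Main()
    {
        int i = 0;
        i = i++;              // the RHS evaluates to the saved value 0; i briefly becomes 1;
                              // the assignment then stores the saved 0 back into i
        Console.WriteLine(i); // prints 0

        int j = 0;
        int k = j++;          // the ordinary case everyone expects: k == 0, j == 1
        Console.WriteLine(j + " " + k); // prints "1 0"
    }
}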
The question I meant was not what it does (I saw that for myself), but why it does it. The 'definition' of the post-increment operator is to use its current value and then increment it.
1) The RHS is copied to the LHS. They are now both the same value.
2) The RHS is incremented.
There is no conceivable reason why it should have anything more to do with the LHS (as I see it), since the data mapping for the LHS is complete. The RHS is incremented and that should be the last step. In C and C++, it is. It's C#'s behaviour that makes no sense. My thought is that the )&)^()&)& managed memory, even for simple types, may be the problem, in that it inserted another step (I guess): it did its arithmetic in some other place in memory and then, after the increment, it decided that everything it needed to do with the statement had been done. It then copies the value it stored (in an address other than that of i) into i. If I'm correct, this behaviour is an artifact of the managed memory - and is incorrect according to how the code was written. I will say this - it's very interesting and (to me) a sound warning that C#'s behaviour can be (apparently) unpredictable.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"How do you find out if you're unwanted if everyone you try to ask tells you to go away?" - Balboos HaGadol -
Hi all, What would you consider a good question to ask a potential programmer, to see whether he/she is worth their salt? Thanks in advance for any suggestions. Regards,
The only programmers that are better than C programmers are those who code in 1's and 0's :bob: :) Programm3r My Blog: ^_^
My lead and I always ask "how would you send data between two processes?" For us, if you don't know Win32 and multi-threading, we don't want to waste our time. My broader point is to ask questions directly related to why you need this person. Ask for specific examples of them solving the same types of problems you need solved. Pay attention to the basic vocabulary, not necessarily the solution itself (which may have caveats that don't apply to your situation). (When I am the one being interviewed, if they don't ask targeted questions like this, I question whether they know why they're hiring someone. Several times, it's become apparent that they wanted someone with a different skill set, but were desperate, clueless or both.)
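To make that concrete, here is one possible answer sketched in C# using named pipes; the pipe name, message text, and class name below are made up for illustration, and shared memory, sockets, or window messages would be equally valid Win32-flavoured answers:

using System;
using System.IO;
using System.IO.Pipes;
using System.Threading;

class PipeDemo
{
    static void Main()
    {
        // Server and client would normally live in separate processes;
        // a thread is used here only to keep the sketch self-contained.
        var server = new Thread(() =>
        {
            using (var pipe = new NamedPipeServerStream("demo_pipe"))
            {
                pipe.WaitForConnection();
                using (var reader = new StreamReader(pipe))
                    Console.WriteLine("server got: " + reader.ReadLine());
            }
        });
        server.Start();

        using (var pipe = new NamedPipeClientStream(".", "demo_pipe", PipeDirection.Out))
        {
            pipe.Connect();                       // blocks until the server end is available
            using (var writer = new StreamWriter(pipe))
            {
                writer.WriteLine("hello from the other process");
                writer.Flush();
            }
        }
        server.Join();
    }
}

In an interview the interesting part of the answer is naming the mechanism and its trade-offs (pipes vs. shared memory vs. sockets vs. messages), not this particular code.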
Anyone who thinks he has a better idea of what's good for people than people do is a swine. - P.J. O'Rourke
-
Programm3r wrote:
What would you consider a good question to ask to a potential programmer to see whether he/she is worth their salt?
Some of common Microsoft interview questions[^] are pretty good.
I disagree. I find those questions pointless. They tell you nothing, but we all pretend they do. (One reason they drive me nuts is that, for whatever reason, I need to be using a computer for my mind to get into the right flow of programming. I've always been that way--I just don't write good code on paper/whiteboard. [Note that I said code, not algorithms; it just occurred to me that asking the person to outline the algorithm for a task is much more useful.])
Anyone who thinks he has a better idea of what's good for people than people do is a swine. - P.J. O'Rourke
-
Balboos wrote:
I will say this - it's very interesting and (to me) a sound warning that C#'s behaviour can be (apparently) unpredictable.
It's hardly unpredictable if it's specified:
14.5.9 Postfix increment and decrement operators
…
The run-time processing of a postfix increment or decrement operation of the form x++ or x-- consists of the following steps:
If x is classified as a variable:
a) x is evaluated to produce the variable.
b) The value of x is saved.
c) The saved value of x is converted to the operand type of the selected operator and the operator is invoked with this value as its argument.
d) The value returned by the operator is converted to the type of x and stored in the location given by the evaluation of x.
e) The saved value of x becomes the result of the operation. -
My favorite interview moment: I was with a bunch of random developers at Citrix Draper. One guy asked, "Show me how you would reverse a string in C++." I stood, went to the whiteboard and wrote "strrev(pStr);" To this day, I not only think it's the right answer, but it's the answer I want to see, since it shows common sense. (In the end, I wrote a silly version of string reverse.) Annoying interview moment: I was asked to write some code. I did. I made an error. The interviewer corrected my error, but in doing so, made an even worse error.
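For comparison, a hedged C# sketch of the same "use the library" instinct (the class, method, and sample string are made up for illustration):

using System;

static class StringDemo
{
    // The C# analogue of the common-sense strrev() answer
    static string Reverse(string s)
    {
        char[] chars = s.ToCharArray();
        Array.Reverse(chars);          // let the framework do the work
        return new string(chars);      // (ignores surrogate pairs, much as strrev ignores multi-byte encodings)
    }

    static void Main() => Console.WriteLine(Reverse("interview")); // prints "weivretni"
}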
Anyone who thinks he has a better idea of what's good for people than people do is a swine. - P.J. O'Rourke
-
If the "approval" for answers and the english correction for the whole site is the same as in this question[^] I wouldn't trust on it. What is a "arithematic progression" and a "gemotric mean". I know about "arithmetic progression" and "geometric mean". BTW... even the "typos" how can be the therms in an arithmetic progression bigger than the result as they are in the "approved" answer? And another jewel... see[^]
Greetings. -------- M.D.V. ;) If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about? Help me to understand what I'm saying, and I'll explain it better to you “The First Rule of Program Optimization: Don't do it. The Second Rule of Program Optimization (for experts only!): Don't do it yet.” - Michael A. Jackson Rating helpfull answers is nice, but saying thanks can be even nicer.
modified on Friday, May 23, 2008 12:41 PM
-
Brady Kelly wrote:
The saved value of x becomes the result of the operation.
Except, apparently, in C# under the condition of evaluating: i = i++; The description above describes what it should do. The value saved in (b) is, as I understand it, replaced by (d) via (e). That's why i's final value evaluates to 1 in C++ - and why I hypothesize that, as the entire statement is evaluated, its value is stored in a scratch area. This is restored just before execution of the next statement. In the case of that rather odd code example, it overwrites the (currently) correct value with old data. Then the LHS is updated with stale data after its business should already be complete, and is (potentially) incorrect. This begs another question: is giving a potential employee a trick question, where the 'correct' answer varies by language, and in C# is arguably wrong, a good question?
while(weekend) enjoy++;
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"How do you find out if you're unwanted if everyone you try to ask tells you to go away?" - Balboos HaGadol -
My favourite exercise for a programmer is to write code to calculate a (ten-pin) bowling score from a sequence, e.g. 91XX729-81XX9-91XX. It demonstrates to me a number of things:
- understanding of a moderately complex, but well-defined, algorithm
- the need to backtrack
- coding style
- ability to optimize
- error checking of data
If they get stuck, the input format can be made a bit clearer, e.g. 9 1 X X 7 2 9 0 8 1 X X 9 0 9 1 X X. It's slightly amazing to me that some people who claim to understand the rules (and can talk them through) can't even start to implement this in code! Bonus points for someone who writes unit tests, or a quick and dirty "GUI" (even if it's through the console). More bonus points if they can implement an interactive version that can compute the "best now attainable" score. I tend to time-limit the task, but that is long enough to be an insightful period! I just wish I could get my company to formally adopt "testing" in _all_ interviews for programming positions - I'm sick of programmers employed by others getting drafted into my project who wouldn't have made my cut!
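A rough C# sketch of one way to score the space-separated form, assuming 'X' marks a strike, '-' or '0' a miss, and that tenth-frame bonus rolls are included in the input. The class and method names are made up, and this is an illustration of the algorithm rather than a reference solution for the exercise:

using System;
using System.Linq;

static class Bowling
{
    // Convert "9 1 X X 7 2 9 0 ..." into pin counts per roll.
    static int[] ParseRolls(string game) =>
        game.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(t => t == "X" ? 10 : t == "-" ? 0 : int.Parse(t))
            .ToArray();

    static int Score(int[] rolls)
    {
        int score = 0, roll = 0;
        for (int frame = 0; frame < 10; frame++)
        {
            if (rolls[roll] == 10)                         // strike: 10 + next two rolls
            {
                score += 10 + rolls[roll + 1] + rolls[roll + 2];
                roll += 1;
            }
            else if (rolls[roll] + rolls[roll + 1] == 10)  // spare: 10 + next roll
            {
                score += 10 + rolls[roll + 2];
                roll += 2;
            }
            else                                           // open frame
            {
                score += rolls[roll] + rolls[roll + 1];
                roll += 2;
            }
        }
        return score;
    }

    static void Main()
    {
        // Sanity checks: a perfect game scores 300; all nines-and-misses scores 90.
        Console.WriteLine(Score(ParseRolls("X X X X X X X X X X X X")));                  // 300
        Console.WriteLine(Score(ParseRolls("9 - 9 - 9 - 9 - 9 - 9 - 9 - 9 - 9 - 9 -")));  // 90
    }
}

Parsing and scoring are kept separate so the scoring loop can be unit-tested on plain pin counts, which is roughly what the "bonus points" above are asking for.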
Regards, Ray
-
Balboos wrote:
Except, apparently, in C# under the condition of evaluating: i = i++;
No, C# does exactly what the quoted spec states it will do. The RHS is 'the result of the operation'. In (b), the value of x is saved. In (e) the saved value becomes the result of the operation, i.e. the RHS is the saved value of x, not the value of x after (c) and (d) occur. While you can disagree with asking about such an apparently idiosyncratic implementation of ++, you cannot seriously assert that any valid implementation of C# would flagrantly disregard this part of the spec. I would never seriously ask anyone this question in an interview, and I should have used a joke icon, but if somebody told me they knew C# inside out, I would bring something like this up.
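As a rough trace (using the i from the example), the spec steps map onto i = i++; like this:

int i = 0;
// i = i++; proceeds as:
//   (a) the RHS variable i is identified
//   (b) its value, 0, is saved
//   (c) the ++ operator is invoked on the saved value, producing 1
//   (d) 1 is stored back into i        -> i is momentarily 1
//   (e) the saved value 0 becomes the result of i++
// finally, the assignment stores that result, 0, into i
i = i++;
Console.WriteLine(i);   // prints 0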
-
I'm not arguing with you* - just with the idiosyncratic idiotic implementation of the ++ operator. Although I realize I'm being redundant, a reasonable person would expect the value of i to be incremented after applying ++. This is a standards-driven gotcha! My contention is (as it plays out in C++, for example) that the implementation rules should be changed for consistency across all possible known cases. Perhaps no one checked for this possibility when writing the standard [and indeed, who would], but if unexpected behavior results from [is discovered for] a standard (and you can argue it is not unexpected according to the language standard), then the standard should be re-evaluated. Basically, the operation will work, with i being incremented, unless that one scenario you gave is invoked, on which occasion it is not incremented - at least pragmatically not incremented. It reminds me of the mathematical proof that 1 = 2, which is perfectly logical in all its operations, but underlying it is division by zero. In this case, the gotcha is done by the rules of memory management for the operator ++ - except that, despite its eccentric behavior, it has not yet been called on the carpet for an accounting of itself. *How can I? The result in C# is, indeed, 0.
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"How do you find out if you're unwanted if everyone you try to ask tells you to go away?" - Balboos HaGadol -
Balboos wrote:
unless that one scenario you gave is invoked
I didn't give that one scenario for nothing! ;)
Semicolons: The number one seller of ostomy bags world wide. - dan neely
-
Balboos wrote:
Although I realize I'm being redundant, a reasonable person would expect the value of i to be incremented after applying ++ . This is a standards driven gotcha!.
I wouldn't call it that. The C standard was designed to allow for efficient implementation on a wide variety of architectures. On some architectures, it would be most efficient to perform an increment immediately. On others, it would be more efficient to defer the increment until after the assignment. The C standard explicitly specifies that while implementations must perform all actions before a sequence point before doing any actions after, operations between sequence points may be performed in almost any order that is logically possible. There are a few guarantees, but not many.
I would expect that the explicit definition of behavior under C# was driven by a desire to avoid undefined behaviors. It's not really possible to detect all places where a program modifies rvalues in places where it "shouldn't"; the only practical way to avoid having such things cause undefined behavior is to define precisely what they will do.
I can't think of any sensible language definition that would require the "++" to take place after the assignment that wouldn't be full of lots of confusing cases. Requiring that it take place before the assignment allows for consistent behavior in all cases. In some cases, however, it forces implementations to be less efficient than they otherwise might be. For example, requiring that a statement like "*p = (*q)++;" must latch the value of *q, then increment it, and then write the latched value to *p will result in sub-optimal code on many architectures. The C and C++ standards would allow the compiler to optimize the code for that situation without requiring any particular outcome if p==q.
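A small C# illustration of that last point, with made-up array names: because C# pins down the order, even the aliased case is deterministic, whereas the analogous C statement with p == q modifies the same object twice without an intervening sequence point and is undefined.

using System;

class AliasDemo
{
    static void Main()
    {
        int[] a = { 5 };
        int[] b = a;             // b refers to the same array: the p == q case
        b[0] = a[0]++;           // fully defined in C#: a[0] briefly becomes 6,
                                 // then the saved value 5 is stored into b[0] (the same element)
        Console.WriteLine(a[0]); // prints 5, on every conforming implementation
    }
}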
-
Let me recast my point of view so that you'll see why I dislike the behavior. Chickens and eggs: which was created first, the ++ operator or its method of implementation? Without any references, I'd be willing to bet that the operator came first - the standards on how it should be implemented then followed, and apparently in language-dependent incarnations (and, if I understood you correctly, with some compiler dependence). My opinion, as restated, is that the operator's purpose was to cause the value of its target object to be increased by one unit. If an implementation standard causes it to behave otherwise, it has subverted the concept of the ++ operator. The tail is wagging the dog. The operator's effect is expected to be,
for any i, i++ will ultimately result in i = i+1
If we use the operator in a less eccentric manner:
int i = 0;
int j = i++;
then we'd get 'my' expected result, which is i = 1. In C#, we have the caveat that j must not be the same address as i. Now, if an arithmetic operation gives you a certain result ALMOST all the time - can you really trust it? You could, of course, argue that 'this is the same result all the time' based upon the specifications - but this goes back to my assertion that the behavior of the operator should drive the standards - and not vice versa - even in very weird cases.
I sort of feel like I'm morally and ethically correct, but the law is squarely on your side. (for now!)
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"How do you find out if you're unwanted if everyone you try to ask tells you to go away?" - Balboos HaGadol -
Balboos wrote:
but this goes back to my assertion that the behavior of the operator should drive the standards - and not vis-versa - even in very weird cases.
So how would you like to see the order of operations defined? Be very specific.
Not intended as a cop-out, but I don't want to concern myself with the order of storage and retrieval when I'm coding - unless I'm doing inline assembler. All I want to be sure of is that if I have i++ in a statement, then the value of i will always have increased by 1 by the next statement.* I want this to be as sure as if I had coded i = i+1; between the statements.** The rest of the details I defer to those who specialize in compiler design.
* Clearly I'm not talking about f(i++), etc., wherein i may be affected somehow in f().
** Unless that, too, fails on special occasions in C# (cynical interlude)
"The difference between genius and stupidity is that genius has its limits." - Albert Einstein
"How do you find out if you're unwanted if everyone you try to ask tells you to go away?" - Balboos HaGadol -
Balboos wrote:
All I want to be sure of is that if I have i++ in a statement then the value of i will always have increased by 1 by the next statement.*
It will have increased by one before the next statement. In C#, the assignment will then reassign the old value, but the variable will nonetheless have (briefly) been increased by one. There is no reasonable way to define the semantics so that the assignment statement will write anything other than the old value of i; I also see no reasonable way to define the semantics to guarantee that the increment will happen after the assignment. Certainly in a statement like "i = i++ ? 0 : 1;" it is guaranteed that the increment must happen before the assignment.
Is there any reason why code should ever do something like "i = i++;"? As far as I'm concerned, the only reason any programmer should ever care about the effects of such a statement would be a desire to eliminate all non-deterministic behavior. Such interests are not always merely academic; they can be useful when testing a variety of implementations for consistency: if two versions of a compiler generate identically-behaving code from a million-line morass of weird, goofy, semi-random nonsense, that would tend to suggest that the later version didn't pick up any bugs not present in the former. If the compiler output were allowed to behave non-deterministically in some cases, the program that generates the test code would have to be written to avoid those cases, and might thus also be more prone to miss some weird boundary cases that should be deterministic even though they're similar to non-deterministic cases.