The Illusion of a Choice
-
A coworker of mine found this lovely little gem in .NET 2.0 CF. From the reflected code of ManualResetEvent.WaitOne:
public override bool WaitOne(int millisecondsTimeout, bool exitContext)
{
    if ((millisecondsTimeout < 0) && (millisecondsTimeout != -1))
    {
        throw new ArgumentOutOfRangeException("millisecondsTimeout");
    }
    if (exitContext)
    {
        throw new ArgumentException(null, "exitContext");
    }
    int num = PAL.Threading_Event_WaitTimeout(this.Handle, millisecondsTimeout);
    if (num == -2147483623)
    {
        return false;
    }
    return base.CheckResultInternal(num == 0);
}
Microsoft only gives you the illusion of having a choice for exitContext.
Pay no attention to that code behind the interface.
-
Since all the classes that derive from WaitHandle throw an exception when exitContext is true, it would seem Microsoft could just remove that parameter entirely. In the Compact Framework, Microsoft was notorious for changing or removing signatures to suit their needs; I don't see why they couldn't have done so here as well. It would surely make the code less horrific.
Andrew Rissing wrote:
Since all the classes that derive from WaitHandle throw an exception when exitContext is true, it would seem Microsoft could just remove that parameter entirely.
In general, the real question should be whether some future derived object might want to do something with that parameter. It's not unreasonable for some overridable functions to, e.g., take a parameter of type Object even if none of the initial implementations make any use of it. The situation is in some ways equivalent to some gvprintf() implementations which accept a function pointer to a character-output function that takes a char and a void pointer, along with a void pointer which will be passed to that character-output function. The gvprintf() function never does anything with the void pointer but pass it on, but implementations of fprintf, sprintf, etc. use the pointer to pass a reference to the output file or destination string. In this case, my guess is that Microsoft thought they might someday support exiting a Monitor context while within a WaitHandle's wait routine; having the parameter exist would allow them to add such support without breaking existing code, at least in theory. Of course, if code that holds a lock calls some other code that makes use of ExitContext and the code holding the lock isn't expecting that, problems could easily ensue.
-
A coworker of mine found this lovely little gem in .NET 2.0 CF. From the reflected code of ManualResetEvent.WaitOne:
public override bool WaitOne(int millisecondsTimeout, bool exitContext)
{
    if ((millisecondsTimeout < 0) && (millisecondsTimeout != -1))
    {
        throw new ArgumentOutOfRangeException("millisecondsTimeout");
    }
    if (exitContext)
    {
        throw new ArgumentException(null, "exitContext");
    }
    int num = PAL.Threading_Event_WaitTimeout(this.Handle, millisecondsTimeout);
    if (num == -2147483623)
    {
        return false;
    }
    return base.CheckResultInternal(num == 0);
}
Microsoft only gives you the illusion of having a choice for exitContext.
While the exitContext bit is indeed a bit icky, I find this line upsetting as well -
if ((millisecondsTimeout < 0) && (millisecondsTimeout != -1))
That should probably be replaced with the functionally equivalent (unless I'm mistaken) and more efficient
if(millisecondsTimeout < -1)
-
While the exitContext bit is indeed a bit icky, I find this line upsetting as well -
if ((millisecondsTimeout < 0) && (millisecondsTimeout != -1))
That should probably be replaced with the functionally equivalent (unless I'm mistaken) and more efficient
if(millisecondsTimeout < -1)
I don't have a problem with the original version. In fact I think it's better, because it shows the intent more clearly, i.e. -1 is treated differently from other negative values. :) I'd use the (probably) more efficient version only if performance measurements showed it was worthwhile. :-D
Regards David R --------------------------------------------------------------- "Every program eventually becomes rococo, and then rubble." - Alan Perlis
-
While the exitContext bit is indeed a bit icky, I find this line upsetting as well -
if ((millisecondsTimeout < 0) && (millisecondsTimeout != -1))
That should probably be replaced with the functionally equivalent (unless I'm mistaken) and more efficient
if(millisecondsTimeout < -1)
Don't forget that you are looking at reverse-engineered code here. The original source code probably didn't say -1 but was:
&& (millisecondsTimeout != Timeout.Infinite)
which avoids hard-coding a magic number into the source and makes the intent clearer. The compiler inlines this constant value and replaces it with -1, so when you reverse-engineer the assembly you lose the reference to Timeout.Infinite.
-
A coworker of mine found this lovely little gem in .NET 2.0 CF. From the reflected code of ManualResetEvent.WaitOne:
public override bool WaitOne(int millisecondsTimeout, bool exitContext)
{
    if ((millisecondsTimeout < 0) && (millisecondsTimeout != -1))
    {
        throw new ArgumentOutOfRangeException("millisecondsTimeout");
    }
    if (exitContext)
    {
        throw new ArgumentException(null, "exitContext");
    }
    int num = PAL.Threading_Event_WaitTimeout(this.Handle, millisecondsTimeout);
    if (num == -2147483623)
    {
        return false;
    }
    return base.CheckResultInternal(num == 0);
}
Microsoft only gives you the illusion of having a choice for exitContext.
The Compact Framework does not support synchronization contexts, and so it is incorrect to set exitContext to true, since the CF cannot do what you are asking it to do in this case. The ManualResetEvent interface still has the parameter on this method to support migrated code. Any code which passes false to this method will migrate and run on the CF without needing any rework. Any code which passes true is relying on support for synchronization contexts, and so will need to be reworked before it can be migrated to the CF.
-
The compact framework does not support synchronization contexts, and so it is incorrect to set exitContext to true, since the CF cannot do what you are asking it to do in this case. The ManualResetEvent interface still has the parameter on this method to support migrated code. Any code which passes false to this method will migrate and run on the CF without needing any rework. Any code which passes true is relying on support for synchronization contexts, and so will need to be reworked before it can be migrated to the CF.
The Compact Framework does not support synchronization contexts, and so it is incorrect to set exitContext to true, since the CF cannot do what you are asking it to do in this case.
Does the CF not support synchronization contexts at all? My understanding is that the exitContext parameter says to exit any existing synchronization context before waiting and reacquire it afterward. If no such context can exist, why should a true value of exitContext pose a problem? Code which relies upon synchronization contexts would fail when an attempt was made to create one; if no synchronization context exists, exiting all synchronization contexts should pose no problem.
BTW, with regard to checking for negative values and then for -1: many processors can check for negative numbers more quickly than they can perform general comparisons. The two-part test may save time in the more common case even if it wastes time in the -1 case. Given the likelihood that the -1 was a defined constant, I think that part of the code is reasonable.
-
Andrew Rissing wrote:
Since all the classes that derive from WaitHandle throw an exception when exitContext is true, it would seem Microsoft could just remove that parameter entirely.
In general, the real question should be whether some future derived object might want to do something with that parameter. It's not unreasonable for some overridable functions to, e.g., take a parameter of type Object even if none of the initial implementations make any use of it. The situation is in some ways equivalent to some gvprintf() implementations which accept a function pointer to a character-output function that takes a char and a void pointer, along with a void pointer which will be passed to that character-output function. The gvprintf() function never does anything with the void pointer but pass it on, but implementations of fprintf, sprintf, etc. use the pointer to pass a reference to the output file or destination string. In this case, my guess is that Microsoft thought they might someday support exiting a Monitor context while within a WaitHandle's wait routine; having the parameter exist would allow them to add such support without breaking existing code, at least in theory. Of course, if code that holds a lock calls some other code that makes use of ExitContext and the code holding the lock isn't expecting that, problems could easily ensue.
Why not introduce the additional overload when there actually exists at least one implementation of it? First we'd have this situation:
public abstract class AbstractThing
{
    abstract public void Foo(MyOtherThing t);
}

public class ConcreteThing1 : AbstractThing
{
    override public void Foo(MyOtherThing t) { ... }
}

Then we'd introduce the additional overload, defaulting to the previously defined behavior when our new exitContext parameter is false:

public abstract class AbstractThing
{
    abstract public void Foo(MyOtherThing t);

    virtual public void Foo(MyOtherThing t, bool exitContext)
    {
        if (exitContext)
            throw new NotSupportedException(); //TODO: message ;-)

        Foo(t);
    }
}

public class ConcreteThing1 : AbstractThing
{
    override public void Foo(MyOtherThing t) { ... }
}

At least in this particular case this ought to work fine, and it doesn't break any descendant classes since it doesn't introduce any new abstract members.
-
I don't have a problem with the original version. In fact I think it's better because it shows the intent more clearly i.e. -1 is treated differently to other negative values. :) I'd use the (probably) more efficient version if performance measurements showed it was worthwhile. :-D
Regards David R --------------------------------------------------------------- "Every program eventually becomes rococo, and then rubble." - Alan Perlis
And I thought the compiler optimization would have caught that one, so we'd have the best of both worlds: the more expressive code in our source files, the more efficient code in our assemblies. Apparently it did not, since this source came from disassembly.
-
And I thought the compiler optimization would have caught that one, so we'd have the best of both worlds: the more expressive code in our source files, the more efficient code in our assemblies. Apparently it did not, since this source came from disassembly.
dojohansen wrote:
And I thought the compiler optimization would have caught that one
I have no idea what the instruction timings would be like on mega-pipelined processors, but certainly on older machines one could test whether a value was less than zero (i.e. had the high bit set) more quickly than one could test for its being in a particular range. The two-part test could be faster for the case where the value is non-negative, even if it would be slower for the case where it is negative. If the non-negative case occurs much more often, eliminating the split test will make the code smaller but could cause it to run more slowly.