What strikes me is that if developers (even experienced ones) are still making this "mistake", it points to a conflict between how developers think and how the compiler "thinks". In other words, it doesn't necessarily mean that the developer doesn't know what they're doing. When I design and write methods, I usually assume that the object I'm working with is alive and non-null, yet later on I may set a certain property or field to null because it no longer applies. If it no longer applies, then neither does any operation that would affect that object, so even if I didn't originally intend or expect to set that property or field to null, the fact that I later decide to shouldn't break my existing code that assumes it is alive!

To give a concrete example, suppose I have a logging window that keeps track of debugging output while I'm running my program. Throughout development this window is always open, or at least exists and is perhaps invisible. But when I start the optimizing phase, I no longer want to keep that debug window around, consuming resources the end user doesn't need. In my code, however, I originally assumed it was alive and did something like this:

myApp.DebugWindow.WriteLine( "foo" );

Sure, you could argue (and many will) that this is poor programming and that I should precede every such statement with "if ( myApp.DebugWindow != null )" (see the first sketch at the end of this post), but frankly, in my opinion, that is just extra work that doesn't affect the intended behavior of the program. If I set myApp.DebugWindow to null, then, at least when running in release mode, any null reference exceptions it causes should simply be absorbed by the runtime. There is no excuse for crashing the whole application over something as silly as forgetting to prefix a statement with an "!= null" check. Granted, there are ways to get this behavior today, by installing a handler, checking for null reference exceptions, and explicitly marking them as handled (which might in fact be good practice in general; see the second sketch below), but again this is more work, and the whole point here is to make the language much friendlier toward imperfectly specified programs.

I know a lot of people will disagree with me on this one, but I'm all about software working as well as it possibly can, and not crashing and disrupting the user over things that aren't important. Everyone strives for "perfect" software, but perfection shouldn't be a requirement imposed by the language.
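For concreteness, here is the guard I'm complaining about next to the null-conditional shorthand that later versions of C# (6.0 and up) added for exactly this pattern. This is a minimal sketch reusing the hypothetical myApp.DebugWindow example from above:

    // The explicit guard: verbose, and easy to forget.
    if ( myApp.DebugWindow != null )
    {
        myApp.DebugWindow.WriteLine( "foo" );
    }

    // The null-conditional form: the call is simply skipped when DebugWindow
    // is null, which is close to the "just absorb it" behavior I'm asking for,
    // but only at call sites that remember to use it.
    myApp.DebugWindow?.WriteLine( "foo" );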
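And here is roughly what the handler approach looks like. This is a minimal sketch that assumes a WPF application (the framework isn't specified above; Windows Forms offers a similar Application.ThreadException event), and it only swallows null reference exceptions in release builds:

    // App.xaml.cs
    using System;
    using System.Windows;
    using System.Windows.Threading;

    public partial class App : Application
    {
        protected override void OnStartup( StartupEventArgs e )
        {
            base.OnStartup( e );
    #if !DEBUG
            // Only absorb null dereferences in release builds; while debugging
            // I still want the exception to surface.
            DispatcherUnhandledException += OnUnhandledException;
    #endif
        }

        private void OnUnhandledException( object sender, DispatcherUnhandledExceptionEventArgs e )
        {
            if ( e.Exception is NullReferenceException )
            {
                // Mark the exception as handled so it doesn't crash the application.
                e.Handled = true;
            }
        }
    }

Note that the method that threw is still abandoned at the point of the exception; the handler only keeps the application itself alive instead of letting it crash.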