Even if you limit Moore's Law to CPU performance, as measured by specific CPU benchmarks, there are at least two strong reasons to worry about the performance of underlying library (or, worse, OS) code.

First, Moore's Law tends not to apply to code that isn't strongly cacheable. This covers code, and especially data, that is used in a relatively random manner, such as a library of functions or operations on disparate structures. Furthermore, hardware exception processing and privilege-level changes only make matters worse. I'd think that MFC/ATL on down fall under such conditions. (There's a ten-year-old ACM ASPLOS paper titled something like "Why aren't operating systems getting faster as fast as CPUs?" that discusses this at length. Sorry, it's in my library somewhere, but I've not had a chance to look it up...)

Second, the effect of Moore's Law has been to enable applications that previously couldn't be built at all, or at least not economically. The performance of these applications is a selling point. If the underlying system provides relatively poor performance, those applications will avoid it -- either by building their own infrastructure, or by moving to a platform that provides the performance advantage they commercially need. I'd have thought Microsoft's goals would include ensuring that such applications stayed on their platforms.

By implication, MFC/ATL, and everything below them, should be concerned about absolute performance. How much is a business decision internal to Microsoft, of course.

Jim Johnson
These opinions are my own.