About inline asm: jumping through an address table.
-
while (...) { switch (n) { case 8: ...; case 7: ...; case 6: ...; /* ... */ } }
The VC compiler optimizes it into a compare-and-branch chain:

    00007  83 e8 01    sub   eax, 1
    0000a  74 16       je    SHORT $LN2@TestFun1
    0000c  83 e8 01    sub   eax, 1
    0000f  74 0b       je    SHORT $LN3@TestFun1
    00011  83 e8 01    sub   eax, 1
    00014  75 1b       jne   SHORT $LN1@TestFun1
    ...

How can I stop VC from optimizing it like that, and instead have it emit a jump through an address table, like the following?

    jmp   DWORD PTR $LN17@TestFun1[esi*4]
    ...
$LN17@TestFun1:
    DD  $LN10@TestFun1
    DD  $LN1@TestFun1
    DD  $LN2@TestFun1
    DD  $LN3@TestFun1
    DD  $LN4@TestFun1
    DD  $LN5@TestFun1
    DD  $LN6@TestFun1
    DD  $LN7@TestFun1

Alternatively, how can I write my own jump through an address table in inline asm? The problem is that the DD directive can't be used inside an inline asm block. Thanks.
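For context, here is a minimal sketch of the kind of code being described, assuming the loop wraps a dense switch; the function and variable names below are placeholders, not the original TestFun1. With optimizations enabled, MSVC will often lower a dense, consecutive case range like this to an indirect table jump (jmp DWORD PTR table[reg*4]) on its own, and the MSVC-specific __assume(0) in the default case is an extra (assumed, not from the original post) hint that the default branch is unreachable:

/* Sketch only: dense switch inside a loop, MSVC-specific __assume(0). */
int sketch_fun(int n)
{
    int state = 0;
    while (n > 0)
    {
        switch (n & 7)              /* dense, consecutive case values 0..7 */
        {
        case 0: state += 1;  break;
        case 1: state += 2;  break;
        case 2: state += 3;  break;
        case 3: state += 5;  break;
        case 4: state += 7;  break;
        case 5: state += 11; break;
        case 6: state += 13; break;
        case 7: state += 17; break;
        default: __assume(0);       /* MSVC hint: default cannot happen */
        }
        --n;
    }
    return state;
}

Whether the compiler actually chooses a jump table still depends on the optimization settings and its own heuristics; this is only a way to make that outcome more likely, not a guarantee.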
Visual C++ will compile whatever it thinks is most effective, depending on the compiler options you have selected. I typically select /Oxs (Minimize Size), as it tends to produce a smaller binary; in many circumstances this will actually run faster, because more code fits into the processor's caches and less paging typically happens. Second-guessing the compiler often leads to worse performance.

When evaluating optimization choices, it's important to be aware of the differences in both bandwidth and latency between the different types of memory in a modern computer system. See Herb Sutter's Machine Architecture presentation for the Northwest C++ User's Group.

These kinds of micro-optimizations are for when you have already eliminated any possible gains from improving your use of data structures and algorithms, and have already made your data structures as cache-efficient as you can, so your program isn't stalling on CPU wait states. Generally you'll find more improvement by working on one of those areas instead.
DoEvents: Generating unexpected recursion since 1991
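If the goal is simply table-driven dispatch rather than hand-written asm, one portable alternative (not from the thread, and all names below are made up for illustration) is to keep the address table in C as an array of function pointers. The compiler places the table in the data section itself and the dispatch becomes a single indirect call, with no DD directive needed:

#include <stdio.h>

typedef void (*handler_t)(void);

static void do_case0(void) { puts("case 0"); }
static void do_case1(void) { puts("case 1"); }
static void do_case2(void) { puts("case 2"); }

/* The compiler emits this address table in static data. */
static handler_t const table[] = { do_case0, do_case1, do_case2 };

static void dispatch(unsigned i)
{
    /* Bounds check replaces the range test a switch would emit. */
    if (i < sizeof table / sizeof table[0])
        table[i]();
}

int main(void)
{
    dispatch(1);   /* prints "case 1" */
    return 0;
}

The trade-off is an indirect call per handler instead of an intra-function jump, which is usually acceptable unless the handlers are only a few instructions each.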