History of compiler optimization: function inlining
-
Compilers can perform various optimizations; even the first Fortran compiler included that capability. Function calls carry a small performance cost, so inlining functions is one possible optimization. The C# compiler can do this, and when it misses such an opportunity you can apply the AggressiveInlining attribute. I'd like to know when the first compiler capable of function inlining was introduced, and how common inlining is nowadays beyond the .NET world.
Oh sanctissimi Wilhelmus, Theodorus, et Fredericus!
-
Pretty sure inlining is widespread; it had been around for decades before .NET.
-
It was certainly in C++ in the early nineties; there are books from that period which describe it: The Advanced C++ Book, Skinner, M. T. (1992). Silicon Press. ISBN 978-0-929306-10-0. And I recall an inline keyword in my C compiler from the eighties, though it wasn't added to the C standard until C99. I know that many compilers do inlining without being prompted (Java, for example), but it depends to an extent on the module type and the function's visibility. If a function is visible outside its module, it's harder to inline: it must still exist as a "proper" function / method so that external code can call it.
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
AntiTwitter: @DalekDave is now a follower!
-
By 1984 I was using a "globally optimising" Fortran compiler at Perkin-Elmer (née Interdata, later Concurrent). One of its tricks was function inlining fairly early in the compilation process, so redundant code could be stripped and register allocation done more intelligently. It was fairly novel for its time and its application space ("superminis"), but I'm pretty sure it wasn't the first. Cheers, Peter
Software rusts. Simon Stephenson, ca 1994. So does this signature. me, 2012
-
Inlining is ages old, but in the old days there were compilers / debuggers that couldn't handle it properly: if you wanted to step line by line through an inlined function, you had to turn the feature off while debugging. Maybe this applied only to a few compilers; I don't know whether the limitation was in the compiler, the debugger, or the debug format - it could be any of them.

There were other optimizing features that you had to turn off while debugging. E.g. an optimizer may detect that you are doing the same calculation twice, with the arguments unchanged from the first calculation to the second. So it decides to store the first result in a register or temporary variable and skip the second calculation. Now you set a breakpoint in the middle of the second calculation (but not in the first) - and there is no code where the breakpoint can be inserted! It takes some juggling of instructions and breakpoint analysis to handle such situations.

A (true) story from the old days - not directly connected to optimization, but illustrating similar issues. This debugger could single-step at the function-call level (which was more useful than you might think - I wish we had it in modern debuggers!), or at the source-line level. There was a fatal crash in a 2000-line(!) function when stepping call by call; with line-level stepping enabled, the code worked perfectly fine. It took some effort to discover why, and it illustrates the issues in making a debugger. In line mode, the debugger replaces the first instruction generated by each source line with a BPT instruction, saving the original instruction in its own buffer. When the BPT is reached, the original instruction is re-inserted into the code, the PC is decremented to execute the same address again (this time with the original instruction rather than the BPT), the CPU is told to single-step at the machine-instruction level, and the BPT is re-inserted for the next time this point is reached.

As so often is the case, the culprit was a wild pointer writing into code memory (the machine did not have separate address spaces for code and data), overwriting an instruction. But in line mode the debugger had a saved copy of that instruction, protected from the wild pointer. So the debugger replaced the destroyed instruction with the correct one - and that is why the code didn't crash in line-stepping mode.