An Idea for Optimizing Class Methods

Forum: The Lounge
Tags: code-review, beta-testing, performance, tutorial, learning
26 Posts, 15 Posters
Jacob F W

First off, let me state that I am not saying everyone should drop what they're doing and start implementing this. This is just an idea I've been thinking about for a while; it seems useful, and I would like to get some feedback on it from my fellow developers.

The idea is to define relationships between methods within a class, to enable the compiler to further optimize your program for speed, size, etc. Here's an example. I've been toying around with a Big Integer class. In this class I have two functions, SetToZero() and IsZero(). In case it isn't obvious, SetToZero() sets the integer to zero, and IsZero() checks whether the integer is zero. What I've noticed is that in several places I will call SetToZero() and then, a function or two later, call IsZero(), even though I haven't changed the integer at all. I could of course write separate methods or pass a boolean value, but why do things the easy way :)

Another example would be to define inverse operations. For instance, if I call Function1, which adds 5, then Function2, which subtracts 5, and nothing changes the object between these two calls, then neither function needs to be called.

There is of course some danger. Some optimizing compilers in the past have optimized away (N << 6) >> 6, even though the final answer isn't the same in every case. But the point here is that the developer can define the relationship, rather than have the compiler guess.

So let me know what you guys think. If you like it or hate it, please let me know why.
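A minimal sketch of the pattern in question, using a hypothetical BigInt interface (the class and helper functions here are assumptions for illustration):

    // Hypothetical BigInt interface; the method bodies live in another
    // translation unit, so the compiler cannot see what they do.
    class BigInt
    {
    public:
        void SetToZero();
        bool IsZero() const;
        // ...
    };

    void step1(BigInt& r) { r.SetToZero(); }

    void step2(const BigInt& r)
    {
        // r has not been modified since step1 ran, so this test is provably
        // true, but without a declared relationship (or cross-module
        // inlining) the compiler must call IsZero and inspect the object.
        if (r.IsZero())
        {
            // ...
        }
    }

A declared relationship between the two methods would let the compiler substitute the known result wherever it can prove the object untouched in between.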

Lost User (#3)

    Jacob F. W. wrote:

    Some optimizing compilers in the past have optimized away (N << 6) >> 6

Name & shame, please. Good god, what a rookie mistake. Made all the worse by that being a common idiom for resetting some upper bits (and for sign-extending sub-word fields, if N is signed). Anyway, this could be interesting. Have you worked it out a little? Proposed a syntax? Made a test implementation?

In reply to Jacob F W's original post:

AspDotNetDev (#4)

      I like it! In theory, the compiler should be able to optimize inverse relations, though that is sometimes not possible (e.g., if it is based on data loaded at runtime). You could actually implement something like this in C# using LINQ. You could add your own operations, then when it comes time to evaluate the result, you get to prune the expression tree when inverse relations are applied. I haven't tried that myself, but I think that's about how that could work. Though, I doubt such an approach would become common unless it were built into the compiler and there were plentiful code samples to make it stupidly easy.
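The post suggests C#/LINQ; as a language-neutral illustration, here is a small C++ sketch of the same pruning idea on a toy deferred-operation list (the Op names and the areInverses table are invented for the example):

    #include <vector>

    // A toy deferred-evaluation pipeline: operations are recorded, and a
    // newly added operation cancels against the previous one when the two
    // are declared inverses.
    enum class Op { Negate, AddFive, SubFive };

    bool areInverses(Op a, Op b)
    {
        return (a == Op::Negate  && b == Op::Negate)  ||
               (a == Op::AddFive && b == Op::SubFive) ||
               (a == Op::SubFive && b == Op::AddFive);
    }

    struct Pipeline
    {
        std::vector<Op> ops;

        void apply(Op op)
        {
            if (!ops.empty() && areInverses(ops.back(), op))
                ops.pop_back();      // prune the inverse pair
            else
                ops.push_back(op);
        }

        long long run(long long x) const   // evaluate only what survived
        {
            for (Op op : ops)
                switch (op)
                {
                case Op::Negate:  x = -x;  break;
                case Op::AddFive: x += 5;  break;
                case Op::SubFive: x -= 5;  break;
                }
            return x;
        }
    };

Applying Negate twice, or AddFive followed by SubFive, leaves the list empty, so run() returns its argument untouched; the pruning happens when operations are recorded, before anything is evaluated.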

      Thou mewling ill-breeding pignut!

In reply to Jacob F W's original post:

Forogar (#5)

        Quote:

        Some optimizing compilers in the past have optimized away (N << 6) >> 6

This isn't new. Way back when dinosaurs roamed the earth and I was helping out at my college's computer center, we were speed-testing a selection of new, optimising COBOL compilers. One of them appeared to be producing amazingly fast code after our first test, but when we examined the resultant machine code, it basically just said "RET". The compiler had optimised away everything in a subroutine that was just a test time-waster: it worked out that if there wasn't any input data and there wasn't any output data, then there wasn't any need to actually do anything, so it ignored the code altogether! Kudos to the (perhaps misguided) genius who wrote this optimisation to recognise such a situation, but it meant we had to change a lot of our testing code to make a valid test! It makes me wonder how often code ends up doing nothing but still remains in place - and how often the aforementioned genius had come across this situation, to make them think of optimising it away.

        - I would love to change the world, but they won’t give me the source code.

In reply to Forogar (#5):

Lost User (#6)

          There's a difference though. Eliminating actually dead code is nice (if sometimes annoying when trying to benchmark something), eliminating "(N << 6) >> 6" is simply incorrect.

In reply to Jacob F W's original post:

Chris Maunder (#7)

            Making this threadsafe would be a challenge. Maybe start with compiler messages instead of actual optimisation?

cheers, Chris Maunder (co-founder, The Code Project; Microsoft C++ MVP)

In reply to Chris Maunder (#7):

Jacob F W (#8)

True, but the idea is not to apply the optimizations everywhere; it's to alert the compiler to this relationship so that it can recognize the situations where the optimizations could be made. Few optimizations work everywhere: context usually plays a big part in determining whether or not to apply them. Threaded code, while possible to handle, would be a situation where the optimizations could simply be skipped.

In reply to Lost User (#3):

Jacob F W (#9)

No, I haven't started working out a syntax or anything. I'm still at the "this seems like a neat idea" stage. :)

In reply to AspDotNetDev (#4):

Jacob F W (#10)

                  I have no experience with LINQ but that sounds like one way it could work.

In reply to Forogar (#5):

Jacob F W (#11)

The goal here wouldn't be to force the compiler to make the optimization everywhere it could, merely to alert the compiler that a relationship exists; in some situations it simply wouldn't be applied. If the value is needed by an operation in between, that is a situation where the optimization can't be made.

// Example 1

void func1()
{
    // ...
    N = ~N;      // N is a MyClass object in scope
    func2(N);
    return;
}

void func2(MyClass P)   // P is passed by value: a copy of the caller's N
{
    P = ~P;
    // ...
}

In Example 1, if operator ~ is declared as its own inverse, neither operation needs to be performed: func1 never reads N after the call, and func2 immediately inverts its copy back to the original value.

// Example 2

void func1()
{
    // ...
    N = ~N;
    func2(N);
    Q = N * 5;   // the inverted N is still needed here
    return;
}

void func2(MyClass P)
{
    P = ~P;
    // ...
}

In Example 2 the inversion would still need to be performed, since Q = N * 5 uses the inverted value, but it could be delayed until after the call to func2, and then only done once.

// Example 3

void func1()
{
    // ...
    N = ~N;
    if (N < 20)       // the inverted value of N is read here...
    {
        Q = N + 18;
    }
    else
    {
        Q = 0;
        ++N;          // ...and may even be modified here
    }
    func2(N);
    return;
}

void func2(MyClass P)
{
    P = ~P;
    // ...
}

In Example 3, the value of N is needed between the two inverse operations, so no optimization can be made. I think most compilers these days have fixed the issues you faced with the COBOL compilers, or at the very least I haven't noticed them. But again, those optimizations are for primitives; classes are a whole other matter. The point is to provide a way for the compiler to recognize these relationships and take advantage of them to speed things up.

In reply to Jacob F W's original post:

Espen Harlinn (#12)

                      Since you are talking about optimizing compilers I'm assuming you're using C++. If you give the compiler a fair chance at optimizing the code, it will probably remove redundant calls to IsZero ...
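A sketch of what "a fair chance" can mean in practice: a hypothetical BigInt (representation assumed) with both methods defined inline, so the optimizer can see their bodies at the call site:

    #include <cstdint>
    #include <vector>

    class BigInt
    {
        std::vector<uint32_t> digits;
    public:
        void SetToZero() { digits.clear(); }
        bool IsZero() const { return digits.empty(); }
    };

    void example(BigInt& n)
    {
        n.SetToZero();
        // After inlining, the compiler sees digits.empty() immediately
        // after digits.clear(), can prove the condition is true, and may
        // fold the branch away entirely.
        if (n.IsZero())
        {
            // ...
        }
    }

Whether the branch is actually folded depends on the compiler and optimization level; the point is only that visibility of the method bodies is what makes the redundancy detectable.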

Espen Harlinn, Principal Architect, Software, Goodtech Projects & Services AS. "Projects promoting programming in 'natural language' are intrinsically doomed to fail." (Edsger W. Dijkstra)

In reply to Espen Harlinn (#12):

Jacob F W (#13)

True, but we're not just talking about removing multiple unneeded calls; we're talking about not calling the function at all. The idea is that if we're able to establish a relationship between SetToZero and IsZero, then when the compiler sees that we call SetToZero and later call IsZero on the same object, without modifying it in between, it could replace the call with a default value.

In reply to Jacob F W's original post:

Mohammed Hameed (#14)

In my view, these optimizations should first happen in a separate tool rather than being integrated into a compiler. Then, after a long period of using the tool live and experimenting with it, if it gives almost 99.9% accuracy, it could be integrated into the compiler.

                          http://authenticcode.com

In reply to Jacob F W's original post:

L Braun (#15)

I am not sure how this could be implemented. If the class code is in a different compilation unit, the compiler cannot optimize the calling program and cannot check any hints. If the class code is known, perhaps by including it, then inlining followed by optimization could probably do this right now. In other words: nice to have, but I do not see any practical way to do it.

In reply to Lost User (#3):

jsc42 (#16)

Definitely a rookie mistake. (N << 6) >> 6 makes an assumption about the number of bits in N; it just clears the top 6 bits, leaving an unspecified number of bits uncleared. It could be simplified to a far more efficient expression that also demonstrates the intent explicitly, e.g., for a 32-bit number: N & 0x03FFFFFF.

In reply to jsc42 (#16):

Lost User (#17)

Indeed. I even made a tool that does that (and more) automatically. On EFnet in #omnimaga, type

haroldbot: unsigned N << 6 >> 6

and it will reply with "[N & 0x3ffffff]".

In reply to Jacob F W's original post:

Stefan_Lang (#18)

You may want to google (or duckduckgo, if you prefer) 'lazy evaluation'. It is the concept of evaluating a complex term symbolically to avoid unnecessary intermediate steps. It's also used as a means to minimize the number of temporary variables holding intermediate results. I've dabbled with the idea somewhat for my linear algebra library, but eventually gave up due to complexity issues and problems capturing all possible cases. More importantly, one of my major reasons for investigating this was avoiding temporaries, and that can now be achieved in a much more elegant way using C++11 move semantics.

What you mention, the elimination of operations that get reversed, is a case I've considered too, but I found it to be so rare in practice that it wouldn't matter, performance-wise. Another case is the simplification of a combined operation: e.g., if I'm only interested in the 2nd coordinate of M*V, where M is a 3x3 matrix and V a 3x1 vector, then I don't need to calculate the whole result of M*V; it suffices to calculate row(M,2)*V, where row(M,i) is the ith row of the matrix M. Again, I found this case to be rare (or in fact non-existent) in practice, so I gave up on attempting to optimize it. A sketch of this case follows below.

The questions you should ask yourself are:
1. How likely is it in practice to find expressions or sequences of operations that can be optimized in the ways you describe?
2. How much performance overhead do you need to add per basic operation to be able to catch such cases?
3. Even if you can gain more performance (1) than you need to invest (2), would the user of your application even notice?

I'm pretty sure the answer to 3 is always 'no'.
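An illustration of the combined-operation case with plain array types (illustrative only, not from the library mentioned):

    #include <array>

    using Mat3 = std::array<std::array<double, 3>, 3>;
    using Vec3 = std::array<double, 3>;

    // Full product M*V: 9 multiplications, 6 additions.
    Vec3 mul(const Mat3& M, const Vec3& V)
    {
        Vec3 r{};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                r[i] += M[i][j] * V[j];
        return r;
    }

    // If only coordinate i is needed: row(M,i)*V, i.e. 3 multiplications.
    // (i is 0-based here, so the "2nd coordinate" above is i == 1.)
    double coord(const Mat3& M, const Vec3& V, int i)
    {
        return M[i][0] * V[0] + M[i][1] * V[1] + M[i][2] * V[2];
    }

coord(M, V, 1) does 3 multiplications instead of the 9 that the full product needs; the simplification is exactly what a lazy, symbolic evaluation scheme would have to discover automatically.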

In reply to Jacob F W's original post:

Kirk Wood (#19)

The thing I would say is that this kind of optimization would rely on usage patterns, so to truly be efficient, your compiler would need to profile usage. Still doable, but the bar is being raised. The real fun is that what you are describing would, in most of the programming world, be a library function called from various programs, so the usage profile could vary; you would then need separately compiled programs.

In reply to Stefan_Lang (#18):

Jacob F W (#20)
1. I see them often enough that it started to bother me. The project I'm working on, a BigInteger class, has functions that require checking all of the data. This means the processor has to move large chunks of data each time it performs one of these functions, even in situations where I can look back at the previous function and see that it isn't needed.

2. I don't think it would demand that much more of the compiler. As Espen Harlinn pointed out, the compiler can already detect redundant function calls. All we would be doing is pointing out more redundant calls.

3. Agreed, the user would likely not notice, but then again, unless you come out with a serious bug fix or a huge improvement, when do they ever? How many users are aware of improvements to your program other than a new GUI? That doesn't mean it's not better, and if it doesn't put too much of a strain on the compiler or on the developer, that doesn't mean it's not worth doing. Nor am I talking about making huge changes to the source code. All we would really need is an extra line in the class:

class MyClass
{
    // ...
    void Func1();
    void Func2() inverts Func1();   // something similar to this
    // ...
};

Or, in the case of IsZero and SetToZero:

class MyClass
{
    // ...
    void SetToZero();
    bool IsZero() checks SetToZero();
    // bool IsZero() checks SetToZero() { true };   // maybe a default value as well?
};

Again, all it's doing is pointing out more redundant calls to the compiler. Unless you're doing a lot of work with math, I agree that inverse cases are rare, but they do still happen, and in my case, with a BigInt class, they can be costly.

In reply to Jacob F W (#13):

Espen Harlinn (#21)

                                        Jacob F. W. wrote:

                                        the compiler could replace the call with a default value.

                                        It would normally try to eliminate the code - whether it would succeed depends on the complexity of the code ...

Espen Harlinn, Principal Architect, Software, Goodtech Projects & Services AS. "Projects promoting programming in 'natural language' are intrinsically doomed to fail." (Edsger W. Dijkstra)

In reply to Jacob F W's original post:

KP Lee (#22)

                                          Jacob F. W. wrote:

                                          Some optimizing compilers in the past have optimized away (N << 6) >> 6, even though the final answer isn't the same in every case. But the point here is that the developer can define the relationship, rather than have the compiler guess.

Just looking at that, you can see it intends to push the leftmost (highest) 6 bits off the edge and then shift the remaining bits back to their original positions. I can see real repercussions if N is a floating-point type, because that type was never intended to be a bit-mapper, but all integer types should work properly with it. Order of processing is a critical idea that has no business being "optimized" away. What did the failed "optimizer" do? Determine that the bits won't move, so it "optimizes" it into doing nothing? For any given number, the final answer should be the same in every case. How can it not?
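A concrete case where the two expressions differ (unsigned 32-bit arithmetic, illustrative values), which is exactly why eliminating the shifts is wrong whenever any of the top bits can be set:

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        std::uint32_t N = 0xFC000001u;      // top 6 bits set
        std::uint32_t r = (N << 6) >> 6;    // the top 6 bits are shifted off the edge
        std::printf("%08X -> %08X\n", (unsigned)N, (unsigned)r);  // FC000001 -> 00000001
        // N & 0x03FFFFFF gives the same 00000001; a compiler that drops
        // the shifts entirely would instead produce FC000001, a different
        // answer whenever the top 6 bits of N are nonzero.
        return 0;
    }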
