To Link or Not To Link? (Question of Efficiency)
-
General architectural question: from what I've been able to gather, there are basically three approaches to handling code that is reusable across multiple projects:

1. Dynamic linking: build a DLL and use API functions like LoadLibrary() and GetProcAddress() to link to the desired functionality at run time.
2. Static linking: build a DLL and use a .lib file to establish the external proc addresses at compile time.
3. Static library: build a LIB only and use it as input that becomes part of the .exe file for a given project.

It seems pretty obvious that #1 would represent the worst execution time because of the need to look up a proc address before calling it. But the issue seems a little more fuzzy between #2 and #3. Is there a significant advantage to one over the other in general? What about in high-demand applications such as video games or real-time simulators, or where a library function is expected to be called dozens of times per second? Thanks for your help.
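For reference, here is roughly how I understand the mechanics of each approach; all library and function names below are made up:

```cpp
// Approach #1: explicit run-time linking ("mathlib" and "Add" are invented).
#include <windows.h>
#include <cstdio>

typedef int (*AddFn)(int, int);  // signature of the exported function

int main()
{
    HMODULE lib = LoadLibraryA("mathlib.dll");      // load at run time
    if (!lib) return 1;

    AddFn add = (AddFn)GetProcAddress(lib, "Add");  // look up the address
    if (!add) { FreeLibrary(lib); return 1; }

    std::printf("%d\n", add(2, 3));                 // call through the pointer
    FreeLibrary(lib);
    return 0;
}

// Approaches #2 and #3 look identical at the call site: include the header,
// link mathlib.lib, and call Add(2, 3) directly. For #2 the .lib is a thin
// import library and the code stays in the DLL; for #3 the .lib holds the
// code itself and is copied into the .exe.
```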
-
I think that in performance terms there is very little to choose between #2 and #3. The trade-off comes when you have lots of apps running on your machine: with a DLL you have only one copy of each library function in memory, while with a static library you have one copy for each app.
Unrequited desire is character building. OriginalGriff
-
Xpnctoc wrote:
It seems pretty obvious that #1 would represent the worst execution time because of the need to look up a proc address before calling it.
Without a context that is meaningless. For starters, .NET and Java always do dynamic loads. Second, for most business applications performance is impacted much more significantly by requirements, architecture and design. Third, some business-required functionality cannot be implemented without dynamic loads; for example, hot loading a 24x7 server.
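A hot load is basically the explicit-link pattern done repeatedly; a rough sketch (the export name is invented, and a real server would also drain in-flight calls before unloading):

```cpp
#include <windows.h>

typedef void (*HandlerFn)(const char* request);

static HMODULE   g_module  = nullptr;
static HandlerFn g_handler = nullptr;

// Swap in a new build of the handler DLL while the server keeps running.
bool HotLoad(const char* dllPath)
{
    HMODULE fresh = LoadLibraryA(dllPath);
    if (!fresh) return false;

    HandlerFn fn = (HandlerFn)GetProcAddress(fresh, "HandleRequest");
    if (!fn) { FreeLibrary(fresh); return false; }

    HMODULE old = g_module;
    g_module  = fresh;          // publish the new version
    g_handler = fn;
    if (old) FreeLibrary(old);  // release the old version
    return true;
}
```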
-
Xpnctoc wrote:
It seems pretty obvious that #1 would represent the worst execution time because of the need to look up a proc address before calling it.
During initialization you create a method pointer, say, a delegate; during execution you just fire it. That can be pretty darn fast.
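In plain C++ the same idea looks something like this; you pay the lookup cost once at startup, and each call afterwards is just an indirect call (library and export names invented):

```cpp
#include <windows.h>

typedef double (*StepFn)(double dt);

static StepFn g_step = nullptr;  // resolved once, reused every frame

bool InitPhysics()
{
    HMODULE lib = LoadLibraryA("physics.dll");     // one-time load
    if (!lib) return false;
    g_step = (StepFn)GetProcAddress(lib, "Step");  // one-time lookup
    return g_step != nullptr;
}

void Tick(double dt)
{
    if (g_step) g_step(dt);  // per frame: just an indirect call
}
```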
Xpnctoc wrote:
But the issue seems a little more fuzzy between #2 and #3. Is there a significant advantage to one over the other in general?
If your requirements demand that kind of speed, you'd be best off using QNX.
Bastard Programmer from Hell :suss:
-
Well I guess I could have been a little clearer, but since, as you said, .NET and Java always do dynamic loads, that's obviously not what I'm talking about. I'm talking about plain old C++. I also did specifically mention video games and real-time simulations.
-
OriginalGriff wrote:
with a DLL you have only one copy of each library function in memory, while with a static library you have one copy for each app
That's kind of what I thought. Does having a larger executable result in slower app launch time? And when exactly do statically linked DLLs get loaded -- on application start, first call, etc.?
-
Xpnctoc wrote:
Does having a larger executable result in slower app launch time?
Probably, but unless you are loading and unloading thousands of times a minute it is unlikely to be an issue.
Xpnctoc wrote:
When exactly do statically linked DLLs get loaded
For implicit linking (the import-library case) the DLL is mapped when the process starts, as far as I am aware; you only get load-on-first-call if you use the linker's delay-load support.
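If you actually want load-on-first-call, MSVC's delay-load machinery is the usual route; a rough sketch with a made-up DLL name:

```cpp
// Delay-loading with MSVC: link against the import library plus
// delayimp.lib and pass /DELAYLOAD, e.g.
//
//   cl client.cpp mathlib.lib delayimp.lib /link /DELAYLOAD:mathlib.dll
//
// The helper then runs LoadLibrary/GetProcAddress on the first call
// into the DLL instead of at process start ("mathlib" is invented).
#include "mathlib.h"   // hypothetical header declaring Add()

int main()
{
    return Add(2, 3);  // this first call triggers the actual load
}
```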
Unrequited desire is character building. OriginalGriff
-
Xpnctoc wrote:
I also did specifically mention video games and real-time simulations.
As far as I know, video games do in fact use dynamic calls. Could be mistaken though. Presumably you are familiar with 'video drivers' on PCs? The things that directly drive all the video on the box. They are pluggable components in the OS.
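For what it's worth, a game selecting a renderer at startup is the same pluggable pattern; a rough sketch with invented names:

```cpp
#include <windows.h>

typedef void (*RenderFrameFn)();

// Pick a rendering back end at run time, the same way drivers plug in:
// each back end is a DLL exporting the same entry point (names invented).
RenderFrameFn LoadRenderer(bool useD3D)
{
    HMODULE lib = LoadLibraryA(useD3D ? "renderer_d3d.dll"
                                      : "renderer_gl.dll");
    if (!lib) return nullptr;
    return (RenderFrameFn)GetProcAddress(lib, "RenderFrame");
}
```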