Speed up compilation by ramdisk?
-
Hello, in the good old Atari days I used to love my ramdisk. On PC this doesn't seem to be a serious topic (I wonder why). Compilation time is the dominant bottleneck for most C++ programmers. So, in times of gigabytes of RAM: why not use a ramdisk to reduce the heavy amount of hard-disk access? Has anyone experience in setting up such an environment? (Which ramdisk? How large? What moves to the ramdisk: the temp dir? The PCH? The whole project files? How to set this up in VC++? Expected speed gain?) Thanks, Christof
-
Christof Schardt wrote:
On PC this doesn't seem to be a serious topic
The current generation of PCs and Windows has better overall performance when any 'extra' available RAM is left to the operating system. Windows uses a portion of free RAM as a disk cache (there's the RAM disk you wondered about). The advantage today is that the OS manages it for you, and you don't have to worry about copying files back and forth.
There are a number of things you can do to improve build performance. Make sure that any header files that don't change often are included in your precompiled headers. Break your projects into smaller pieces that are linked into the final application as object libraries (.LIBs), or even at runtime as DLLs. The objective here is to reduce the amount of code that gets recompiled when you make changes, rather than to decrease the overall compile time itself. Most of the time you are working on a small portion of the code and don't need to recompile the whole thing.
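As a sketch of the precompiled-header advice above: a typical VC++ project collects its stable includes in one header that every source file includes first (the file name and contents here are illustrative, not from the original posts; /Yc and /Yu are the standard VC++ switches for creating and using the .pch file):

```cpp
// stdafx.h (illustrative name) -- a typical VC++ precompiled header.
// Collect headers here that rarely change; the compiler parses them
// once (/Yc creates the .pch, /Yu reuses it) instead of re-parsing
// them for every source file in the project.
#pragma once

#include <vector>    // standard library headers are large and stable
#include <string>
#include <map>

// Stable project-wide headers can go here too, e.g.:
// #include "platform.h"   // hypothetical, rarely-changing header
```

Headers for code under active development should stay out of this file, since any change to it forces the .pch, and therefore everything, to rebuild.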
Software Zen:
delete this;
-
Christof Schardt wrote:
On PC this doesn't seem to be a serious topic (I wonder why).
I used a RAM disk extensively back in the MS-DOS and Windows 3.x days. It was a real time saver for compilations.
"Take only what you need and leave the land as you found it." - Native American Proverb
-
You can also try using
#pragma once
in your header files so they are not parsed multiple times, even if they are included multiple times. I also find that keeping my hard disk defragmented helps a lot. If you run a virus scanner with 'on access' scanning, exclude the folders containing your source files. Disable output options you don't use, such as the class browser (I don't use it, and not generating the .BSC file for each project speeds up builds).
-
Of course I have an include-guard in all my headers. BTW: Is there a difference between
#pragma once
... // my code
and
#ifndef XY_INCLUDED
#define XY_INCLUDED
... // my code
#endif
or is it just an MS shorthand for the latter? Thanks anyway for your additional hints. Christof
-
I do not know for certain. My suspicion is that
#pragma once
causes the file to be added to a table of files that have already been processed, whereas
#ifndef
does not necessarily preclude reparsing of the entire header, since an #ifndef/#endif pair does not necessarily enclose the entire file. The
#pragma once
, however, seems to say: "hey, once you read this file for this compilation unit, don't bother reading it again". I add
#pragma once
when I know a header will never require reparsing within a single compilation unit, though I prefer to write headers that do not depend on such reparsing in the first place.