Has the time come for development on a virtual machine?
-
I've been kicking around the idea of doing future development on a virtual machine once I get out the major release I'm working on now, due in the spring. It's been two years with the current quad-core PC; time to put it out to pasture, or at least wipe the hard drive and start fresh again. My theory: you get a kick-ass fast computer with a 64-bit processor and oodles of RAM, choose a 64-bit host operating system on the hazy criteria that it be the best for VM hosting (fastest to boot? Most efficient? Linux, Windows... not sure), then create a 32-bit virtual machine for general development with whatever is the best operating system for development, and a set of others for testing under each operating system. Plus, since my dev machine is also my main personal-use machine, I guess a separate VM strictly for personal use.
I'm thinking that we've almost reached the point where this is feasible (fast enough), but I'm not sure. Is anyone doing their main development in a VM, and how's the speed by comparison?
"It's so simple to be wise. Just think of something stupid to say and then don't say it." -Sam Levenson
Actually, I changed over to this methodology about six months ago and am very happy. I'm currently running an Intel Q9300 quad-core machine with 4GB RAM and three 250GB 7,200rpm drives in a RAID 1 config. My host OS is Vista 32-bit. My VM of choice is VirtualBox (supported by Sun, and free). I have VMs with XP Pro, Win 98, Ubuntu Linux, CentOS Linux, and Windows Server 2008, and develop with VS2005, VS2008, and Eclipse, in separate VMs of course. I tend to run out of RAM when I run more than two VMs simultaneously :( so I'm thinking of changing over to a 64-bit host with more RAM, but the speed is acceptable even with multiple VMs running on a Vista host. Granted, my applications tend to run less than 50K lines of code, so if you are developing one of those million-lines-of-code monsters, YMMV. BTW, the way VirtualBox works (and probably the others as well), a portion of RAM is sandboxed for each VM, but the VM will take advantage of multiple cores/processors when available.
Great benefits I can see to this: your personal VM is in its own sandbox (unless you change the configuration of your network) and won't interfere with your dev or test VMs. If you test something screwy, just delete the VM, mount a new copy, and you are back up with a fresh install in seconds. Also, if you have enough resources, you can set up several VMs in their own private network with domain controllers, web servers, firewalls, etc. to test in a private connected environment simulating a workplace infrastructure. I have even thought about building a server just to run my VMs and remote desktop/VNC into them from my main PC. Have to consider that expenditure now with the economy the way it is. :)
Almost forgot: there is one downside to using VMs, and this was especially evident with the release of Vista. VMs give you a PC with a limited set of generic hardware, so if all your testing is done in VMs you might run into problems when your apps are released into the wild.
Most of the beta testers of Vista, including me :suss:, tested in VMs, so many problems with the HAL were not discovered until it was released into the wild. Bad on us, and I'm not sure how many realize their part in the poor quality of the Vista release.
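For what it's worth, the "delete the VM and mount a new copy" workflow described above can be scripted from the command line. A rough sketch, assuming a modern VBoxManage on the PATH; the VM names here are hypothetical:

```shell
REM Clone a known-good baseline VM into a throwaway copy and boot it.
VBoxManage clonevm "XP-baseline" --name "XP-test" --register
VBoxManage startvm "XP-test"

REM When the screwy experiment is done, power off and delete the copy;
REM the pristine baseline VM is untouched.
VBoxManage controlvm "XP-test" poweroff
VBoxManage unregistervm "XP-test" --delete
```

The same pattern works with linked clones if disk space is tight, at the cost of keeping the baseline image around.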
-
I do all my development in VMs now, and in many ways performance has improved. My main PC is a quad-core Q6600 (2.4GHz-ish) with 8GB RAM and loads of disk space, running Win 2003 Server 64-bit cut down light (i.e. very little excess baggage installed on the host PC). All my day-to-day work is done in a number of VMware virtual machines which I run with VMware Player 2.5. I originally set up the VMs on the free version of VMware Server, but once you have the hang of it and look around on the net a bit (e.g. sanbarrow.com) you can quite easily set up new VMs for VMware Player from scratch without paying for Workstation. I chose Player over Server and Workstation because it seems to work better with multiple VMs running at once: they end up in clearly separate windows and I can easily have a separate VM running full screen on each monitor. I even "P2V'd" in my old PC that had all the day-to-day debris on it like Office, Outlook, etc., and run that in a VM too.
I have to support some pretty old code (VB6 stuff mainly) and still some VS 2003 code, mainly VS 2005 and some VS 2008. I also do quite a lot of installer work. If you have MSDN then you can do what I did: set up separate VMs for each environment. Mine are all running XP with as little as needed installed and 1GB RAM assigned to each VM; I can happily run 4 or 5 VMs at a time with no real performance issues. Splitting each development environment has had a number of bonuses; not listing everything, a couple are:
1. No more aggro when one env screws up the other (as e.g. VS 2005 seemed to with a parallel VS 2003 install).
2. Turn off the VM of the env you are not using; then there is no overhead until it is needed.
A drawback has been that sometimes that "useful little utility" is not available in the VM you are in, so you either have to install it, or, what I now do where possible, use portable apps (e.g. PortableApps.com) off a network share so I can run them from all VMs without installing (my basic philosophy has been to only install what you REALLY need; I have a SandBox VM to try stuff first to see if I really do need it!).
All the main VM vmdk disks and settings run off one 2.5" 300GB Western Digital disk in an IcyBox caddy that has a docking-station slot in the main PC, attaching the disk via SATA; when undocked it can also be attached by eSATA or USB. This disk is fully encrypted using TrueCrypt. All this extra load (encryption and contention on the single disk) does not seem too much of a problem - yes I am s
-
I wish I could move development over to a VM at work; that way I could install all of the stuff which may mess the machine up in a throw-away environment. Plus I could have an environment set up like the client machine for testing. I recently made a change to a product that threw up an issue on older machines I couldn't test for; if I could have had a VM set up with restricted hardware (e.g. 256MB RAM, 4GB HDD) I could have tested the change and found the issue. There are too many pros and not enough cons in my opinion, so go forth and VM your heart out :)
-
Good stuff James, yes, you should write an article on it if you have time; there is clearly a lot of interest in it here judging by the responses. I'm curious about one thing: you said you need to reactivate XP when you run your VM on a different machine. I thought that was not necessary unless the settings for the VM's RAM or disk are changed. Does it want to be reactivated solely because the host OS and machine are different, or is it a result of you having to change settings on the VM on your notebook?
-
I recently saw a presentation from Stephen Rose (http://mcsegeek.wordpress.com/[^]) at the Inland Empire .Net User Group (http://www.iedotnetug.org[^]) in SoCal, USA on this very subject. He had a new HP laptop he had recently picked up at Fry's Electronics for $799 (after a few in-store discounts). It had 4GB of RAM and ran on the Intel Centrino 2 technology. He had all his VPC images running off an external USB hard drive that ran at 7,200rpm.
During the presentation he had the base OS (Vista 64-bit; I believe it was Vista Ultimate, but I'm not sure) along with 4 VPCs all running concurrently! The 4 VPCs were: MS Server 2003 (general install with IIS for the web server), MS Server 2003 with MS SQL Server 2005 (the DB tier), an XP dev image (with VS 2008, etc.) and another XP image for the test client (just XP using IE 7). What was truly amazing is that everything ran without a hitch. The CPU wasn't spiked other than at the initial start-up of the VPCs (he didn't start them all up at the same time, of course). Everything ran as you would expect on dedicated machines; there was no perceivable lag at all. The RAM was pegged, but that's to be expected with 5 OSes running at the same time, each grabbing its share. It wasn't as fast as dedicated quad-core servers for the web and SQL tiers, of course, but it was the fastest personal multi-server dev environment I've seen. I'm lucky if I ever get a "test" server to play with; usually it's pretty obsolete hardware and then I have to share it with others. Having all this on VPCs has an incredible number of benefits.
The keys, Stephen was saying, were to use the Centrino 2 (or AMD equivalent) with the new virtual extensions, as much RAM as you can get, and to put your VPCs on a separate drive from your main OS. He really likes external USB drives for the portability. Get an external drive that's as fast as you can find. I am definitely making this my next development setup.
-
MattPenner wrote:
The keys, Stephen was saying, were to use the Centrino2 (or AMD equivalent) with the new virtual extensions, as much ram as you can get, and put your VPC's on a separate drive from your main O/S. He really likes external USB drives for the portability.
Good info, thanks.
-
I just switched to VMware Server on a 64-bit host with 32-bit and 64-bit Windows XP guests two weeks ago. My main reason for switching was that I seem to have one or two hardware failures a year, and it always takes me a long time to rebuild my workstation just how I like it. With my primary development workstation running in a VM, I can just do a bare OS install on a system, throw VMware on, and I'm ready to go again. I also recently started working on building 64-bit versions of our software, so I need a 64-bit system to test them on. Performance under a VM on the new Athlon 64 X2 5000+ is better than on my previous workstation, a Pentium D 2.8 GHz. As long as you install a 64-bit host OS and at least 4 GB of RAM, you should be able to comfortably host several VMs.
So far doing development on a VM is going pretty well for me, but there are a couple of minor gotchas:
1. If you use multiple monitors, you'll have to change your work style a little. The VMware console window will only maximize to a single monitor and has some minor usability issues, so you'll probably want to connect to the VM using Remote Desktop with the /span switch. The taskbar will stretch across both monitors, so you'll lose a little screen real estate. I use WinSplit Revolution to "maximize" windows to one monitor or the other.
2. If you run multiple VMs that are hard-drive intensive, you'll want to put their virtual hard drives on separate physical hard drives (or use raw disks for the VMs' hard drives).
3. You should also settle on a single virtualization package for all the VMs running on one host system. If you have VMware VMs and Virtual PC VMs running on the same machine, for example, they'll both compete for the same resources and you'll see significant performance degradation. (I found this out just the other day because I had to create a Virtual PC VM for someone while I was trying to work in my VMware VM... I ended up having to take a break while I waited for the one to finish.) If you have just one virtualization package running, it will be able to allocate system resources more efficiently.
Right now I have My Documents and some other data folders stored on a network-shared physical drive on the host OS. My ultimate goal is to set up a 64-bit Solaris VM with a ZFS filesystem over a couple of big disks, and redirect all my VMs to store their home directories on that drive, but we'll see if I actually get around to that.
Rob
modified on Tuesday, December 2, 2008 3:26 PM
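For anyone hunting for it, the monitor-spanning Remote Desktop connection mentioned above boils down to one command. The VM host name here is just a placeholder; spanning needs the RDP 6.0+ client, monitors of equal height arranged side by side:

```shell
REM Stretch the Remote Desktop session across all monitors.
mstsc /span /v:my-dev-vm
```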
-
ensoftrob wrote:
I just switched to VMware Server on a 64-bit host
Which 64-bit host? I'm considering Linux as well as Windows, since VMware supports both as hosts. I'm thinking performance might be better on one than the other, with fewer resources being consumed by the host OS.
-
I've been trying both Ubuntu 8.10 and Windows XP 64-bit as the host OS. I'm thinking about keeping both, just in case one somehow hoses itself and I'm too pressed for time to reinstall/reimage the host. I think the only differentiating factor affecting performance is the quality of the driver for your graphics card on the host OS.
Originally I was using the onboard Nvidia 8200 graphics to run my dual 19" displays. Performance under Linux was atrocious: I could see the screen repaint from top to bottom just when switching between application windows in the guest OS. Later, when I tried Windows as my host OS, the maximum resolution it would allow in multi-monitor mode was 1024x768x32bpp or 1280x1024x16bpp. My theory is that they artificially disabled 1280x1024x32bpp because they knew the performance sucked. Eventually I concluded that the driver for the onboard Nvidia 8200 graphics chip sucks in both Linux and Windows, and installed the X1550 from my old workstation. I remember running dual monitors at 1280x1024x32 six years ago with no performance problems whatsoever on a Matrox G550, so you could say I'm pretty annoyed that the onboard Nvidia 8200 graphics can't properly handle dual displays. I haven't tried the Linux host again since installing the discrete graphics card because I ran out of "fun" time, but I would bet that the graphics performance is now up to par. Aside from the slow screen redraws (Linux host) and limited resolution (Windows host) under the old graphics card, the performance was not noticeably different under a Linux vs. Windows host OS.
This isn't the first time I've been burned by a crappy onboard graphics chipset, but I had hope after reading some pretty decent reviews of this chipset and the AMD 780G (I guess I was wrong). I remember years ago running one of our in-house code analysis tools and getting more than a 10x speedup when I happened to minimize a CMD console window which was displaying the logging information in real time. The integrated Intel graphics killed performance because it couldn't keep up with the console output, but when the output didn't have to be displayed by the graphics chip, the analysis sped right along.
Rob
-
John, I do development via a couple of different VMs for various reasons on my current machine, and have no problems doing it that way. My machine specs are as follows: Intel Core 2 Duo CPU running @ 2.33GHz with 4GB RAM and a 250GB SATA drive. I use Microsoft Virtual PC 2007, and have a Vista Business VM, two XP VMs with different versions of Office/VSTO, and a Win2K VM I use for backwards-compatibility testing from time to time. I do VSTO development work on the Vista and two XP VMs under VS 2005 and 2008, with no problems. I assign 2mb to the Vista VM and 1mb to the XP VMs. Works like a champ; would probably work even faster if I had more RAM to assign, but my point is, with what I have, it works... The ONLY issue I have had is that attempting to burn an ISO image with open VMs will almost certainly result in a BSOD; so don't cross the streams, it would be bad. Otherwise, have at it, works just fine.
Hardware is very similar to my setup, although I use VMWare w/ VMWare Tools. Host is usually WinXP with any unneeded bloat removed. I can't help but wonder, though, if you meant 2GB and 1GB of RAM respectively for your VM's. Only 1_MB_ or 2_MB_ of RAM seems like it would make the system just a little sluggish. I mean, maybe you could run MenuetOS (Kolibri) or DOS on that, but not much more... ;)
-
VMware allows you to run a 64-bit guest OS on a 32-bit host OS; you just need a 64-bit CPU. I test my software under XP x64 all the time in VMware on my Vista 32-bit host OS.
I've been doing everything 32-bit on my setup. I've heard of lots of trouble caused by mixing 32- and 64-bit guest and host OSes. A year or so ago I was looking into a 64-bit Windows host for 32-bit and 64-bit guest OSes, and I saw a tremendous number of support issues on the forums compared to the 32-bit stuff. Have you run into any problems? Or have they fixed most of that stuff? If so, then I'd buy an upgrade.
-
Thank you! I'd even add: http://www.custompc.co.uk/news/605271/windows-7-allows-directx-10-acceleration-on-the-cpu.html[^] Oh my! They got it all wrong! I guess that rather than chasing Yahoo as a substitute for Google they should have bought Nvidia and Electronic Arts. Hey guys, you missed the Google train 10 years ago, so move on now...
It must be a master plan to make a mouse move on Windows 7 go from 30% to 40% CPU utilisation without users noticing. They sure are blogging and hyping continuously (speeding up the dispatcher, a desktop bar that can read minds, multi-foot-and-mouth navigation, etc.), yet the entire CPU cache gets invalidated on a simple WCF connection or WPF button. Good heavens, what an advance in... err, anything.
-
No can do if you're using OpenGL or DirectX; they both require direct access to the physical video hardware right now. At least, that was the state of affairs the last time I looked into it myself, a couple of years ago.
patbob
I don't do any of that kind of development, but I checked, and apparently VMware has beta support for some limited subset of DirectX. However, to fill in the gap there's this: http://www.cs.toronto.edu/~andreslc/xen-gl/[^]
"OpenGL apps running inside a Virtual Machine (VM) can use VMGL to take advantage of graphics hardware acceleration. VMGL can be used on VMware guests, Xen HVM domains (depending on hardware virtualization extensions) and Xen paravirtual domains, using XVnc or the virtual framebuffer. Although we haven't tested it, VMGL should work for qemu, KVM, and VirtualBox. VMGL is available for X11-based guest OSes: Linux, FreeBSD and OpenSolaris. VMGL is GPU-independent: we support ATI, Nvidia and Intel GPUs."
-
XP has a "points system" to decide if the hardware has changed enough to need reactivation. Some of the big "point" scorers are:
1. CPU type
2. Network card (determined by MAC address)
3. RAM size (the different bands of size counting as a change is a bit old fashioned, as the bands top out at 1GB)
By default VMware assigns an auto-generated MAC address to the network cards. I found that the change of CPU between my main PC and the notebook, plus the change of MAC address, meant that it wanted to reactivate each time I moved the VM between the two. Some of the tricks I learnt were:
1. Give all VMs 1028MB RAM. RAM size is set in 4MB increments, so I went one increment above what XP treats as top whack; that way I can upgrade a VM's RAM to 2GB if needed without XP noticing the change. A great thing about VMs is that if you have a RAM-heavy app you want to use for just a bit, you can add the RAM and later take it out again by editing the text .vmx file.
2. Use manually assigned MAC addresses in the VMware .vmx file.
3. Define the XP VM as a "Portable Computer" in the Hardware Profiles, as XP applies a more generous points system to notebooks to allow for docking stations etc.
This seems to have been enough in my case for XP to stop its nasty habits. Like this the VM does not need to be changed at all between machines.
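As a rough sketch of tricks 1 and 2 above, the relevant .vmx entries look something like the following. The MAC value here is a made-up example; VMware reserves a range for manually assigned addresses (to my knowledge 00:50:56:00:00:00 through 00:50:56:3F:FF:FF), so check your VMware documentation before picking one:

```
memsize = "1028"
ethernet0.addressType = "static"
ethernet0.address = "00:50:56:00:00:01"
```

Since the .vmx is plain text, you can edit these values with the VM powered off and the change takes effect on the next boot.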
-
MattPenner wrote:
The keys, Stephen was saying, were to use the Centrino2 (or AMD equivalent) with the new virtual extensions, as much ram as you can get, and put your VPC's on a separate drive from your main O/S. He really likes external USB drives for the portability.
Good info thanks.
"It's so simple to be wise. Just think of something stupid to say and then don't say it." -Sam Levenson
"He really likes external USB drives for the portability" So did I - but found USB a bit slow - so my portable disk docks in main machine as SATA. When undocked on notebook it can connect by USB or eSATA so I added a eSATA Express Card to the notebook for faster access when mobile - on trains etc that is a bit cumbersome (needs a thicker eSATA cable, eSATA card sticks out a bit and disk power has to come from USB anyway) so I just use USB.
-
Now that VMware has released ESXi, if I had your setup I'd buy a motherboard that works with your processor (and buy another CPU if you're not running dual yet), one that supports more than 4 GB RAM, and put at least 16 GB in it. You can do that today for under $1000 if you buy a super high end server motherboard. Then load it up with the high speed SATA disks you already have, install VMware's ESXi on it (bare metal) and load your VMs off of that. Major benefits of ESXi over VMware Workstation or Virtual PC: first, you don't have the overhead of a full host OS. Second, ESXi merges identical pages in memory, so if you run 6 VMs all running XP you only pay the memory overhead for the XP OS once. Third, and probably most important, you get memory overcommit, which allows you to assign each of your VMs more RAM than you physically have in the system (to a point), since ESXi is smart enough to only allocate what is actually being used by the running VM. You'll also see much better usage of your quad core, and I believe ESXi supports CPU affinity for individual VMs as well. You're basically getting the same capabilities we use today in the data center at home (for free!).
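To make the page-sharing benefit concrete, here's a back-of-envelope sketch in Python. The figures (400MB of shareable OS pages, 300MB private per guest) are made-up assumptions for illustration; real savings depend on how much guest memory is actually identical across VMs:

```python
def ram_usage_mb(n_vms, shared_os_mb, unique_mb_per_vm):
    """Compare RAM footprints with and without transparent page sharing.

    shared_os_mb: pages identical across guests (e.g. the XP OS image).
    unique_mb_per_vm: per-guest private data.
    Returns (footprint without sharing, footprint with sharing), in MB.
    """
    without_sharing = n_vms * (shared_os_mb + unique_mb_per_vm)
    # With sharing, identical pages are stored once and mapped into every guest.
    with_sharing = shared_os_mb + n_vms * unique_mb_per_vm
    return without_sharing, with_sharing

# Six XP guests under the assumed figures:
no_share, share = ram_usage_mb(6, 400, 300)
print(no_share, share)  # 4200 2200
```

Under these assumptions the six guests need 4.2GB of RAM without sharing but only about 2.2GB with it, which is why overcommit becomes practical on a 4GB box.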