Dev systems shouldn't be backed up!
-
A dev system is much more than just the source files that are copied to it. A source control system is not the same as a backup system.
And you'd trust your company to keep a backup of it? Image it, archive it, upload it to the backed-up file server, and you're done.
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
-
... that is, according to one of our senior IT infrastructure guys! Our "ApplicationHost.config" file got wiped a few days back - and, before we found out about inetpub\history, we asked the infrastructure team to restore it from back-up. "It's a dev system. We don't back up Dev systems," was the reply. When we challenged this, their team lead responded: "Dev systems shouldn't be backed up." :wtf: Wha? :confused: First of all, it always used to be backed up - so when did that change? Without us knowing? Secondly, the Dev system is the most volatile and the most likely to get b*ggered by a developer. Surely that alone justifies back-up? I am pretty much gob-smacked by this. :wtf: Is this just me? :~
One of the problems here is that everything has to be backed up and that leads to long delays in getting databases and such spun up. I used to be able to have local databases as sandboxes, but they won't allow that anymore.
-
IT is also being squeezed for manpower, so their typical response is to only deal with systems set up to their standards and maintained by themselves. Can't really blame them if their manpower to deal with exceptions has been taken away by a bean-counter who doesn't understand the consequences. I would expect the bean-counter got a bonus for saving this manpower in IT. Sure, you are going to waste a lot more manpower - but that waste will look like you not being productive, so it's clearly not the bean-counter's fault. So, in short, he made the right choice as seen from the top. If you work for a software development company it is typically a bit easier to get dev systems included (you can argue they are essential to the core business - i.e. server down, production halted) - if software development is just a sideline in the business, then it is going to be hard and you should probably do your own backup. IT might be able to provide a file share you can dump the files on, and then they will back up those. Alternatively, create scripts setting up the servers - and keep those in source control - which is hopefully backed up...
-
YMMV, but in a lot of organizations it's IT's job to back everything up. Some companies go out of their way to ensure their developers don't waste their time doing menial IT tasks. Personally - at home - I don't do any backup from within an OS. I have everything running in VMs, and simply copy the disk image files onto other drives, either across my LAN or onto external USB drives. It has served me well for over a decade, and even restoring to another host was rather trivial. Not recommended for all scenarios, however (e.g., I'm not including VM metadata). [Edit] Another random thought: depending on what you do with them, test systems could definitely be disqualified from backing up. I very often find myself putting together test VMs that I would consider to be throwaway systems. But I'd call those QA machines, not dev machines. A dev machine is very often a lot more complex to set up juuuuust right.
-
I've been squeezed for disk space by control freaks and supervising sadists. So, it's not just you. The subtext was: hardware is more valuable than a system programmer's weekends off.
It was only in wine that he laid down no limit for himself, but he did not allow himself to be confused by it. ― Confucian Analects: Rules of Confucius about his food
-
Yes, I can blame them. They should be automating these backups so they don't take manpower to do.
Automating also takes manpower. Taking away manpower saves money. Bean-counters get a bonus for saving money. Bean-counters do not get the blame for missing backups. It is a VERY easy decision for the bean-counters to make. The problem is too much thinking in boxes and managing by them. This is typically NOT done by the people on the floor in any department; it is pushed down from above by setting stupid goals and budgets. So no, I do not blame IT in a case like this. I blame upper management for turning everything into a spreadsheet where anything they don't understand is simply deleted.
-
Actually, that makes sense. The two categories of important data on a dev system are a) the developed product and b) the development environment. a) is already backed up in source control. b) has to be easy to set up in the first place (i.e. for onboarding), so you gotta have a single installer or a script which sets everything up. That script, of course, has to be backed up, but the actual dev machine can be set up from scratch on any empty system.
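A minimal sketch of what such a setup script might look like (the tool names below are placeholders, not a real stack; a real script would run the installs instead of just reporting):

```shell
#!/bin/sh
# Hypothetical one-shot dev-box setup script -- this file itself lives in source control.
set -e

need() {
  # Report whether a tool is already present; a real script would install it here.
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: already present"
  else
    echo "$1: missing, would install here"
  fi
}

# Example tool list -- substitute whatever your environment actually requires.
need git
need make
need tar
```

The point is that the script itself becomes the backup: check it in, and any empty machine plus one run of the script gets you back to a working dev box.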
-
Exactly. All my dev is done in VMs, which I can snapshot before a major change/update in case it borks. These VMs are replicated automatically between two machines (a desktop and a dev laptop) and are also, though only occasionally, physically backed up to another server just in case! This is all in addition to source control. On more than one occasion I have had to either copy back one of the clones, or at least fire one up, to recover something completely destroyed by an OS update etc. (This applies to Linux and MS stuff!) It is hard to envisage any setup, no matter how small or large, where backing up dev machines, at least occasionally (when known to be in good working order, say after initial setup and config), is not a good idea.
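For anyone wanting to try the same cold-copy scheme, here's a minimal sketch. The paths and image name are made up for the demo, and the VM must be shut down first; copying a live image risks a corrupt backup:

```shell
#!/bin/sh
# Sketch: keep dated cold copies of VM disk images on a second drive.
# SRC/DST defaults are illustrative; point them at your real VM directory.
set -e
SRC="${VM_DIR:-/tmp/vms}"
DST="${BACKUP_DIR:-/tmp/vm-backup}"

mkdir -p "$SRC" "$DST"
: > "$SRC/devbox.qcow2"            # stand-in image file for the demo

stamp=$(date +%F)                  # date each copy so a bad update can be rolled back
cp "$SRC/devbox.qcow2" "$DST/devbox-$stamp.qcow2"
echo "backed up: devbox-$stamp.qcow2"
```

Run it from cron or a scheduled task after shutting the VM down, and prune old copies once you have a few known-good ones.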
-
5teveH wrote:
Our "ApplicationHost.config" file got wiped a few days back
Why wasn't it under source control?
5teveH wrote:
Dev systems shouldn't be backed up.
Arguable - Git is there for a reason. You don't back up dev systems because they are not stable: any snapshot in time won't be accurate or valid for long, or it may not be valid at all (i.e. a broken build, or a broken system due to ongoing changes). Git is there with branches and commit messages; when you get to a milestone, or at least a stable version, you just push it to a master / release / whatyoucallit branch with a meaningful commit message and voila, here comes the backup and the diff history in a single package.
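That flow, sketched as commands (the repo location, tag name, and config contents are invented for the example; it only helps if files like ApplicationHost.config are actually checked in):

```shell
#!/bin/sh
# Sketch: commit a known-good state and mark the milestone with a tag.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name "Dev"

echo '<configuration/>' > ApplicationHost.config   # server config lives in the repo too
git add ApplicationHost.config
git commit -qm "Known-good IIS config after initial setup"

# Milestone reached: tag it so the stable point is easy to find again.
git tag -a v1.0-stable -m "Stable dev environment"
git tag --list
```

A `git push --tags` to the remote then gives you the off-machine copy; the tag is what makes the "known-good" snapshot findable months later.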
GCS d--(d-) s-/++ a C++++ U+++ P- L+@ E-- W++ N+ o+ K- w+++ O? M-- V? PS+ PE- Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
-
den2k88 wrote:
Arguable - GIT is there for a reason. You don't back up dev systems because they are not stable, any snapshot in time won't be accurate or valid for long, or it may not be valid at all (i.e. a broken build, broken system due to ongoing changes).
This argument leaves out the time it takes to rebuild a dev PC, which isn't trivial. My employer has several standard disk images, but none contain all the tools a particular developer needs. It also doesn't cover the loss of work committed to the local Git clone but not yet pushed to the main instance.
-
c) The various servers needed for the CI/CD pipeline (build agents, deployment targets, ...). Either you need a backup of these, or (my preference) you also script their setup and keep that in source control. That said, scripting our LDAP test server turned out to be rather frustrating, so that is just a step-by-step setup guide in our Wiki that could be done in half an hour to an hour.
-
The asshole who said that should be terminated with extreme prejudice. Urinating and defecating on his grave is left as an exercise for the student.
Software Zen:
delete this;
-
Member 9167057 wrote:
a) is already backed up in source control.
Yes, but we have over 50 applications that would need to be redeployed. Doable, but a bit of a pain.
Member 9167057 wrote:
b) has to be easy to set up in the first place
Sadly not! Not only would this require effort from several teams outside of Development, we would have to put in Change Requests and go through Change Management to get it done. I doubt we would get an empty system ready for redeployment in less than 3 days. And, as the goalposts have been changed without our knowledge, (i.e. we always used to have back-ups, and now we don't), we haven't exactly prepared for this.
-
This is tough. Honestly, all the source code / configuration on a dev machine should be readily available in the source control system. We check in our component libraries, everything - even a dump of the registry settings for our primary IDE, just so we can install a base Windows box, install the IDE, mount the source directories, and load the registry settings. We are done. Anything on a machine CAN & WILL be lost, destroyed, fried, etc., therefore it must be backed up, or it must be in source control, with a TESTED/COMPLETE rebuild process. BTW, testing your backup solution or your rebuild process is ALSO YOUR responsibility, since you are the one who needs it to work. I always ask my clients 2 questions: 1) Do you have backups? 2) Have you ever tested them? If the answer to #2 is NO, then the answer to #1 is NO! And I've seen people using MIRRORED hot-swappable HDs, where they would pull one drive every Friday and replace it with an initialized blank one. It rebuilt over the weekend. GREAT... except they never tested it. Turns out the mirroring was specific to that controller. The HD they pulled was useless in the 3 computers they tried to actually read it from. One was close, but it had the wrong firmware. That was ONE system in a SMALL company. It gets worse with scale, not better. But at least you know!
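The "untested backups are no backups" point can be made concrete with a tiny restore drill. The paths here are illustrative, and the tar archive stands in for whatever your real backup target is:

```shell
#!/bin/sh
# Sketch: back up, restore into a SCRATCH directory, and verify byte-for-byte.
set -e
work=$(mktemp -d)
echo "important config" > "$work/app.config"

# 1. Back up.
tar -czf "$work/backup.tgz" -C "$work" app.config

# 2. Restore somewhere else -- never over the original.
mkdir "$work/restore"
tar -xzf "$work/backup.tgz" -C "$work/restore"

# 3. The drill: compare the restored copy against the original.
cmp "$work/app.config" "$work/restore/app.config" && echo "restore drill: OK"
```

Scheduling a drill like this (against the real backup, into a scratch location) is what turns "we have backups" from a hope into a fact.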
-
A how-to guide is the next best thing and if nothing else, may help someone new to get an overview of the delivery pipeline. Thanks for reminding me, by the way, I'll have to make sure to back up my CD pipeline as well*! *When I finally get to set it up anyway. A huge "advantage" of maintaining a legacy pile of outsourced-a-dozen-times bog is finally, after all those years, having the management backup to set it up anew, clean & with all the bells & whistles, including CD.
-
a) Are you talking about applications you produce or applications you use? If you produce them, they should be in source control and redeployment is as easy as git clone. If they're applications you use, that'll be item b). b) You need outside departments to set up your dev environment? Holy fucking hell, that's awful! At our place, devs can get (local) admin rights, which are enough to set up stuff locally, including local test data (which sits in a Git repo anyway).
-
I agree 100%, having seen the damage done to IT by Bean Counters. I once worked in an organization where the Bean Counters refused to replace a faulty air-conditioner in our server room. Their argument was it would cost $100,000 to do, and no amount of logic or reason could convince them to release the money. Here is the kicker, after the inevitable outage whereby some of our servers literally cooked themselves, which cost the company $250,000 in lost hardware and productivity (and that was just an afternoon), the Bean Counters still refused to release money to fix the air-conditioning because we'd just lost $250,000 and they didn't want to add another $100,000 on top. The real ironic thing was this was in an IT company. Perhaps the weirdest thing was I was at a party about 6 months later, and I was talking to a Bean Counter (not in our company) about what happened, and he was justifying that the way our Bean Counters had approached things was correct, and yet his justifications had no concept of the real world.
-
Yikes! I have only dealt with the following: 1) IT owns the server. It is a pain getting changes made (change requests, the whole nine yards), but as they own it they also make sure security updates are installed and the system is backed up. 2) DevOps owns the server. IT does not care how or why it is changed, because they are not responsible for running it (they might own the infrastructure it runs on and keep that running, there might be some check for antivirus etc., it might be on a separate vnet where IT does not get blamed for problems etc.) - but in general IT is hands-off. Both kind of work, with drawbacks, but I prefer 2. It seems you got the worst parts of both of these - let's at least hope that somewhere out there some lucky schmuck ended up with the good parts of both - though it is probably some inexperienced guy who does not even know how lucky he is. Oh well, reality will catch up sooner or later.