Advice on code versions
-
Our team is thinking of cobbling together some rules, and I'd like CP advice on a particular problem. First, a bit of background info, since what I'm working on is quite arcane:

* We are a team distributed across the planet, with members in the US and India.
* We develop Interactive Voice Response applications, and there is one and only one development box (separate boxes are used for production).
* Each member, of course, has his own PC.
* There is a version control system (PVCS).
* Neither the development box nor the VCS has a single point of access/control, which basically means anyone in the team can access or change it.
* Code has to be placed in ONE AND ONLY ONE location on the development box to make calls. (Of course, developers have their own working directories; it's just that the binaries have to be moved to the single location for placing calls.)
* Developers, of course, test the code before checking it in, but it might take anywhere between two weeks and a month (sometimes more) before a change reaches the QC team and passes their testing.
* Since the project is fairly big and quite a few changes have to be made to different parts of the code, one developer might not know what changes the others are making.

Right now, if a developer wants to make a test call, the procedure he follows is:

* Log in to PVCS and get the required files
* Transfer them to his PC
* FTP the files to his personal directory on the development box
* Compile to get clean binaries
* Finally, move these binaries to the single location on the box and use them for placing the call

This typically takes 15 minutes, given our bandwidth. X|

Now, the new rule says anybody making changes to the code will have to update the binaries on the development box as well. The question is: when should that be done? It can be done by the developer A) as soon as he checks the code in, or B) after it passes QC. The problem with A is that the code has only been tested by the developer and has not gone through QC; in other words, it might have unobserved bugs. The problem with B is that it simply takes too long for changes to be tested by QC. Which means that even though a month has passed, I might be making calls on old code that is two or three point versions behind and differs significantly from the latest version in PVCS. Without changing anything else...
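To make the overhead concrete, here is a little sketch of that test-call procedure. The per-step minutes below are invented for illustration; only the 15-minute total and the step names come from what I described above.

```python
# Illustrative only: the per-step timings (in minutes) are made up;
# the 15-minute total is the real figure from our experience.
PREP_STEPS = [
    ("log in to PVCS and get the required files", 4),
    ("transfer them to the local PC", 2),
    ("FTP them to a personal dir on the dev box", 5),
    ("compile clean binaries", 3),
    ("move binaries to the single call location", 1),
]

def prep_overhead(steps):
    """Total minutes spent preparing before a one-minute test call."""
    return sum(minutes for _, minutes in steps)

if __name__ == "__main__":
    total = prep_overhead(PREP_STEPS)
    print(f"{total} minutes of prep for a 1-minute call")  # 15 minutes
```

Fifteen minutes of preparation for a one-minute call is the overhead the new rule has to live with.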
-
With my limited knowledge and experience, I would suggest going for solution ... The main trade-off I see is this: if you update the binaries the moment you check in the code, you and the other developers can see immediately whether there are bugs that affect others; the downside is that you are likely to end up developing against unstable code/binaries. Waiting for QC gives you all a stable platform but slows down the development process as a whole, although it ensures the project always moves in the "right" direction. Vikram A Punathambekar wrote: * Since the project is fairly big and quite a few changes have to be made on different parts of the code, one developer might not know what changes the others are making. Knowing that, I think it is a hard decision. I would decide to wait for QC. [Edit] I hope my answer isn't complete rubbish and that I could help you a little. Regards, Mathias [Edit]
-
ClearCase! We're moving on to it after PVCS in a similar situation. The tigress is here :-D
-
CLEARCASE?!?!? Stay away!!!! Save yourself... it's too late for the rest of us... I know a few large development groups in my city that use it, and most of them hate it. We had one ClearCase server go down, and the techs and engineers from Rational could not get it back up for two weeks. The company had a service contract and definitely got their money's worth that year, but all of the developers who had dynamic views instead of snapshots sat surfing the web for those two weeks. With the high licensing cost ClearCase carries, you would not expect issues like this, but we also had a lot of problems with more mundane things, like people checking in code that other people then could not see properly. Sometimes it was just that someone forgot to merge the parent folder too; other times it took a ClearCase admin manually moving files around under her account to get them to show up, and even then she did not know why it was not working properly. That's just the experience I have had with that expensive, crappy... I mean "wonderful" software. Others might not have had the same issues, but at 1500-1720 per user I would expect more. The version we had where I worked did not even integrate with VS2003 without going into the registry and manually changing a setting to fully path a DLL. That might be fixed now, but I am happily away from it. :-D Steve Maier, MCSD MCAD
-
We have a few people who have used ClearCase and have good reports about it; I'm not sure where your problems started. Elaine :rose: The tigress is here :-D
-
I am glad to hear that. I think it was the company I was at and how they managed things. But when even the Rational techs and engineers could not get a server back online and stable for two weeks, it makes me wonder why it has to be that complex. How many people were using the ClearCase systems that got the good reports? We had close to 100 using our server. Steve Maier, MCSD MCAD
-
I am not sure how much this will help you decide on your situation, but it may be another way of looking at a similar problem. We have a similar situation here: people in the USA, Canada, the UK, New Zealand and many other places. The way ours is set up is slightly different:

- Each developer has his/her own PC
- There is a nightly build machine

Each developer gets the code from PVCS and hacks on it. We expect only cleanly compiling code to go into PVCS. Note that I did not say bug-free, only clean compilation and linking. We have a script that runs on the nightly machine that kills the current directory structure, fetches the code from PVCS, compiles, links, moves the binaries to the right location, and finally creates an InstallShield-ready binary for release. The main objective of the nightly build is to check, on a regular basis, that our code has not broken. There are a few instances where we put the binary itself in PVCS; in those cases it is either legacy code or code that does not change frequently, which saves compilation time. Once QA is ready to test our stuff, we just version the code in PVCS and run it through the nightly build process. This has been working generally OK. The main problem we have is on the IT side: being in different parts of the world, not everyone can see and access the nightly build machine, so we have a dedicated person in charge of the machine, and at times he may need to press the hot button to start the build manually in the middle of the day.
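For what it's worth, the nightly driver is roughly this shape. Everything below is a placeholder, not our actual script: the paths, the PVCS invocation (exact `pcli` syntax varies), and the make target would all differ in a real setup.

```python
import subprocess
from pathlib import Path

# Hypothetical locations -- substitute your own build tree and the
# single location the binaries must end up in.
BUILD_ROOT = Path("/build/nightly/src")
CALL_DIR = Path("/ivr/bin")

def nightly_build(run=subprocess.run):
    """Kill the tree, fetch fresh source, compile/link, stage binaries.

    `run` is injectable so the sequence can be dry-run or tested without
    a PVCS server; by default it shells the commands out.
    """
    steps = [
        ["rm", "-rf", str(BUILD_ROOT)],                  # kill current structure
        ["pcli", "get", str(BUILD_ROOT)],                # fetch from PVCS (syntax illustrative)
        ["make", "-C", str(BUILD_ROOT), "all"],          # compile and link
        ["cp", "-r", str(BUILD_ROOT / "bin") + "/.",     # stage binaries into
         str(CALL_DIR)],                                 # the call location
    ]
    for cmd in steps:
        run(cmd, check=True)
    return steps

if __name__ == "__main__":
    # Dry run: print the command sequence instead of executing it.
    nightly_build(run=lambda cmd, check=True: print(" ".join(cmd)))
```

The point is that one scripted, unattended sequence replaces the 15-minute manual shuffle each developer would otherwise do.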
-
I use ClearCase at work too, after previously using PVCS. We use snapshot views and have good configuration management guidelines, which result in very few problems when it comes to keeping things current and working. Of course, if it is not used properly there can be problems (e.g. going a long time without rebaselining a developer branch).
-
My company (a large multinational with developers at several US locations, in Canada, Australia, India and several European locations) uses ClearCase. It works quite well if you are on a high-bandwidth network and can employ dynamic views, but it really sucks if you are remote (even over a 3 Mbit cable modem). The web client is slow (it uses HTTPS to populate static views) and has limited functionality: no version tree view, and few options. You have to modify the config spec to change branching rules on checkout, so it's difficult to do maintenance on one branch and development on others. The web client also seems to be less reliable than the COM client; we have to have the admin reboot the server relatively frequently (even with only 4 or 5 remote users). Administration takes dedicated staff with significant expertise. Maintenance contracts are a must, and expensive (as is the licensing). We replicate all pertinent VOBs (source control databases) with India (they have their own servers, with a replica of the US one updated daily). This gives some redundancy, but was done to get reasonable performance. We do regularly scheduled builds of the targets (managed builds by our configuration management folks), and they update the binaries available to all developers from each build (done from labeled versions). No developer updates binaries for the test machines (other than their own deliverables in their own test environment). This sounds like the missing piece in the poster's process: developers should be working against managed, frequent builds that are based on a traceable (labeled) set of source versions, not against whatever was last built independently by n developers. "Absolute faith corrupts as absolutely as absolute power." - Eric Hoffer. "The opposite of the religious fanatic is not the fanatical atheist but the gentle cynic who cares not whether there is a god or not." - Eric Hoffer
-
Elaine, I don't quite understand. What similar situation forced you to move from PVCS? What specific problem pertinent to my situation does ClearCase overcome? Could you please explain? Cheers, Vikram.
http://www.geocities.com/vpunathambekar "It's like hitting water with your fist. There's all sorts of motion and noise at impact, and no impression left whatsoever shortly thereafter." - gantww.
-
Thanks for your reply, Mathias. I'll definitely consider it (though I'm the tyro in the project :-O) Mathias Buerklin wrote: I hope that my answer isn't complete rubbish Not at all. It was quite pertinent. :) Cheers, Vikram.
-
Thanks for the reply, Yusuf. From what I understand, everybody takes code from PVCS every time* they want to work. This is exactly what we're trying to eliminate, due to the ridiculously high overhead. You see, placing a call in itself takes about a minute; preparing the code for it typically takes 15 minutes. :( * I expect 'every time' translates into 'daily' for you. Unfortunately, it's not that regular here. We might go through a whole week without any work, which opens a large can of worms. Cheers, Vikram.
-
Vikram A Punathambekar wrote: * I expect 'every time' translates into 'daily' for you. Unfortunately, it's not that regular here. We might go through a whole week without any work, which opens a large can of worms. Well, that is not quite right. Most people here don't refresh their code regularly either, and it works great. The secret ingredient is that we design our interfaces up front, and we police ourselves on interface changes, minimizing the impact on their users. Here is how it works. Say I have an interface in library MyLib.dll called foo(int,int), and I want to change it to foo(float,int,int). Then it is my responsibility to check what impact this has on existing calls, figure out the best way of dealing with that impact, and either do the actual work or coordinate it with the others. The one thing we don't accept is haphazard change overnight. This is not fool-proof; we have run into problems where people changed interfaces and BAAM :mad: the nightly build catches it, and the next day we go into repair mode. The best part of our nightly build is that if it chokes, it examines the log, extracts the error messages, and emails the entire group. A link to the actual log file is also included in the email for reference. That way "the little dirty secret of last night" is exposed to the whole team. This helps with the self-policing. ;)
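The "examine the log, extract the errors, email the group" step could be sketched like this. The error patterns and the sample log below are made up for illustration, and the actual mailing is omitted:

```python
import re

# Illustrative patterns only -- real compilers and linkers will need
# patterns matched to their own message formats.
ERROR_PATTERNS = (r"\berror\b", r"undefined reference")

def extract_errors(log_text, patterns=ERROR_PATTERNS):
    """Return the lines of a build log that look like compile/link errors."""
    rx = re.compile("|".join(patterns), re.IGNORECASE)
    return [line for line in log_text.splitlines() if rx.search(line)]

if __name__ == "__main__":
    # A made-up log fragment standing in for a real nightly build log.
    log = (
        "cc -c foo.c\n"
        "foo.c:12: error: too few arguments to function 'foo'\n"
        "ld: undefined reference to 'foo'\n"
        "build finished\n"
    )
    for line in extract_errors(log):
        print(line)  # these lines would go into the email to the team
```

Pairing a filter like this with a link to the full log is what makes the overnight failure visible to everyone the next morning.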