Regimented or wiki wild west? (source control)
-
It seems like this all goes hand in hand with unit testing and continuous integration: ensuring what you check in builds and passes the tests before committing, and then having CI running on your repository with reports going out if a check-in breaks the revision's build. What SCM do you use, Anna-Jayne? Vault, Hatteras (not heard of that one), Subversion (that is what I am using), CVS? regards, Paul Watson South Africa Colib and WebTwoZero. K(arl) wrote: oh, and BTW, CHRISTIAN ISN'T A PARADOX, HE IS A TASMANIAN!
Paul Watson wrote: It seems like this all goes hand in hand with unit testing and continuous integration: ensuring what you check in builds and passes the tests before committing, and then having CI running on your repository with reports going out if a check-in breaks the revision's build. Got it in one. :) Paul Watson wrote: What SCM do you use, Anna-Jayne? Vault, Hatteras (not heard of that one), Subversion (that is what I am using), CVS? Mostly VSS 6.0d, although once we get some sales under our belt we're quite likely to move to Vault. Hatteras is the SCC provider in VSTS, and as such it's waaaaayyyy too expensive for us to even consider. Anna :rose: Riverblade Ltd - Software Consultancy Services Anna's Place | Tears and Laughter "Be yourself - not what others think you should be" - Marcia Graesch "Anna's just a sexy-looking lesbian tart" - A friend, trying to wind me up. It didn't work.
-
That was my initial impression too, but these articles[^] are giving me pause to think that edit-merge-commit is not the scary thing it seems to be. For instance, they mention that the auto-merge tools are conservative: at the slightest conflict they are unsure about, they drop you into a manual diff tool. So you won't find them bulldozing over other people's checked-in code. Also, the merge does not commit, not straight away. So once you have merged you can run your build and unit tests locally and then, if everything passes, commit. I certainly see the advantage when there is a big file with two coders working on separate parts of it. Why lock the file to one person when the code changes won't affect each other? Though yes, properly structured apps (classes in separate files, well-refactored methods etc.) with well-controlled project tasks should keep programmers from bumping heads on the same file. But it can happen. regards, Paul Watson South Africa Colib and WebTwoZero. K(arl) wrote: oh, and BTW, CHRISTIAN ISN'T A PARADOX, HE IS A TASMANIAN!
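To make the "conservative auto-merge" behaviour concrete, here is a toy sketch (in Python, and not the algorithm any particular SCM actually uses): two sets of edits are only considered safe to combine automatically when they touch disjoint regions of the common ancestor; anything else gets bounced back to a human and a manual diff tool.

    # Toy illustration only - not a real SCM's merge algorithm.
    from difflib import SequenceMatcher

    def changed_regions(base, edited):
        # Base-file line ranges that `edited` added to, deleted from or replaced.
        ops = SequenceMatcher(None, base, edited).get_opcodes()
        return [(i1, i2) for tag, i1, i2, j1, j2 in ops if tag != "equal"]

    def can_auto_merge(base, mine, theirs):
        # Safe only if my edits and their edits touch disjoint parts of the base.
        for a1, a2 in changed_regions(base, mine):
            for b1, b2 in changed_regions(base, theirs):
                if a1 < b2 and b1 < a2:   # ranges overlap: unsure, so punt to a human
                    return False
        return True

    base      = ["line 1\n", "line 2\n", "line 3\n", "line 4\n"]
    mine      = ["line 1\n", "line 2 - my edit\n", "line 3\n", "line 4\n"]
    theirs_ok = ["line 1\n", "line 2\n", "line 3\n", "line 4 - their edit\n"]
    theirs_no = ["line 1\n", "line 2 - their edit\n", "line 3\n", "line 4\n"]

    print(can_auto_merge(base, mine, theirs_ok))   # True  - disjoint edits, auto-merge
    print(can_auto_merge(base, mine, theirs_no))   # False - both touched line 2, go manual

Real tools are far cleverer than this, but the principle is the same: the moment the tool is unsure, it stops and hands the decision back to the developer.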
Paul Watson wrote: Also, the merge does not commit, not straight away. So once you have merged you can run your build and unit tests locally and then, if everything passes, commit. This kind of thing I have less of a problem with. If your SCM is just making it easier and more convenient for you to do the diff, without taking the final control away from you, that's great. But a lot of SCM tools do (or let you easily do) the diff, merge *and* check-in all at once. This I really don't like, because developers *will* abuse it. Good developers will use the available tools to do their job better. Bad developers will cut corners where they can, so I try to put rules in place to minimize this where I can.
The two most common elements in the universe are Hydrogen and stupidity. - Harlan Ellison Awasu 2.1.2 [^]: A free RSS reader with support for Code Project.
-
What style of source control management does your team follow: the strict checkout-edit-checkin school or the wilder edit-merge-commit style? I have always been of the former religion, but have been looking into the latter of late. It sounds damned scary; relying on automatic merge algos or diff tools seems insane. But the tools have progressed, and a few tests show that it can actually work. Not to mention some of the major projects that already use it successfully. Those that use edit-merge-commit say that once they went to the dark side they could never regress to the quaint ways of checkout-edit-checkin. Thoughts? regards, Paul Watson South Africa Colib and WebTwoZero. K(arl) wrote: oh, and BTW, CHRISTIAN ISN'T A PARADOX, HE IS A TASMANIAN!
I've done both, and personally I don't really see the difference. It's sixes to me.
-
Paul Watson wrote: That if you can't trust your devs to get SCM right then you should be worrying about other things first, like their coding capabilities. I agree completely. However, as a contractor, I get the pleasure of working in a lot of different shops and it's been my experience that this is indeed a real issue. I once worked at a place that had maybe 20 developers writing financial software that handled millions of dollars each year. All the source files were stored on a server, and their idea of source control was that you temporarily put your name at the top of a file when you started editing it, to alert anyone else opening the file :omg: You usually don't have the option of firing people (even though they may sometimes deserve it), so all you can do is try to institute procedures that help raise the level of quality as best you can.

Paul Watson wrote: And forcing them? Maybe. But why do some shops require you to set the warning level to 5, or set compiler options such as "treat warnings as errors"? Because it *forces* the developer to fix the problem instead of just brushing it under the carpet. With automatic merge and commit, it's just too easy for a developer to quickly skim through the diff without *really* checking what's going on. We've all been guilty of rushing things, especially when under pressure. What is a lesser-skilled developer going to do? Any time you're relying on people to just do the right thing by themselves, you're asking for trouble :-)

The day automated merge tools are smart enough to figure out whether my changes clash with the other guy's changes at a level higher than simply comparing ASCII bytes, we'll have tools that are smart enough to write the code themselves. Example: I modify method1() in a class and someone else modifies method2(). They're in completely different parts of the file, so the merge tool is not going to flag a problem, but there is a dependency between the two methods such that each of our changes works in isolation but they break when they're both there. No automated tool is ever going to find this. Forcing the two developers to talk to each other, and forcing them to do the diffs, raises the chance of finding this problem.

Bottom line: just because it works most of the time doesn't mean it's the right thing to do. People complain exclusive checkouts are inconvenient, slow them down and are a PITA. Yeah, well so is source control in general, doing diffs, fixing compiler warnings, etc. S
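Taka's method1()/method2() example is worth spelling out, because it is exactly the kind of break no textual merge can catch. A contrived sketch (the class and method names are made up, and any language would do):

    # Each edit is fine on its own and the textual merge is clean, because
    # the two methods are far apart in the file - but the combination breaks.
    class Invoice:
        def totals(self):
            # Developer A's change: totals() used to return a single number,
            # now it returns (net, tax) so callers can show both.
            return (100.0, 15.0)

        def formatted_total(self):
            # Developer B's change, elsewhere in the file, still assumes
            # totals() returns one number. It worked against the old totals(),
            # but raises a TypeError once both edits are checked in together.
            return "Total: %.2f" % self.totals()

    print(Invoice().formatted_total())   # fine before the merge, blows up after

No diff tool flags this; only a build, a test run or a conversation between the two developers will.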
Fair enough, and you are quite right, real-world conditions often include coders you have no control over. Though in part this is what integration/regression testing and continuous integration help to solve, namely unaware changes breaking other bits of code. Best to catch it before it hits the repository though, as you say. regards, Paul Watson South Africa Colib and WebTwoZero. K(arl) wrote: oh, and BTW, CHRISTIAN ISN'T A PARADOX, HE IS A TASMANIAN!
-
Anna-Jayne Metcalfe wrote: One thing I have learnt is never to check files in blindly. Always diff the local file first, and do a "Get Latest" to merge any changes into your local source before proceeding. If any changes are made by this process, build them locally first before checking in, and you should have no problems. I think this really highlights the concerns that I raised in my post (just before yours): you *rely* on developers to do the right thing, I *force* them :-)
The two most common elements in the universe are Hydrogen and stupidity. - Harlan Ellison Awasu 2.1.2 [^]: A free RSS reader with support for Code Project.
Taka Muraoka wrote: I think this really highlights the concerns that I raised in my post (just before yours): you *rely* on developers to do the right thing, I *force* them Not exactly... I'm just suggesting that you should be aware of what's going on in the database before you check anything in. That applies equally to exclusive check-out scenarios, too (remember, the changes you are about to check in could have been compromised by changes to other files in the database, or vice versa). Whatever environment you work in, it pays to look before you commit yourself to changing the database.

In a multiple check-out environment the SCC tools will do their best to ensure you don't screw it up. For example, when you attempt a check-in under VSS, it will always check to see if any changes have been made since you checked out the file. If that is the case, it will scan for merge conflicts and will not check in your changes until you have resolved any that are found. In my experience merge conflicts are rare, and usually limited to .rc and resource.h files. Although merging those is a pain, it's eminently doable (WinDiff is great for this kind of stuff), and you can still check out selected files exclusively if you really need to. I've rarely found the need to do that myself, but it's there if I need it.

Most of the time all you need to do is a "Get" before you merge, to synchronise your local source tree and test against the current baseline before checking in. I assume you do that anyway, so there's not exactly a lot of extra work or risk involved... Anna :rose: Riverblade Ltd - Software Consultancy Services Anna's Place | Tears and Laughter "Be yourself - not what others think you should be" - Marcia Graesch "Anna's just a sexy-looking lesbian tart" - A friend, trying to wind me up. It didn't work.
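For what it's worth, the "Get, merge, build and test locally, then check in" routine described above is easy to turn into a habit, or even a script. A rough sketch, assuming a Subversion-style command line (since that's what Paul uses) with placeholder build and test commands - adjust for whatever your project actually runs:

    # Rough sketch only: nothing reaches the repository until the locally
    # merged tree has been built and has passed its unit tests.
    # "msbuild"/"nunit-console" are placeholders for your real build/test steps.
    import subprocess
    import sys

    def run(*cmd):
        print("$ " + " ".join(cmd))
        return subprocess.call(cmd)

    def safe_commit(message):
        if run("svn", "update") != 0:
            sys.exit("update failed")
        # Some svn clients return 0 even when the update left conflicts behind,
        # so check the working copy status explicitly.
        status = subprocess.run(["svn", "status"], capture_output=True, text=True).stdout
        if any(line.startswith("C") for line in status.splitlines()):
            sys.exit("conflicted files - resolve them in a diff/merge tool first")
        if run("msbuild", "MySolution.sln") != 0:
            sys.exit("merged tree does not build - fix it before checking in")
        if run("nunit-console", "MyTests.dll") != 0:
            sys.exit("unit tests fail against the merged tree")
        run("svn", "commit", "-m", message)

    if __name__ == "__main__":
        safe_commit(sys.argv[1] if len(sys.argv) > 1 else "Merged, built and tested locally")

The ordering is the whole point: merge other people's changes into your tree first, prove the result still builds and passes the tests, and only then commit.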
-
I'm not a manager, just a developer with an interest in this stuff. My first problem is getting them to use source control for maintaining projects that aren't already under source control (that pre-date our purchase of Vault last year), or that are small enough/have a long enough time-scale that only a single developer works on them at one time.

Everyone else works in Check-Out/Edit/Check-In mode. I work in Edit/Merge/Commit mode on my projects. The Vault option 'Require exclusive check-outs' is not set. When working with others I simply use the Check Out command to get an exclusive check-out, so in effect it looks like I'm working that way. I only do an edit-and-merge, when working with others, when a user has a file already checked out, and only if I can't proceed with a different task - the file is on the critical path.

The difference between the modes is really only relevant when there's a conflict on the same source file. If you've divided the work up sensibly and the program structure is good, there shouldn't be too many conflicts between developers - you won't merge very often. The whole-class-in-one-file approach used by VB, VB.NET and C# (up to version 1.1 - partial classes in 2.0 should help here) tends to cause more conflicts if you've got large classes. Even then, it's rare that two (or more) developers will modify the same function at the same time and get a collision, so the simplistic automerge usually does do the right thing - if developer A changed line 3 and inserted 3 lines between lines 6 and 7, while developer B was working around line 100, it's unlikely that A's changes will cause any problem with B's (at least from a syntactic perspective). There could be run-time problems, but that's true of the check-out model too.

Vault has the right approach, IMO, in that it performs the merge on Get, if possible, is quite conservative in auto-merge, and leaves the changes in your pending change set. If a change has been made to a file in the change set since the last Get, the developer can't Commit. SourceUnSafe apparently does the merge on check-in, meaning that you can actually get code checked in that the developer never saw.

In either model it's sensible to minimise your changes, to reduce problems with conflicts. In new development I try to complete a small, complete feature point with one check-in; in maintenance each bug is a single change set, even if the bugs are very close together (this makes merging the change between branches easier). Stability. What an interesting concept. --
Comprehensive and useful reply, thanks Mike. What you said about partial classes coming up in 2.0 is a good point. regards, Paul Watson South Africa Colib and WebTwoZero. K(arl) wrote: oh, and BTW, CHRISTIAN ISN'T A PARADOX, HE IS A TASMANIAN!
-
Well put. You've got my 5 hun. :) Anna :rose: Riverblade Ltd - Software Consultancy Services Anna's Place | Tears and Laughter "Be yourself - not what others think you should be" - Marcia Graesch "Anna's just a sexy-looking lesbian tart" - A friend, trying to wind me up. It didn't work.
-
I've always found edit-merge-commit to be far, far preferable, particularly as my work is rarely localised to a small area of the source tree...I often get called on to do major UI refactoring "in place", which involves working on large numbers of files simultaneously (and for extended periods of time), while allowing others access to the same files. When a team member is doing that sort of work, an exclusive checkout environment becomes irritating very quickly! One thing I have learnt is never to check files in blindly. Always diff the local file first, and do a "Get Latest" to merge any changes into your local source before proceeding. If any changes are made by this process, build them locally first before checking in, and you should have no problems. If your SCC tool supports changesets (Vault and Hatteras do, for example) even better. :cool: I can honestly say that in nearly 10 years of using SourceSafe with multiple checkouts I've never had a serious merge problem. :) That said, one company I've come across recently is absolutely terrified of it. I guess they just don't think their developers could cope... Anna :rose: Riverblade Ltd - Software Consultancy Services Anna's Place | Tears and Laughter "Be yourself - not what others think you should be" - Marcia Graesch "Anna's just a sexy-looking lesbian tart" - A friend, trying to wind me up. It didn't work.
Anna-Jayne Metcalfe wrote: one company I've come across recently is absolutely terrified of it Make that two. When we first started using SourceSafe, we experimented with the merge facility, along with everything else. Our experience was disastrous. Incorrect code was checked in. File shares were broken, or files were branched unexpectedly during the merge. Sometimes correct code was lost due to the 'delete local file on check-in' :wtf: option (an option which, in my opinion, shouldn't even be available). We ended up deleting the entire database and starting from scratch with a load of source code from a 'known-good' build. Since then, we've used the exclusive check-out, modify, check-in model. I will grant you, our problems with the merge functionality were part of a larger problem in discovering 'best practices' with SourceSafe. The integration with Visual Studio is sufficiently rickety that a conservative approach is in our best interests.
Software Zen:
delete this;
-
SourceSafe is awful, and turning multiple checkouts on just makes it worse. Using SourceOffSite with it improves things considerably, though - now there's a company that actually understands SCM! :)
-
Taka Muraoka wrote: Any time you're relying on people to just do the right thing by themselves, you're asking for trouble This is where I see a lot of value in regular builds, smoke tests, that sort of thing - encouraging developers not to "put off" their own testing. But the fact is, if you're in enough of a hurry to skimp on testing, it doesn't matter whether you are prohibited from checking out a file; you'll make the modifications locally and then do the merge as soon as the file is available. Of course, you should then examine the other dev's changes, do a local build, and test the code you modified... but you're in a terrific hurry, remember? Either way, it comes down to having good, responsible people who do the right things at the right times.
-
Until a year ago, I had always used SourceUnSafe and hated it. I knew there were other solutions, but I had never had the opportunity to try them. Since I was picking the tools for my company, I went to Subversion + TortoiseSVN. I will NEVER go back to SS. Jeff Martin My Blog
-
Taka Muraoka wrote: All the source files were stored on a server, and their idea of source control was that you temporarily put your name at the top of a file when you started editing it, to alert anyone else opening the file We weren't at the same company, were we? ;) I'd agree with Taka here. I've been working through a CVS implementation at the moment, and it's great. There was a small group of J2EE developers where it worked beautifully. But now the team has grown fairly rapidly, and whilst the theory - that you only need to worry about your developers' capabilities and communication (i.e. Paul Watson's comment earlier: "That if you can't trust your devs to get SCM right then you should be worrying about other things first, like their coding capabilities.") - is great, the practice just isn't. Only one developer unfamiliar with the practice needs to come in - and they might be the best coder in the world, just unfamiliar with your particular SCM - and the whole thing breaks. You only need it to happen once, and the confidence goes. One broken window, and the next thing you know, the whole Jag has been stripped.

The same argument applies to build servers. Why do continuous integration servers run unit tests? Apply the same philosophy, and we could say "We don't NEED unit tests on the build server. All our developers unit test before they check in, and if they don't, we should be worrying about other things first..." Why rely on people using common sense when you can enforce it? ;) Peter Hancock My blog is here And they still ran faster and faster and faster, till they all just melted away, and there was nothing left but a great big pool of melted butter "I ask candidates to create an object model of a chicken." -Bruce Eckel
-
One of the things that I find particularly interesting from reading all of these comments above is that ANY of the people who have replied so far could use either method effectively. The problem isn't us. We're looking at what we feel. The issue I have at the moment is with the people who code but just aren't into coding for coding's sake. They're not all that interested in learning new techniques. It's a job, they get their paycheck, and they go home. They don't read forums when they should be with their partner, they don't program on weekends instead of walking in the sun, and they don't understand the finer nuances of source control. So the problem is far simpler: how do you get them to develop in a collaborative environment with the best possible chance of minimising damage? For mine, the answer is quite simple. Enforce safety. Enforce communication. Peter Hancock My blog is here And they still ran faster and faster and faster, till they all just melted away, and there was nothing left but a great big pool of melted butter "I ask candidates to create an object model of a chicken." -Bruce Eckel
-
By safety do you mean checkout-edit-checkin? Reading further, though, as the concept of branches and branch merges comes into play there is no getting away from diff and merge. Even if you use a checkout-edit-checkin policy, eventually the coder will need to merge a bug fix from a maintenance branch into the main trunk. If they haven't already been using merge and diff tools with edit-merge-commit, they are going to have a right fun time. As for 9-to-5 coders, that is a whole different debate. The industry will mature on that front, like most industries have. regards, Paul Watson South Africa Colib and WebTwoZero. K(arl) wrote: oh, and BTW, CHRISTIAN ISN'T A PARADOX, HE IS A TASMANIAN!
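That branch-to-trunk merge is edit-merge-commit in miniature, whatever checkout policy is enforced day to day. A hypothetical Subversion example (the repository URL, branch name and revision numbers are invented for illustration):

    # Hypothetical: port a bug fix committed as r123 on a maintenance branch
    # into a trunk working copy, reviewing the result before committing.
    import subprocess

    commands = [
        # Replay the branch change (r122 -> r123) onto the trunk working copy.
        ["svn", "merge", "-r", "122:123",
         "http://server/repos/project/branches/maint-1.0", "."],
        # Review exactly what the merge changed before committing anything.
        ["svn", "diff"],
        ["svn", "commit", "-m", "Port bug fix from maint-1.0 branch to trunk"],
    ]

    for cmd in commands:
        if subprocess.call(cmd) != 0:
            break   # stop and resolve any conflicts by hand, then carry on

Whoever does this needs to be comfortable reading a three-way merge, which is Paul's point: the skill is required sooner or later either way.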
-
>Why do continuous integration servers run unit tests? Apply the same philosophy, and we could say "We don't NEED unit tests on the build server. All our developers unit test before they check in, and if they don't, we should be worrying about other things first..." I think part of the answer to that is that each developer does not keep a complete, up-to-date working copy of the entire project. For large projects it is not feasible to have all the code in your working directory. So when you run your unit tests on the code you are about to commit, you are only doing it on a subset. Once committed, the CI server works against the entire project and finds integration issues. But your larger point is valid. We talk about trust on one hand but then go and test anyway. regards, Paul Watson South Africa Colib and WebTwoZero. K(arl) wrote: oh, and BTW, CHRISTIAN ISN'T A PARADOX, HE IS A TASMANIAN!
-
That's why we intend to move to Vault when we can. :cool: Anna :rose: Riverblade Ltd - Software Consultancy Services Anna's Place | Tears and Laughter "Be yourself - not what others think you should be" - Marcia Graesch "Anna's just a sexy-looking lesbian tart" - A friend, trying to wind me up. It didn't work.
-
They will need to deal with it eventually. But by enforcing a strict checkout-checkin principle first, you can avoid many of the merges right from the start and get users used to using SCM in the first place. I've had massive issues in just about every company I've worked in where people are just plain scared of merging. Strict checkout-checkin eliminates a huge number of merges; the branch manager then manages the merge of a fix into the mainline. Unfortunately, you need to set up the system for the lowest common denominator. Sure, it's great to educate the end user, but if they're fresh out of uni - well, none of the unis around here teach anything about source control. Let's just get them using it in an easier-to-grok model.

Personally, I really don't think the IT industry will mature for a long time yet. It's been over 40 years and we are nowhere near it. Other industries matured far more rapidly - look at production lines, food handling, medical, aerospace. Most of those showed huge gains in maturity within decades, yet IT is so far behind. I think it's mainly because the shape of IT now is nothing like it was forty years ago. It changes so fast. Even the fundamentals are different (processing power and memory are now cheap and manpower is expensive, versus 40 years ago when manpower was cheap and processing/memory were expensive). We need to accept that and be pragmatic.

I prefer the edit-merge model, but in my experience I've had more trouble with users under it than with checkout/edit/checkin. People (in my experience) grok the library model more easily. It doesn't make it technically better, but if people are using it... (Beta/VHS argument) Peter Hancock My blog is here And they still ran faster and faster and faster, till they all just melted away, and there was nothing left but a great big pool of melted butter "I ask candidates to create an object model of a chicken." -Bruce Eckel