How do you develop your databases?
-
Statutory "sorry if you think this is a programming question - it's not meant to be" disclaimer. I'm interested in how other people handle development in a multi-developer team environment.

In our situation we have four (or so) developers working on a .Net system against a SQL Server back end. Each of us has a copy of the DB on our dev machines. To say that the DB schema changes frequently would be an understatement - so the SQL script to recreate all of the tables in the DB has been added to the .Net solution in Subversion. If I want to make a DB change, I get the latest version and lock it, run the SQL, make my changes, run a utility to recreate the script, then check it back in to Subversion.

This seems to me to be cumbersome, and somewhat prone to stuffing up (more than a few changes have been lost when someone runs the script, which then undoes their changes). So how are other people coping with this out there (assuming a similar sort of environment)?

(In a previous life I worked on a system where there was no dev DB - the live DB was updated directly by developers - so at least I know our system isn't the worst out there!)
___________________________________________ .\\axxx (That's an 'M')
I work in a very similar situation to you: 5 or so developers, and we all have local versions of the DB.

Separate from the software version we maintain a DB schema version, which is stored in a version table in the DB (in the production DB this doubles as a version history/update log). The client knows what version of the DB schema it is compatible with, and won't run against anything else. Any change made to the DB must be added to the Create SQL as well as the Update SQL (OK - so there's an extra overhead, but it's not too bad). Scripts are all held in source code control. The updates are all in versioned sections, and we have an in-house sync tool that uses the update script and can bring a DB from any schema version up to the version the client is compatible with.

Now I've written this down it's starting to look complicated, but we haven't had a single problem with it over 4 years, using it on dev machines and to perform upgrades on live DBs too.

Adam
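PS: the version check itself is trivial - something along these lines (a minimal sketch; the table, column and version number here are made up to illustrate the idea, not taken from a real schema):

```csharp
using System;
using System.Data.SqlClient;

// Minimal sketch: assumes a SchemaVersion table maintained by the create/update
// scripts. The client refuses to run against any other schema version.
static class SchemaVersionCheck
{
    const int CompatibleSchemaVersion = 42;   // baked into this build of the client

    public static void EnsureCompatible(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT MAX(VersionNumber) FROM SchemaVersion", conn))
        {
            conn.Open();
            int dbVersion = Convert.ToInt32(cmd.ExecuteScalar());

            if (dbVersion != CompatibleSchemaVersion)
                throw new InvalidOperationException(string.Format(
                    "Database schema is at version {0}, but this client requires version {1}.",
                    dbVersion, CompatibleSchemaVersion));
        }
    }
}
```

The sync tool then just applies the versioned sections of the update script, one after another, until the number in that table matches the number the client expects.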
-
We use a single development database on a server accessible to everyone on the project. By the time there are a lot of people developing, the core structure has already been developed, so most of the changes being made are subtle tweaks - additional columns, new views and SPs. If you need to make changes that are likely to cause problems, you ensure everyone else knows about it! This approach seems to work quite well in a small team. I doubt it'd scale well though.
-
How's about you try the "General database forum"? The only place I worked that did anything like that wrote a script for each change. There was a create script that was based on the first version of the table (or whatever). If you needed to add a column (or whatever), you wrote a script to add that column. Only the new scripts had to be executed when a new version was deployed. If we had to create a new database (for a new client or test system) then all the scripts would be executed in order.

At another place, I have no idea; I had to submit a request and maybe the DBAs would honor it.

More recently, I simply developed on the production systems; it eliminated a whole range of headaches. :-O No, I'm serious, I did. I'm not saying it's the right thing to do, but I did it, like eating bacon.
-
I currently implement the same (full DB create script in SVN), but with me as the only developer :) I would say it does not differ from any other multi-user development activity, that is, having several users update the same big file via SVN. You do not even need to lock the file: merging should be OK (though SQL Server still does not know how to create scripts in a repeatable manner: the order of objects is always different, so a diff on changes becomes almost meaningless).

I think you all should sit down and discuss the procedure. I would require, prior to checking in changes, running the TortoiseSVN "check for modifications" command with "check repository". If remote changes are discovered in the repository, one should update the local script, create the updated DB, then apply their changes and verify the stuff works, before committing the changed script. This is the procedure that works in general cases for code, as well as anything else.
dmitri_sps wrote:
with me as the only developer
Certainly makes life easier!
dmitri_sps wrote:
merging should be OK
Not in my experience - as every line of the script is commented with the date and time, which seems to confuse the pants off Subversion. Also, the file is Really large - especially when including data (such as lookup tables etc.)
dmitri_sps wrote:
I think you all should sit down and discuss the procedure.
lol! :laugh: :laugh: :laugh: (you'd have to work here to appreciate that!)
___________________________________________ .\\axxx (That's an 'M')
-
There's the old way, and the new way. In both cases, databases exist on a per-machine basis. For our current version, we have a large number of SQL scripts run by a special tool. It also inserts sample data. Maintaining it can be... fun, although we've hit no real problems yet. For the new version, the app itself is capable of generating or synchronising the entire schema from interface definitions in an assembly, and sample data can be added automagically too.
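The heart of it is just reflection over the interface definitions - something like this, heavily simplified (the attribute and interface here are invented for the sketch rather than lifted from our code):

```csharp
using System;
using System.Linq;
using System.Reflection;

// Invented for the sketch: marks an interface property as a database column.
[AttributeUsage(AttributeTargets.Property)]
class ColumnAttribute : Attribute
{
    public string SqlType { get; set; }
}

interface ICustomer
{
    [Column(SqlType = "INT NOT NULL PRIMARY KEY")]
    int Id { get; }

    [Column(SqlType = "NVARCHAR(100) NOT NULL")]
    string Name { get; }
}

static class SchemaGenerator
{
    // Builds a CREATE TABLE statement from the [Column]-decorated properties of a
    // contract interface. A real tool would also diff this against the live schema.
    public static string CreateScriptFor(Type contract)
    {
        var columns = contract.GetProperties()
            .Select(p => new { p.Name, Attr = p.GetCustomAttribute<ColumnAttribute>() })
            .Where(x => x.Attr != null)
            .Select(x => "    [" + x.Name + "] " + x.Attr.SqlType);

        return "CREATE TABLE [" + contract.Name.TrimStart('I') + "] (\r\n"
             + string.Join(",\r\n", columns) + "\r\n);";
    }
}
```

Calling SchemaGenerator.CreateScriptFor(typeof(ICustomer)) gives you the CREATE TABLE statement; synchronising is then a matter of comparing that against what the DB already has.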
descenterace wrote:
the app itself is capable of generating or synchronising the entire schema from interface definitions in an assembly, and sample data can be added automagically too.
Awesome!
___________________________________________ .\\axxx (That's an 'M')
-
Simon Capewell wrote:
the core structure has already been developed, so most of the changes being made are subtle tweaks
I think that's the problem with ours - it didn't occur to me until I read your post that the reason it is so bad where I am now is that, because there has been no design (and I mean NONE), the changes aren't minor - they can involve massive wholesale changes: splitting a single table into many, merging many into one, etc.
___________________________________________ .\\axxx (That's an 'M')
-
Maxxx_ wrote:
...every line of the script is commented with the date and time
There is a setting in the SQL Server script wizard, Include Descriptive Headers, which is on by default (one of many :~ changes MS has made to this wizard lately). Turn it off to make life easier, but with the changing order of definitions, I doubt it will improve things much :sigh:
Maxxx_ wrote:
dmitri_sps wrote: I think you all should sit down and discuss the procedure. lol! (you'd have to work here to appreciate that!)
So here you are: this is the problem :laugh:
-
Maxxx_ wrote:
because there has been no design (and I mean NONE), the changes aren't minor - they can involve massive wholesale changes: splitting a single table into many, merging many into one, etc.
Oh that's nasty. Even if you've got a really nicely automated update system going, those kinds of changes are going to break lots of things for everyone. When we've had to do core changes, we send one guy off to hack away at it for a month on his own, documenting everything that's being changed and finishing with a big horrible merge at the end. It requires some advance planning to make sure the merge is actually feasible ;)
-
There is always that where clause waiting to escape your chains and disappear as you hit F5 to update just one row. :~
-
I do it through a set of scripts executed by a driver script, which is configured through config files. Schema changes are made according to the change log; code is completely redeployed at every update. All the changes/code are introduced in the form of scripts registered in one of the config files. All the code is under source control - Subversion on my machine and/or TFS for the projects. Most of the time I don't even know where my databases are deployed - developers and testers are able to install and update their local and shared copies by themselves.

I described it in an article here: http://www.simple-talk.com/sql/database-administration/deploying-database-developments/ Please mind that while the article describes the approach I've been using all over the place for a few years, the script attached to the article was just rewritten and was pretty buggy :). I have an updated version of it, should you be interested - email me.

But the question may be too broad - I use Erwin for the model, generate the schema from it, and put it into the scripts. For the changes I usually use Red Gate SQL Compare and modify the output as needed. For the procs I use Management Studio + SQL Toolbelt + the CLCL utility (for a whole whack of templates for any occasion). Security and other config files are XML files, so I edit them with whatever's handy. HTH
-
Most solutions I see or hear about are based around ignoring change control: giving developers the power to rapidly change the database, or building the DB on build, or generating the DAL on build, yadda, yadda, yadda. The best solution is to have one person, or process, to control changes to the DB. Each developer may have his or her own database, but to get the changes into the actual dev database they have to go through the documented change control process, get approval, etc. This allows the maximum number of people to work on a system effectively. (For a staff of less than 2 I wouldn't bother.)
Need software developed? Offering C# development all over the United States, ERL GLOBAL, Inc is the only call you will have to make.
If you don't ask questions the answers won't stand in your way.
Most of this sig is for Google, not ego.
-
Red Gate SQL Compare. It shows you the differences between two databases (e.g. master, local).
-
We use migratordotnet (find it at http://code.google.com/p/migratordotnet/). Basically it allows you to version your database schema by writing C# code to alter the database schema. You can also insert (demo) data through the same mechanism. You make a separate project in your solution that generates an assembly. You then feed this DLL to the migrator tool (or make an nAnt task) and it updates the schema. You can also roll back the schema to previous versions.

Each time you update your source, the migration is run on your local SQL DB. You thus always have the latest schema. When our released software runs the first time after installation, it asks for a DB server and its credentials and automatically generates the DB! Upon upgrading, the schema is upgraded too!

What do you guys think of our scenario?
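For anyone who hasn't seen it, a migration is just a numbered class with Up/Down methods - roughly like this (written from memory of the migratordotnet samples, so check the project docs for the exact namespaces and signatures):

```csharp
using System.Data;
using Migrator.Framework;

// Rough sketch from memory of the migratordotnet samples - verify against the docs.
// The attribute number orders the migrations; Up applies the change, Down undoes it.
[Migration(20080815103000)]
public class AddCustomerTable : Migration
{
    public override void Up()
    {
        Database.AddTable("Customer",
            new Column("Id", DbType.Int32, ColumnProperty.PrimaryKeyWithIdentity),
            new Column("Name", DbType.String, 100, ColumnProperty.NotNull));
    }

    public override void Down()
    {
        Database.RemoveTable("Customer");   // lets the tool roll the schema back
    }
}
```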
Dennis Decoene ML Solutions -The Micro-ISV bringing you next generation business software - http://www.mlsolutions.be
-
Not to "rock the boat" and I know changes do happen. But, the DB schema should be mapped out well in advance of even a single line of code being written to interface it. Also, a "single" instance that is accessed by all is always the best way to go. Also, the application should have a tier design, so only one person needs to update the DAL. Whoever updates this layer after being notified of the DB changes, would then document the changes to the DAL so the other developers know what they need to change on their end. There is rarely a need for all developers to even see the DB itself, they should only be concerned with the data coming from the DAL to their business logic layer. e.g. If someone separates 1 large table into 3 smaller tables for efficiency, I could care less. I only care about what's being returned, not how it's aggregated before hand. My 2 Cents
-
We're in a similar situation - major structural changes to try and impose some sanity on a non-designed DB (but with thousands of bespoke reports running against it, it still has to look as bad as it ever did :sigh: ).

What we've done is create a tool to read in numbered scripts, which are kept under source control, run the statements in them, then bin them. When they're run against a production DB the script number is recorded in a table, so a check can be made to stop the same script being run more than once (and a check can be made in a client app for what schema version its DB is really at). The contents of each script are limited to a single transaction.

This system's managed to get a complete nightmare under control. The only thing the devs have to do occasionally is say "I'm taking scripts 345 to 349 - any objections?". I guess we could formalise that bit, but it's not worth it in a small team.
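Stripped right down, the runner is not much more than this (a sketch with illustrative names, minus all the real error handling, and assuming one batch per script - no GO separators):

```csharp
using System;
using System.Data.SqlClient;
using System.IO;
using System.Linq;

// Sketch of a numbered-script runner: each *.sql file is applied at most once,
// inside its own transaction, and its number is recorded in an AppliedScript table.
static class ScriptRunner
{
    public static void Run(string connectionString, string scriptDirectory)
    {
        var files = Directory.GetFiles(scriptDirectory, "*.sql")
            .OrderBy(f => int.Parse(Path.GetFileNameWithoutExtension(f)));

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            foreach (var file in files)
            {
                int number = int.Parse(Path.GetFileNameWithoutExtension(file));

                using (var check = new SqlCommand(
                    "SELECT COUNT(*) FROM AppliedScript WHERE ScriptNumber = @n", conn))
                {
                    check.Parameters.AddWithValue("@n", number);
                    if ((int)check.ExecuteScalar() > 0)
                        continue;   // already applied to this database
                }

                using (var tx = conn.BeginTransaction())
                using (var apply = new SqlCommand(File.ReadAllText(file), conn, tx))
                using (var record = new SqlCommand(
                    "INSERT INTO AppliedScript (ScriptNumber, AppliedOn) VALUES (@n, GETDATE())",
                    conn, tx))
                {
                    apply.ExecuteNonQuery();
                    record.Parameters.AddWithValue("@n", number);
                    record.ExecuteNonQuery();
                    tx.Commit();
                }
            }
        }
    }
}
```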
-
DaxaDan wrote:
We're in a similar situation - major structural changes to try and impose some sanity on a non-designed DB (but with thousands of bespoke reports running against it, it still has to look as bad as it ever did :sigh: )
Refactor everything inside and then create a view that looks like the WTFbase for legacy apps while requiring all new development to use the good schema?
Today's lesson is brought to you by the word "niggardly". Remember kids, don't attribute to racism what can be explained by Scandinavian language roots. -- Robert Royall
-
Devs worked locally on dev DBs and submitted finished requirements to the DBA, who folded them into the master DB. Does your database schema change that much after release? You aren't growing columns on all of your tables, are you?
This statement is false
-
This happened to me once on a production database. Your heart feels really weird sitting in your stomach as you watch every row in production get updated.
I actually won praise for deleting the master table from a production database once... for having instituted the backup strategy the week before :laugh: It might've been different if it hadn't been minutes after the backup had finished :wtf: