It was intended to be BAU, but I said that I would go elsewhere if it continued. So it has stopped. Our CIO maintains that change is the enemy, so releases to Production are now only once a month. To create a staging database would probably be a 6 to 12 month project with a budget of at least $500,000, if it were approved at all. To open a port on a firewall takes 6 weeks. To increase the size of a database by 150GB takes 4 to 8 weeks. To develop a system to FTP files from an external server takes 6 months. Yes, I know anyone could knock up a script in half an hour! This is what happens when an organisation becomes terrified of change.
johnsyd
Posts
-
Production reports from a test database - is this really best practice?
Data for tax *is* a problem in the context we are in, despite the fact that it is past data. The mirrored reporting database is an excellent idea. It can be protected from alteration. However, we don't have one and are unlikely to get one any time soon. We downloaded production data into a testing database. The data (including past data) is unprotected. It could be changed by anyone between the download and running the report. So could the code that generates the report. Management told me there was nothing wrong with that particular approach - my posting was intended to see what the members here thought about it.
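For what it's worth, a minimal sketch (SQL Server assumed, database name invented) of how a restored reporting copy could be locked against alteration between the refresh and the report run:

    -- ReportingCopy is an assumed name for a database restored from a Production backup.
    -- READ_ONLY stops anyone changing the data (or the report code) between refreshes.
    ALTER DATABASE ReportingCopy SET READ_ONLY WITH ROLLBACK IMMEDIATE;

    -- Flip it back only when the next refresh is about to overwrite it.
    ALTER DATABASE ReportingCopy SET READ_WRITE WITH ROLLBACK IMMEDIATE;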
-
Production reports from a test database - is this really best practice?
Yes, we have just recently adopted a data masking policy for client personal data when copying Production data to Test - their names, addresses and tax identifiers are changed to random values. However, in this recent case that policy was not followed. I'm thinking mainly of our legal duty of care to provide accurate data to the Tax Office, but you're right - we have violated our own data masking policy as well. Not sure if data masking is enforced by law here, but it is company policy. We do have privacy laws, so we can't make our clients' data public, but the law mainly covers movement of data in and out of the organisation.
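As a rough illustration only, masking of the kind our policy describes might look something like this - the table and column names are assumptions, not our actual schema:

    -- Hypothetical schema: overwrite personal data with meaningless values
    -- immediately after copying Production down to Test.
    UPDATE dbo.Client
    SET    ClientName    = 'Client ' + CAST(ClientId AS varchar(10)),
           AddressLine1  = 'Address ' + CAST(ClientId AS varchar(10)),
           TaxIdentifier = RIGHT('000000000' + CAST(ABS(CHECKSUM(NEWID())) AS varchar(10)), 9);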
-
Production reports from a test database - is this really best practice?
That would actually make sense, but what these guys are doing is copying live data into a totally uncontrolled environment on an ad-hoc basis. The code that produces the reports and the data itself are all subject to change by anyone at any time.
-
Production reports from a test database - is this really best practice?
Funny you should raise this - we are getting audited in May by the regulatory agency for our industry ... I think they should know about this, especially as it is best practice!
-
Production reports from a test database - is this really best practice?
A bit hard on the testers though to have all their test cases regularly trashed, but we all have to make sacrifices to adhere to best practice, right?
-
Production reports from a test database - is this really best practice?
These guys aren't stupid - they refresh the test database from Production every time they need to run a report.
-
Production reports from a test database - is this really best practice?
I'd heard of testing in Production, but until I joined my current job I'd never heard of running live reports from a Testing environment (test code, test database). Apparently releasing code changes to Production outside the monthly release window is so dangerous and/or such bad practice that it is safer to take a copy of the Production database into the Test database and run the revised code from there. This was a compliance report to the Tax Office, by the way. My People Manager and her manager both maintain there is nothing wrong with this.

What could possibly go wrong? Well, only the main database was copied down, not the lookup databases etc. What if the main routine called another routine which had been changed? What if a tester or a batch process changed the data? What if the collation or other properties of the database were different?

What does the Code Project think? I don't think logic is going to make any difference, but majority opinion may sink in.
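For anyone tempted by this approach, a couple of quick checks would at least have flagged the problems above - a minimal sketch, with object and database names assumed:

    -- Run in the copied-down Test database: find code that still references OTHER
    -- databases (the lookup databases etc.) which were never refreshed from Production.
    SELECT OBJECT_NAME(d.referencing_id) AS referencing_object,
           d.referenced_database_name,
           d.referenced_entity_name
    FROM   sys.sql_expression_dependencies AS d
    WHERE  d.referenced_database_name IS NOT NULL;

    -- Compare collations of the source and the copy (database name assumed; if
    -- Production and Test are separate instances, run on each and compare by hand).
    SELECT DATABASEPROPERTYEX('MainDB', 'Collation') AS main_db_collation;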
-
Why stick to just one database?
Another magnificent specimen for my scrapbook of SQL horrors!! Why create new rows for an event when you can create a whole new database?
-
Why stick to just one database?
Yikes!! The wrongness of that approach is almost awe-inspiring! I'll add it to my scrapbook of SQL horrors.
-
Why stick to just one database?
If the databases are all on the same server instance, there is little performance impact. However, if they are on different server instances, there is a big impact. It complicates issues like disaster recovery - say one of the databases fails over but the others don't. You also have to keep all the database permissions in sync, which can become quite onerous - if a stored procedure accesses tables in 5 different databases, the right permissions have to be maintained in all of them. You also need to make sure the database-level settings are consistent. Backups need to be coordinated, so if a restore becomes necessary you're using backups taken at the same time.

My issue is when you have related tables and someone splits them into multiple databases ... I can't see any benefit in the splitting, and there are ongoing maintenance problems you introduce by doing it. You also have multiple points of failure. You may want to keep tables which are often JOIN'ed on different physical disks for performance, but another member mentioned that you can do this within the same database using FILEGROUPs.
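For reference, a minimal sketch of the FILEGROUP approach mentioned above - the database name is taken from the earlier post, while the paths, sizes and table definition are invented:

    -- Spread frequently joined tables across physical disks within ONE database.
    ALTER DATABASE ABC ADD FILEGROUP FG_Portfolio;

    ALTER DATABASE ABC ADD FILE
    (
        NAME = 'ABC_Portfolio_Data',
        FILENAME = 'E:\SQLData\ABC_Portfolio.ndf',  -- file on a second physical disk
        SIZE = 10GB
    ) TO FILEGROUP FG_Portfolio;

    -- Tables (or indexes) can then be placed on that filegroup.
    CREATE TABLE dbo.Portfolio
    (
        PortfolioId   int           NOT NULL PRIMARY KEY,
        ClientId      int           NOT NULL,
        PortfolioName nvarchar(100) NOT NULL
    ) ON FG_Portfolio;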
-
Relational database reinvented as a flat file...
Haha, I'm too lazy a programmer to make work for myself like that ;) I'm on contract :(
-
Relational database reinvented as a flat file...
Just remembered another stunning design stumble in the same area as the multiple database scenario ... they had a SQL table and decided that because the majority of requests required the output in XML format, it would be "best practice" (that weasel word again) to convert the entire table into a blob of XML. They then dropped the original table and recreated it as a table with 1 row and 1 column, into which they inserted the XML blob.

Unfortunately they had not realised that most requests wanted a subset of the table, not the entire table. So for each request, the XML blob had to be converted to a temp table, the appropriate SELECT run on it, and the results sent back to the requester as XML ... Phew!! Inserts, updates and deletes had to go through the same process: XML blob -> temp table -> apply insert/update/delete -> convert back to XML blob, save back in the "table".
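To make the round trip concrete, the read path looked roughly like this - all names are invented for illustration, and no, you should not do this:

    -- The one-row, one-column "table" holding what used to be a normal table.
    CREATE TABLE dbo.ClientsAsXml (Payload xml NOT NULL);

    -- Every read: shred the blob into a temp table, filter it, re-serialise as XML.
    SELECT  c.value('(ClientId)[1]',   'int')           AS ClientId,
            c.value('(ClientName)[1]', 'nvarchar(100)') AS ClientName
    INTO    #Clients
    FROM    dbo.ClientsAsXml AS x
    CROSS APPLY x.Payload.nodes('/Clients/Client') AS t(c);

    SELECT ClientId, ClientName
    FROM   #Clients
    WHERE  ClientId = 42
    FOR XML PATH('Client'), ROOT('Clients');

    -- Writes went the same way: shred to a temp table, apply the change,
    -- rebuild the XML blob and overwrite the single row.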
-
Why stick to just one database?
Exactly - they get away with it because this particular product has a limited number of clients and limited transaction volumes.
-
Why stick to just one database?
I work for a large company which has an odd habit of splitting off logically related tables into separate databases. So instead of one database ABC, you have databases ABC_CLIENT, ABC_PORTFOLIO, ABC_PRICING, etc. To join client data to their own portfolio data you need to go cross-database, and to include pricing data means yet another cross-database connection. A friend who works for another large company in the same industry says that his company thinks this is "best practice". No one I've talked to thinks this is a good idea, let alone "best practice". What do you think?
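To illustrate, with the split design even a simple client/portfolio/pricing query needs three-part names across three databases - the table and column names below are assumptions:

    -- Cross-database joins forced by the split design.
    SELECT c.ClientName,
           p.PortfolioName,
           pr.Price
    FROM   ABC_CLIENT.dbo.Client       AS c
    JOIN   ABC_PORTFOLIO.dbo.Portfolio AS p  ON p.ClientId     = c.ClientId
    JOIN   ABC_PRICING.dbo.Price       AS pr ON pr.PortfolioId = p.PortfolioId;

    -- With a single ABC database the same query is just dbo.Client, dbo.Portfolio
    -- and dbo.Price, and the tables can be related with ordinary foreign keys.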
-
Why does Visual Studio just not work?
Just plain broken.

Ask VS to Find Definition of a function sitting a few lines above in the same source file? (a) Nope, doesn't exist. (b) Here is a list of 67 places a function of that name exists in the Solution - please pick one. (c) Occasionally, it will take you where you want to go. And it can take over a minute to work this out.

Ask VS to Find Declaration of the function definition you're sitting on? Most of the time it grinds away for a minute, only to sink back exhausted on the definition again.

Build? If you're lucky! (a) Sometimes it actually identifies all the changed source and recompiles those objects - yay! (b) Often it misses some changed source and does not recompile the objects - yes, those projects are ticked as dependencies of the executable :-( (c) Quite often the incremental linker loses the plot and generates a corrupt binary. No kidding - I've changed one line in a method of a class in a .cpp file with not a template in sight, clicked Build and then run it ... Crash! Ordinal 99 not found in ThirdParty.dll (or an access violation indexing into a std::vector is an old favourite). Sigh ... Rebuild ... Run ... Crash! Ordinal 99 etc. Grrrr ... Manually delete all the output object files, intermediate linker files, executable, etc. Build ... Run ... Works now (phew).

VS2008 SP1 was a bit quirky, but VS2010 lives in a full-blown psychotic episode.
-
Decline and Fall
In hindsight there are always signs that a powerful empire has long since reached its peak and is in decline. Repeated failure at a core part of the business is one important sign. Microsoft's success has long been the ability to balance performance and user-friendliness at a reasonable price. It has not often been a pretty compromise - these things seldom are. The Microsoft developer flagship has always been their compilers, particularly the C++ compiler. The developer community has been waiting for a decent IDE from Microsoft for years now.

VS2010 was released recently, and we eagerly powered it up, salivating at the prospect of lightning-fast drop-down menus displaying all the members and methods of the class. No sir. To our utter astonishment, it was an order of magnitude worse than its disappointing predecessor, VS2008. You see a function call and click on Go To Definition. The horrifying news is that VS2010 is even more confused and doddering than VS2008. VS2008 has always been confused about function definition versus declaration, but at least it went to the wrong place quickly. VS2010 sinks into a senile stupor, consuming the entire resources of your computer. You cannot edit documents, view or reply to emails, do anything at all ... in fact you can't even soft-reboot, which I have repeatedly tried. 30 seconds later, it triumphantly displays the declaration of the function for which you requested the definition. Knowing it is pointless, you nevertheless click on the declaration and request to see the definition, hoping against hope that this time it will deliver. Your computer fan noise rises to a howling scream. Your colleagues are abusing you, shouting, "What is that bloody noise? Turn it off!" After a promethean struggle, which has consumed the entire resources of your 8 CPUs for half a minute, it again settles ponderously on the declaration.

You have to be kidding! A trained pigeon could find the definition in less time. The declaration is in MySource.h. The definition is in MySource.cpp. Here's a clue: the declaration has no body - the signature is terminated by a semicolon. The definition *does* have a body, enclosed by braces like this { ... }. We don't have whacko headers - there are no conditional #includes - it's all pretty standard. Yes, we do have #ifdef WIN32 (declare the Windoze stuff) #else (declare the Linux stuff) #endif, but surely that is not sending VS2010 nuts. Sure, conditional #includes could drive anyone crazy, but we don't do that.
-
Is Agile just a justification for bad business practices?
Yes, I don't much care which development methodology is flavour of the month either. As long as it is done properly, it is not the main issue. I agree completely with your point about people with no development experience running projects. <rant>The best they can do is to be book-keepers of the project - filling in task percentage-complete reports and updating project plans, critical paths, etc. This sort of thing tends to become a joke - I've been on a few projects with clueless managers where the tasks end up 90% complete halfway through and stay that way until the last day, when they jump to 100%! They can't guide the direction of the project because they have no idea themselves. Obviously they can't offer design advice. Another area where they fall down is where the team disagrees about how something should be done. They can't make a decision based on their own experience; all they can do is hope that the guy with the loudest voice is right. Or they take a vote. You then tend to stagger from one ill-informed decision to the next. Over time, you get a series of inconsistent, arbitrary choices which produces a badly integrated application.</rant>
-
The Horror... [modified]
Reminds me of a database course I had to take once. The teacher insisted that each table had to be placed in a separate database. She gave us a simple accounting example that ended up with 10 databases: an Account database, a Client database, an Invoice database, an InvoiceItem database, etc. Attempts to show how all the tables could be housed in one database were marked wrong & returned for reworking. She even refused to accept that the design was bad purely on performance grounds, which we could easily demonstrate (it was glacially slow)!
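For context, the answer that kept getting marked wrong looked roughly like this - one database, related tables, ordinary foreign keys (names reconstructed from memory, so treat them as illustrative):

    -- One database, four of the related tables from the accounting exercise.
    CREATE TABLE dbo.Client
        (ClientId int PRIMARY KEY, ClientName nvarchar(100) NOT NULL);
    CREATE TABLE dbo.Account
        (AccountId int PRIMARY KEY,
         ClientId  int NOT NULL REFERENCES dbo.Client(ClientId));
    CREATE TABLE dbo.Invoice
        (InvoiceId int PRIMARY KEY,
         AccountId int NOT NULL REFERENCES dbo.Account(AccountId));
    CREATE TABLE dbo.InvoiceItem
        (InvoiceItemId int PRIMARY KEY,
         InvoiceId     int NOT NULL REFERENCES dbo.Invoice(InvoiceId),
         Amount        decimal(18,2) NOT NULL);

    -- Even a trivial invoice total becomes a cross-database query in the
    -- one-table-per-database version; here it is a plain join.
    SELECT i.InvoiceId, SUM(ii.Amount) AS InvoiceTotal
    FROM   dbo.Invoice     AS i
    JOIN   dbo.InvoiceItem AS ii ON ii.InvoiceId = i.InvoiceId
    GROUP BY i.InvoiceId;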
-
A function for every minute of the day...
It's part of a real-time application with tasks that are interdependent with tasks performed by many other apps running over several networks. 9:32 was the start time for a group of tasks it needed to do. It's actually an upgrade from the now deprecated function IsBefore928() :-) We know the maximum time each task takes to complete, so we can use cutoff times as a simple way to coordinate tasks between different machines on different networks without them having to know anything about each other. A settings file for the task start/stop times would be better. But to be fair, as a couple of guys have pointed out here, it does what it is supposed to!