Testing Existing Code
-
The trend now is that everybody does (or at least talks about) Unit Testing. Two terms are commonly used: 'Automated Testing' and 'Test Driven Development'. The meaning of the latter is clear to me: you write tests before the code, so errors cannot slip into the code (the test-fail-code-pass methodology). But what does 'Automated Testing' really refer to? Is it a method that can be applied to an existing code base? The terminology suggests so, but it may also be misleading. In other words: is there a productive way to test existing code efficiently (not manually)? Thanks.
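By the latter I mean a cycle roughly like this (a minimal sketch using Python's unittest, with a made-up slugify function standing in for real code):

```python
import unittest

# Step 1: write the test first. It fails, because slugify does not exist yet.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Testing Existing Code"), "testing-existing-code")

# Step 2: write just enough code to make the test pass, then re-run the test.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

if __name__ == "__main__":
    unittest.main()
```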
-
blackjack2150 wrote:
'Automated Testing'
To me, this is very closely tied to Unit Testing. Automated means (to me, again) checking out the code during the night, compiling it and running all the unit tests on a dedicated test machine. That is what "automated" stands for. When you come in in the morning, you have an email with the test results waiting for you, especially the ones that failed.
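As a rough sketch of what such a nightly job might look like (everything here is a placeholder: the repository URL, the mail server and the addresses):

```python
import smtplib
import subprocess
from email.message import EmailMessage

# Check out the latest code (placeholder repository URL).
subprocess.run(["git", "clone", "https://example.com/project.git", "nightly"], check=True)

# Run the whole test suite; a compile step would go here for a compiled language.
result = subprocess.run(
    ["python", "-m", "unittest", "discover", "-s", "nightly"],
    capture_output=True, text=True,
)

# Mail the results so they are waiting in the morning (placeholder server/addresses).
msg = EmailMessage()
msg["Subject"] = "Nightly tests: " + ("PASSED" if result.returncode == 0 else "FAILED")
msg["From"] = "build@example.com"
msg["To"] = "team@example.com"
msg.set_content(result.stdout + result.stderr)
with smtplib.SMTP("mail.example.com") as smtp:
    smtp.send_message(msg)
```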
-
Automated testing generally refers to programmatic, rather than user, testing, and is used for regression testing of an application. In test-driven development you may well create a script to test what is being developed, e.g. with JTest in Java, and then run the script at each iteration. Now the important bit is to keep all these tests, and after eighteen years and three months you have 863,497 scripts. Combine them and there is your 'automated testing'. It's fun, and at every release you can nearly always guarantee that 612 tests will fail. Hopefully this is because the test case is no longer valid, rather than because some monkey has stolen all the peanuts.
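Combining them is the easy part; something like this sketch (directory name invented) runs the lot and tells you how many fell over:

```python
import unittest

# Gather every test script under tests/ into one big regression suite
# (the directory name is just an example).
suite = unittest.defaultTestLoader.discover("tests")

result = unittest.TextTestRunner(verbosity=1).run(suite)

# At release time, count how many of the 863,497 didn't make it.
print(f"{len(result.failures) + len(result.errors)} tests failed")
```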
Panic, Chaos, Destruction. My work here is done.
-
williamnw wrote:
Now the important bit is to keep all these tests, and after eighteen years and three months you have 863,497 scripts. Combine them and there is your 'automated testing'. It's fun, and at every release you can nearly always guarantee that 612 tests will fail. Hopefully this is because the test case is no longer valid, rather than because some monkey has stolen all the peanuts.
And what happens when there is an extra failure? Do you need to go through all 613 failures to find it? Or does it simply get ignored like the other 612? Broken unit tests are worse than anything else.
xacc.ide - now with TabsToSpaces support
IronScheme - 1.0 alpha 4a out now (29 May 2008)
-
leppie wrote:
does it simply get ignored
Oh no! That's the fun of having automated testing. It automatically provides work, since either the script (the usual case) or the code needs to be changed.
leppie wrote:
Broken unit tests are worse than anything else
What about missing unit tests?
Panic, Chaos, Destruction. My work here is done.
-
Automated testing refers just to the way the tests are run and the results analyzed. For instance, I have rarely had the opportunity to develop unit tests (only once, IIRC), but I have used automated tests for functional, integration and regression testing.
-
Automated testing is writing tests that run automatically. To me, the most important reason to have them is that when, in six months' time, you come to add features to the app, you can run all the automated tests again to make sure you didn't break anything. If an app doesn't have unit tests and you are going to extend it, you should add them before making any extensions; they protect the existing functionality while you write the new.
My test scripts are _fully_ automated with the build. By that I mean that absolutely everything is done automatically, right from creating databases at the start, through running all the tests, to deleting or restoring the databases at the end. And yeah, sometimes I make a few changes and a unit test breaks, and it turns out some assumption in the test was wrong, but that is also useful: it means I take another look at the area and often end up writing a few extra test cases to cover the missed assumptions.
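As a sketch of the shape of those tests (an in-memory SQLite database here purely as a stand-in for whatever database the real build creates and throws away; the table is made up):

```python
import sqlite3
import unittest

class OrderRepositoryTests(unittest.TestCase):
    def setUp(self):
        # Create a fresh database before every test; nothing is shared between tests.
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

    def tearDown(self):
        # Throw the database away afterwards, just like the build does.
        self.db.close()

    def test_inserted_order_can_be_read_back(self):
        self.db.execute("INSERT INTO orders (total) VALUES (?)", (9.99,))
        total = self.db.execute("SELECT total FROM orders").fetchone()[0]
        self.assertEqual(total, 9.99)

if __name__ == "__main__":
    unittest.main()
```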
Simon
-
williamnw wrote:
leppie wrote: "Broken unit tests are worse than anything else." What about missing unit tests?
Nope, both wrong. The worst is unit tests that exist, pass fine, but don't actually test anything. Where I used to work, someone had written a unit test, got some logic wrong in it, and basically the test could never fail. It hid a bug that should have been highlighted much earlier. So there was all the confidence gained from unit testing, but none of the actual advantages. :laugh:
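The classic shape of it is something like this (a made-up example, not the actual code from that job):

```python
import unittest

def apply_discount(price, percent):
    # Buggy production code: this returns the discount, not the discounted price.
    return price * percent / 100

class TestDiscount(unittest.TestCase):
    def test_ten_percent_off(self):
        # Logic error in the test: 'expected' is computed by calling the very
        # function under test, so the assertion compares the code with itself
        # and can never fail, no matter how wrong apply_discount is.
        expected = apply_discount(200, 10)
        self.assertEqual(apply_discount(200, 10), expected)

if __name__ == "__main__":
    unittest.main()
```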
Simon
-
blackjack2150 wrote:
Is there a productive way to test existing code in an efficient manner?
There's a book called Working Effectively with Legacy Code that appears to be helpful for this task. I have the book; unfortunately I haven't had time to do more than read the first chapter or two, which is why I say "appears to be helpful." :) BTW, his definition of "legacy code" is code without tests. He justifies this definition in the preface.
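One commonly suggested approach for code without tests is to start with "characterization" tests: tests that simply pin down what the existing code does today, whatever that is, so later changes can be made safely. A minimal sketch (the format_invoice function is made up):

```python
import unittest

def format_invoice(amount):
    # Imagine this is old code nobody dares to touch.
    return "TOTAL:" + str(round(amount)) + " EUR"

class CharacterizationTests(unittest.TestCase):
    def test_pins_down_current_behaviour(self):
        # Not asserting what the code *should* do, only what it *does* do now,
        # so any later change that alters the output gets flagged immediately.
        self.assertEqual(format_invoice(10.4), "TOTAL:10 EUR")
        self.assertEqual(format_invoice(10.5), "TOTAL:10 EUR")  # yes, really (round-half-to-even)

if __name__ == "__main__":
    unittest.main()
```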
Kevin
-
Simon Stevens wrote:
So there was all the confidence gained from unit testing, but none of the actual advantages.
It's always emphasized that unit tests don't guarantee your code is correct, but yeah...
-
Simon Stevens wrote:
The worst is unit tests that exist, pass fine, but don't actually test anything.
Yes, I think this is Microsoft's problem now. They've either fired, or moved, all their STEs (Software Test Engineers). They now only have SDE/Ts (Software Design Engineers in Test). The difference is that STEs used to test all the code by hand, while SDE/Ts write code to exercise it. An SDE/T can perform many more tests repeatably on a given build, in a given timeframe, but it requires much more effort to add a new test to the suite and to verify that the test is actually testing what it should do. I think there's a massive problem with false negatives (i.e. the test didn't fire, but should have).
DoEvents: Generating unexpected recursion since 1991