Automated testing of web applications: is it a waste of time and resources?
-
We have a separate team that uses some tools and scripts to test our web applications, and the management loves it. Somehow I fail to see the benefits of doing this.
1. The testing scripts they write can only test some simple things.
2. If they really want to (regression) test every important feature, the testing script can become more complicated than the web application itself (at least harder to write).
3. Who is going to test the testing script?
I think it just creates a new problem instead of solving an existing one. What do you think?
-
I don't think you can ever eliminate manual testing of a UI completely. Automating simple, repetitive UI testing leaves more time for complicated tests and removes tedium. The issues you are bringing up seem more like management issues than problems with automated testing. Automated testing isn't a panacea, but it can be useful when used properly.
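A sketch of what automating those simple, repetitive checks can look like (an editorial example, not from this thread; the demo server and its page list are invented stand-ins for a real application):

```python
# A minimal "smoke test": request a list of pages and record each
# HTTP status, so a human never has to click through them again.
# The local server below stands in for the real web application.
import http.server
import threading
import urllib.error
import urllib.request

class DemoApp(http.server.BaseHTTPRequestHandler):
    PAGES = {"/", "/login", "/reports"}

    def do_GET(self):
        # Known pages return 200; anything else returns 404.
        status = 200 if self.path in self.PAGES else 404
        self.send_response(status)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html>ok</html>")

    def log_message(self, *args):  # keep output quiet
        pass

def smoke_test(base_url, paths):
    """Return a dict mapping each path to its HTTP status code."""
    results = {}
    for path in paths:
        try:
            with urllib.request.urlopen(base_url + path) as resp:
                results[path] = resp.status
        except urllib.error.HTTPError as err:
            results[path] = err.code
    return results

server = http.server.HTTPServer(("127.0.0.1", 0), DemoApp)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"

report = smoke_test(base, ["/", "/login", "/reports", "/missing"])
server.shutdown()
print(report)
```

The point is not sophistication: a dumb script like this, run on every build, frees the humans for the tests that actually need judgment.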
This blanket smells like ham
-
The only use I've ever had for automated web app testing is load testing, to see how responsive the app is under load. It's extremely useful for that and almost indispensable if you're writing an app that will be heavily used. For bug hunting and functionality testing, manual is best.
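The load-testing use described above can be sketched roughly like this (a hedged illustration; the slow local handler and all the numbers are invented, and real load tools add ramp-up, think time, and richer reporting):

```python
# Fire concurrent requests at a server and measure response latencies.
# The tiny handler sleeping 10 ms stands in for the app under test.
import http.server
import statistics
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class SlowApp(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.01)  # simulate server-side work
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), SlowApp)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def timed_request(_):
    start = time.perf_counter()
    urllib.request.urlopen(url).read()
    return time.perf_counter() - start

# 50 requests from 10 concurrent workers.
with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(timed_request, range(50)))
server.shutdown()

print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
print(f"max latency:    {max(latencies) * 1000:.1f} ms")
```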
When everyone is a hero no one is a hero.
-
Any kind of automated test is always a welcome addition. I am currently developing a complex web application, and I have made sure that I have automated tests (though by no means perfect); they have already helped me, as I refactor the code a lot. Here is my overall strategy for web testing:
1. There should be little or no code in the .aspx or .aspx.cs file. I rely on data binding and develop appropriate data-binding components. That way I can unit test those components. ASP.NET MVC is supposed to be much better for this.
2. I develop automated tests for each page. I use ASP.NET unit testing for that, which is limited, but it is better than having nothing.
3. For JavaScript, I rely on the ASP.NET AJAX JavaScript testing framework, which is pretty good.
We are also planning to use TestComplete, an automated testing tool which works well for the web. Automated tests are always way better than manual testing. They may be complex to write, but they are always well worth the trouble. In my experience, without them you end up spending more time tracking and fixing regression issues, things which become much simpler once you have automated tests.
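The first point of the strategy above (keep the page thin, move the logic where a unit test can reach it directly) might look like this when sketched outside ASP.NET; the `OrderSummary` class and its fields are invented for illustration:

```python
# Pure logic that a page's code-behind would otherwise bury: pulled
# into a plain class, it can be unit tested with no web server at all.
class OrderSummary:
    def __init__(self, items):
        self.items = items  # list of (name, unit_price, quantity)

    def total(self):
        return sum(price * qty for _, price, qty in self.items)

    def rows_for_binding(self):
        # The data the page would bind to its grid: name and line total.
        return [(name, price * qty) for name, price, qty in self.items]

# A "unit test" needs nothing but the class itself:
summary = OrderSummary([("widget", 2.50, 4), ("gadget", 10.00, 1)])
assert summary.total() == 20.00
assert summary.rows_for_binding() == [("widget", 10.00), ("gadget", 10.00)]
print("order summary logic verified")
```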
Co-Author ASP.NET AJAX in Action CP Quote of the Day: It is the same Friday that blooms as a new enriching day with novelty and innovation for us every week. - Vasudevan Deepak Kumar
-
It looks like you are a one-man team: you do design, coding, and testing all by yourself. With multiple teams involved, your strategies may not work as well. Thanks.
-
Automated tests (and, for that matter, my strategy) have nothing to do with big teams or small teams. :) Designing for testability has nothing to do with team size either. Software has to be tested, and if a manual test can be automated, that is a great boon: a person does not have to waste time testing the same thing again and again. Not all tests can be automated, for sure, but that is not an excuse for not automating tests. Tests may become complex, but that is not an excuse either. Developers don't like to write automated tests, and that should not be an excuse either. I think a lot of time is spent fixing regression bugs (I wish I had some kind of statistics); at least I know I spend a lot of time fixing such things. Some of the complexities associated with automating tests can be solved by investing in a good testing tool (again, nothing to do with small or big teams). In fact, if a team is big, it is extremely necessary to ensure that things get continuously integrated, and for that automated tests are extremely useful.
-
Andy Brummer wrote:
Automating simple repetitive UI testing leaves more time for complicated tests and removes tedium.
What I am thinking is, simple UI testing is not worth automating. Logging on to a web application and navigating to a handful of pages is not time consuming and does not require much skill or knowledge; even a manager can do it ( :) ). If you write a testing script, you have to make sure the script does the right thing, and you also have to maintain it when the application changes, which can be time consuming. When it comes to testing what the application actually does, such as saving a record to the backend database, sending an e-mail to the customer, or changing the color of some displayed items on the web page, it is almost impossible to write the script.
-
1. It is amazing how many errors you catch when "just testing for simple things".
2. The break-even point depends on how much change is expected.
3. The test script: my answer to that is below.

I believe in three things: many simple tests, manual testing, and one or two "Rube Goldberg tests".

Simple tests
Make sure SquareRoot actually throws on negative input, the Name property of a new FileCorrupter instance actually is empty, and so on. Definition: a simple test is written quickly. It is independent: if a test isn't applicable anymore, you can change or remove it without affecting other tests. If it fails, it takes a few seconds to fire up the debugger and see why it fails. Someone new to the project, with the specification in hand, immediately understands the test. There are always simple things that are insanely likely to go wrong. If you are clever-paranoid enough to know them beforehand, good! If not, don't worry: if your manual testing turns up a cause that would be simple to test, add that test.

Rube Goldberg
No matter how complex your software is: if you pull the big trigger, it must catch the roadrunner. Ideally, large parts of your machine are exercised (coverage will still be less than 50%, but don't worry). It's just to prove that the pieces of the final thing actually play together, something your manual testers with their specific tasks may overlook. But again, the test itself is comparatively simple (even if it sets something complex into motion), and it can be automated.

Manual testing
Don't aim to replace manual testing. A good tester will see many things that you don't even ask him to check for.

All things considered, writing, testing and submitting a simple test may take 15 minutes of keyboard time. Finding a missing initialization that causes data sometimes not to be submitted may take two days. That's a factor of 64. Write 63 silly tests to catch that one bug, and you are still ahead.

Conclusions ;)
Good testers are hard to come by, and harder to keep, so don't waste them on your silly typos! That's IMO the whole point of automated tests. I also found it reduces "ping-pong bugs" that get re-assigned rather than resolved. I guess there are some social factors behind it: some developers (like me) are cocksure they don't make simple errors, and assume a complex reason, or another coder's fault, first. Others are more self-focused: if you ask them anything, they immediately start to wonder if it's their fault somehow. Automated tests teach the first type to better judge their code, and the second type to build more confidence in their own work.
-
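A hedged sketch of the "many simple tests" peterchen argues for above, with invented Python stand-ins for his SquareRoot and FileCorrupter examples:

```python
# Each test is quick to write, independent of the others, and obvious
# to a newcomer holding the specification. square_root and
# FileCorrupter are illustrative stand-ins, not real project code.
import math

def square_root(x):
    if x < 0:
        raise ValueError("square root of negative number")
    return math.sqrt(x)

class FileCorrupter:
    def __init__(self):
        self.name = ""  # a fresh instance has an empty name

def test_square_root_rejects_negative():
    try:
        square_root(-1)
    except ValueError:
        return True
    return False

def test_new_file_corrupter_has_empty_name():
    return FileCorrupter().name == ""

assert test_square_root_rejects_negative()
assert test_new_file_corrupter_has_empty_name()
assert square_root(9) == 3.0
print("all simple tests passed")
```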
Xiangyang Liu wrote:
What I am thinking is, simple UI testing is not worth being automated. Logon to a web application, navigate to a handful of pages is not time consuming and does not require much skill or knowledge, even a manager can do it ( :) ).
What if your change to a library breaks one page out of 500, or only shows up for a specific user type? If you are just checking a handful of pages with every change, that isn't enough testing to matter; at that point you really aren't doing much testing at all.
Xiangyang Liu wrote:
If you write testing script, you have to make sure the script does the right thing and you also have to maintain it when the application changes, which can be time consuming.
Only write a test if it saves time; if it doesn't, then why write it in the first place? You should only do automated testing when it saves time and effort. You need better testing tools. For one of the jobs I worked on, I wrote an XPathNavigator for the IE DOM, which made it much faster to write decent automated tests using IE. The real issue comes down to tool support for writing tests: if the tool is too complex to use for the tests you want to write, then get a better tool. They are out there.
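A rough illustration of the XPath-over-the-DOM idea mentioned above, using only the standard library's limited XPath support instead of an IE automation layer; the page markup is an invented stand-in for what a test tool would pull out of the browser:

```python
# Assert on specific elements of a page via XPath instead of
# eyeballing it. The HTML below is a hard-coded stand-in for a
# response the test harness would fetch from the app.
import xml.etree.ElementTree as ET

page = """
<html>
  <body>
    <h1 id="title">Order placed</h1>
    <table id="lines">
      <tr><td>widget</td><td>10.00</td></tr>
      <tr><td>gadget</td><td>10.00</td></tr>
    </table>
  </body>
</html>
"""

root = ET.fromstring(page)
# Pick out elements the way a DOM-navigating test tool would.
title = root.find(".//h1[@id='title']").text
line_count = len(root.findall(".//table[@id='lines']/tr"))
print(title, line_count)
```

With a helper like this, "did the confirmation page show two order lines?" becomes a one-line assertion instead of a manual check.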
Xiangyang Liu wrote:
When it comes to testing what the application actually does, such as saving a record to the backend database, sending an e-mail to the customer, or changing the color of some displayed items on the web page, it is almost impossible to write the script.
-
Xiangyang Liu wrote:
1. The testing scripts they write can only test some simple things.
This is the distinction between unit testing and functional testing, if I'm not wrong.
Xiangyang Liu wrote:
2. If they really want to (regression) test every important feature, the testing script can become more complicated than the web application itself (at least harder to write).
You are supposed to do requirements first, so your SendMail(params) function will always receive the same params and return the same thing. No matter how the method's internal logic changes, the test will still work; that is the idea of unit testing when you have an idea of what you are doing. Of course, if your SendMail receives 1 param, then 2, then 3, etc., it will be a mess.
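One way to sketch that stable-interface point: if SendMail keeps its signature, a test written against it survives internal rewrites. The injected transport and all the names below are invented for illustration:

```python
# The transport is injected so the test never sends real mail; it
# just records what would have been sent.
class RecordingTransport:
    """Fake mail transport that records instead of sending."""
    def __init__(self):
        self.sent = []
    def deliver(self, message):
        self.sent.append(message)

def send_mail(transport, to, subject, body):
    # Internal logic can change freely; the contract below cannot.
    message = {"to": to, "subject": subject, "body": body.strip()}
    transport.deliver(message)
    return True

transport = RecordingTransport()
ok = send_mail(transport, "customer@example.com", "Welcome", "  Hello!  ")
assert ok
assert transport.sent[0]["to"] == "customer@example.com"
assert transport.sent[0]["body"] == "Hello!"
print("send_mail contract verified")
```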
Xiangyang Liu wrote:
3. Who is going to test the testing script?
Those scripts are most of the time just asserts, so they are not really difficult to understand; you have a far greater chance of missing an error by testing the UI yourself than by doing unit testing.
Xiangyang Liu wrote:
I think it just creates a new problem instead of solving an existing one. What do you think?
I'm still quite new to unit testing, but in my opinion, when well implemented, with an MVP or MVC pattern for example, it is a great time saver: every time there is a major change in your app, it gives you an easy way to be sure nothing is broken and that everything does what it is supposed to do.
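A minimal sketch of the MVP idea mentioned above: the presenter holds the logic and drives the view through a narrow interface, so a test can substitute a fake view and never touch a browser. All names here are invented:

```python
# The fake view records what the presenter asked it to do.
class FakeLoginView:
    def __init__(self, username, password):
        self.username = username
        self.password = password
        self.error = None
        self.redirected_to = None
    def show_error(self, message):
        self.error = message
    def redirect(self, url):
        self.redirected_to = url

class LoginPresenter:
    def __init__(self, view, valid_users):
        self.view = view
        self.valid_users = valid_users
    def on_login_clicked(self):
        if self.valid_users.get(self.view.username) == self.view.password:
            self.view.redirect("/home")
        else:
            self.view.show_error("Invalid username or password")

users = {"alice": "s3cret"}

good = FakeLoginView("alice", "s3cret")
LoginPresenter(good, users).on_login_clicked()
assert good.redirected_to == "/home"

bad = FakeLoginView("alice", "wrong")
LoginPresenter(bad, users).on_login_clicked()
assert bad.error == "Invalid username or password"
print("presenter verified without a browser")
```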
-
It would probably benefit you if the development team could get access to the same tools, so that you can develop your websites using TDD methods. The more you can automate in your testing, the better, because it enables you to run your tests in a repeatable and controlled fashion, and you start to think about how to exercise your code effectively.
Deja View - the feeling that you've seen this post before.
-
peterchen wrote:
Good testers are hard to come by, and harder to keep, so don't waste them on your silly typos! That's IMO the whole point of automated tests.
I didn't know any automated testing tool could do spell-checking as well.
peterchen wrote:
Automated tests teach the first type to better judge their code, and the second type to build more confidence in their own work.
That's deep, I need time to think about it. :)
-
Xiangyang Liu wrote:
I didn't know any automated testing tool can do spell-check as well.
I'm not sure if you are pulling my leg :) What I mean is that quite a high percentage of errors are actually typos and similar lapses of concentration.
We are a big screwed up dysfunctional psychotic happy family - some more screwed up, others more happy, but everybody's psychotic joint venture definition of CP
My first real C# project | Linkify! | FoldWithUs! (http://tinyurl.com/37q6tt) | sighist
-
Andy Brummer wrote:
Only write a test if it saves time, if it doesn't then why write it in the first place? You should only do automated testing when it saves time and effort.
Ok, that makes a lot of sense.
-
Rama Krishna Vavilala wrote:
Designing for testability has nothing to do with small teams or big teams.
Take one of your strategies, for example "don't write much code in .aspx.cs files": I don't agree with it, and even if I did, there is no way I could get others on my team to agree. We could argue all day and all week without any result. You see, the size of the team does make a difference.
-
I wanted to add something as well. If you *really* only have to check a couple of pages when you make a change, then you don't need any kind of testing. And if you are only checking a couple of pages per change on any decent-sized application, then you aren't testing at all; that's really just simple debugging.
-
Those are all political issues. It really depends on the competency of the team, and especially of the architect and the leader. Good architecture practices dictate that design should be done for testability. Fortunately, I work with experienced developers who have worked through the full life cycle of applications, understand the need for developing quality software, and appreciate keeping quality in mind at all stages of an application's life cycle. I can understand that others may not be that fortunate. Everything aside, there are great benefits to test automation wherever it can be applied. For web testing especially, you need automated tests for at least the load tests. Fortunately, with the release of ASP.NET MVC, testable web applications may be easier to build now.
-
I (we) do all kinds of testing, or debugging as you put it. What I am talking about is a separate testing team doing automated testing. The management thinks it is the best thing since sliced bread, while I think the work could be done much more efficiently by the same people without the automation. I doubt the correctness of their automated scripts as well (because nobody is testing those). They did load testing a while ago using some script, and the output was obviously wrong; I found out later that they were hitting only one of the load-balanced servers.
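The load-balancer mishap described above suggests a cheap sanity check any load-test script could include: record which backend served each response, and fail if only one was ever seen. The round-robin balancer below is a pure simulation; a real script might instead read a server-id response header, which is an assumption about the deployment:

```python
# Simulated round-robin load balancer plus a load-test harness that
# tracks which backend answered each "request".
from itertools import cycle

def simulated_balancer(backends):
    """Round-robin over backend names, like a simple load balancer."""
    pool = cycle(backends)
    return lambda: next(pool)

def run_load_test(send_request, num_requests):
    servers_seen = set()
    for _ in range(num_requests):
        servers_seen.add(send_request())
    return servers_seen

balancer = simulated_balancer(["web1", "web2", "web3"])
seen = run_load_test(balancer, 30)
assert len(seen) > 1, "load test only exercised one server!"
print("backends exercised:", sorted(seen))
```

Had the team's script carried a check like the assert above, the single-server run would have failed loudly instead of producing plausible-looking but wrong numbers.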
-
Well if your testers can't use their tools effectively, I don't think it is right to fault the tools. You don't curse the shovel if you can't dig a hole.
-
My point is, writing and maintaining automated testing scripts can get more complicated than writing and maintaining the applications themselves, which defeats the purpose of using the tool and simply replaces the original problems with new ones.
My .NET Business Application Framework My Home Page
modified on Sunday, December 16, 2007 9:00:38 AM