Monday, July 20, 2009

Automated tests: the sooner the better!

This is one of the most valuable experiences of my entire professional career. Read on and you will probably find it worth trying.

Several years ago we faced the need to dramatically increase test automation coverage. Let's set the reasons aside for now and focus on the problem itself. We had complex UI-driven software, full of custom controls and volatile UI elements that changed from release to release. Our experience in test automation was limited to a collection of several hundred tests that we occasionally used for smoke and regression testing. To get where we needed to be, we had to automate ten thousand tests in a very short period of time, 1-2 years.

At first glance the problem had no solution. It was really hard to be in that situation, seeing no way out. But we started the project having no idea whether it would be successful. We started by optimizing what we could already do and divided the team into two sub-teams (one focused on test design and manual execution, the other took responsibility for test automation). We also introduced all the best practices that helped us keep test production and maintenance costs low.

But this was not enough. The automation team always lagged behind the testing process because they needed time to implement and debug test code on a working system. So, with insignificant exceptions, we could not use automated tests for functional test execution during the production cycle; we could only use them for regression testing of future versions. Tests still needed to be executed manually at least once. This had a severe impact on the testing schedule and dragged us back, because the manual team was always on the critical path.

Then we got an idea, and it was as simple as could be. Since the implementation phase is what makes the automation team fall behind, we had to shorten it somehow. Adding resources was not an option: we already had several people working on this project, and adding more could severely increase the overhead cost of communication. We went another way.

Instead of adding resources, we decided to move part of the test automation preparation back in time and do it in advance: we would design automated tests long before a working version was made available. Design means that we created the skeletons of tests. Those tests used virtual functions that, once implemented, would allow manipulating the application's features. The functions remained unimplemented until we had a working version of the product in our hands, so the tests could not be debugged until then. But most of the design work was already done by that point, and our experience indicated that this part is very significant.

When a new version of the application came to testing, the automation engineers started the implementation phase: they simply added code to the helper functions to make the tests work. After that they ran and debugged the tests.

I will demonstrate how it worked with an example:

1. We need to test a web search system that allows users to run searches, browse results, and bookmark interesting findings.

2. Automation engineers select tests for automation. For example, they selected the following tests:

a. Different types of queries ("", "my search", "very-very-very long string").
b. Browse results (pages 1 through 10).

3. The automation team has the test steps and test data defined in the test descriptions created by the manual team. So, they create a test architecture design like this:

test 1 - Different types of queries

test01_Search (String query, Integer expected) {
    login();
    doSearch(query);
    assertEqual(getResults().totalCount, expected);
}

The functions login(), doSearch(), and getResults() used here are not implemented yet! We have only figured out which functions we will need to make our tests work.

Note: to do this safely, it is recommended to insert a line of code that will make your test fail until the function is implemented, like this:

function doSearch(String query) {
    fail('Not implemented');
}

Tests that go through pages of results could look like this:

test02_Paging (String query, Integer expected) {
    login();
    doSearch(query);
    String[] pageTokens = {"page1", ..., "page10"};
    for (int i = 1; i <= 10; i++) {
        goToPage(i);    // navigation helper, also unimplemented at design time
        assertEqual(getPageToken(), pageTokens[i - 1]);
    }
}

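To make the two phases concrete, here is a minimal, self-contained sketch of what the implementation phase looks like. The helper names (login(), doSearch()) and the flat getResultsTotalCount() accessor are illustrative assumptions; the "application" is simulated with a plain map so the example runs on its own, whereas in a real project the helper bodies would drive the UI of the system under test.

```java
import java.util.HashMap;
import java.util.Map;

public class SearchTestSketch {
    // Simulated search index standing in for the application under test.
    private static final Map<String, Integer> FAKE_INDEX = new HashMap<>();
    static {
        FAKE_INDEX.put("", 0);
        FAKE_INDEX.put("my search", 42);
    }

    private static int lastResultCount;

    // Design phase: these helpers existed only as failing stubs.
    // Implementation phase: their bodies are filled in against the real UI.
    static void login() { /* real code would authenticate through the UI */ }

    static void doSearch(String query) {
        // was: fail("Not implemented");
        lastResultCount = FAKE_INDEX.getOrDefault(query, 0);
    }

    static int getResultsTotalCount() {
        return lastResultCount;
    }

    // The test body itself was written long before the helpers worked.
    static void test01_Search(String query, int expected) {
        login();
        doSearch(query);
        if (getResultsTotalCount() != expected) {
            throw new AssertionError("expected " + expected
                    + " results for query \"" + query + "\"");
        }
    }

    public static void main(String[] args) {
        test01_Search("", 0);
        test01_Search("my search", 42);
        System.out.println("all tests passed"); // prints "all tests passed"
    }
}
```

Until the working build arrives, doSearch() contains only the failing stub; filling in its body is all that is needed to bring the pre-designed test to life.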
In the same way we can design all the tests selected for automation in advance, saving that time from the implementation phase and shortening the duration of automation. As practice has shown, we can save up to 50% of the time allocated for test automation: roughly assessed, the design phase accounts for about half of the time allocated for automation, so doing it in advance cuts the remaining in-cycle work in half, allowing automated tests to be ready sooner in the testing cycle.

Using this technique we achieved a 1:1 ratio of tests executed manually to tests executed automatically. This means that only half of the tests were executed manually for each new release; the other half were executed automatically right away. This greatly increased automation ROI, because we no longer needed to execute those tests manually at all, saving up to 40% of manual testing resources each release. Additionally, automated tests could be used for in-project regression testing much earlier, which added to the benefit of the idea.

In general, this approach completely changed the role automated testing played in our project, making it at least as important and effective as manual testing.

Hope this helps make your test automation effort more fun! :) Feel free to comment if you have questions about implementation details, or if you see risks that might prevent this from working for you.


  1. Hi Vladimir, I'm Gaston, and let me say it's a very good approach to automated tests, although I think this is more like unit testing, to be correct. Could you please tell me if there is any free software to test application GUIs made with .NET and Xtreme Toolkit? Thanks a lot!

  2. Hi there! Well, the approach above is exactly about automating GUI-driven tests, and what is more important, the technique comes from this field. This is the essence of real experience on real commercial product development.

    As for free tools for web testing take a look at Selenium or Watir.

    Hope this helps! :)

  3. Hi Vladimir, I'm Krishna and I have a few concerns about your approach:
    1. When do we implement the above methodology: after working code is available, or before that (with the help of design documents/change requests)?
    2. Can you explain a bit more about the communication cost (if additional resources are loaded into the automation team)?
    3. What about the rework cost of automation with this approach? Will it be minimal compared to the previous method?

    Thanks & Regards

  4. Hi, Krishna!

    1. Test architecture design and test case code are created before working code is available. The helper functions used in the tests are implemented, and the test cases debugged, after the code is made available. The time saved on creating the test architecture and coding the test bodies is taken off the implementation phase, making test automation go faster.

    2. You noted correctly that communication is the key to success. The automation team should be involved in test selection as early as possible; they have to have time to create the test architecture and test cases before they get working software. This means the manual team's test design plans and the plans for creating test case designs must be carefully synchronized. Also, it often helps to get an agreement with the developers to provide an internal alpha build for early test debugging.

    3. Rework is kept to a minimum because, following this approach, we create helper functions used by the tests. Usually only the helper functions need to be changed to make a test work with a changed software UI. If you have a risk of changes in the functionality and test design, then you need to take this into account during test selection and postpone the risky part until later.
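    To illustrate how a single helper absorbs a UI change, here is a sketch with hypothetical names: suppose the paging UI switches from numbered links to a "Next" button between releases. Only the body of goToPage() changes; every test that calls it stays untouched. The page counter below simulates the UI so the example is runnable on its own.

```java
public class PagingHelper {
    // Release N: results were paged with numbered links.
    // Release N+1: the UI switched to a "Next" button, so only this
    // helper body changed -- every test calling goToPage() still works.
    private static int currentPage = 1;

    static void goToPage(int target) {
        // was: click the link labelled String.valueOf(target)
        while (currentPage < target) {
            clickNext();           // new UI interaction
        }
    }

    private static void clickNext() {
        currentPage++;             // real code would drive the UI here
    }

    static int getCurrentPage() { return currentPage; }

    public static void main(String[] args) {
        goToPage(5);
        System.out.println("on page " + getCurrentPage()); // prints "on page 5"
    }
}
```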

    Hope this helps!

  5. This sounds remarkably like KBT (Keyword Based Testing). KBT can seem a little like hard work with the multiple hand-offs, but with tools such as LiquidTest it is still much faster than other forms of test automation.

  6. Yes, this has a lot in common with KBT, but this technology emphasizes "time to market" rather than supportability. So, it's slightly different in purpose.

  7. Hi Vladimir, good post. I agree there is some similarity with KBT or Action Words, or whatever we choose to call the abstraction of the test design from the implementation of the low-level drivers of the test steps. I think the key to your approach is the up-front test design to plan the test steps, whose details you can then fill in once the implementation of the system under test takes shape. I believe the key to the success of automation is when we start to use it up front for testing, as opposed to using it for regression only after manual testing is complete.

    On one project where we had business testers actually preparing the high-level test scripts, we could use the same format and then decide whether to automate or run manually. Test analysts started to think in terms of the high-level steps, and the automation steps became the vocabulary for those tests.

    I think your point of time-to-market is a key to start thinking about how we undertake the test design up front.

    Your approach is generic and isn't unit-test only. We have used similar structures with Watir as well, and have also built a similar framework on top of commercial tools such as Mercury QTP and Compuware TestPartner.

  8. Kelvin, you are absolutely right! This process even fed back into test design: soon we came up with rules and recommendations for test designers on how to make tests more amenable to automation. This also helps increase automation ROI.