Tuesday, July 12, 2011

CMM - a long way to perfection

Recently one of our customers expressed a strong desire to step onto the improvement path. They chose CMM as their beacon. I appreciated their choice and offered my help, as I have been-there-done-that, so to speak.

First, they had assessed themselves as already being at CMM level 2, which made me quite suspicious. First of all, they have no plans; even when they do, they never follow them. Second, they have no requirements in any form. Third is communication: news, issues, problems, goals, and the rest are simply broadcast to everyone in the hope that the person the message is addressed to will somehow guess it. In sum, the whole project, performed by another contractor (not my company), looks like a mess, if not outright chaos. The chance that a good product will come out of it is about as big as the chance that our country's football team takes gold at the World Cup.

I estimated the organization as being at the initial level of the maturity model and suggested sticking to a selected list of improvements, which I would like to present here for your attention. If you have any comments or feedback, just let me know. Your opinion is important to me.

I sat down and thought of the following changes to be made right away. There are no details so far, just a list, so if anything is ambiguous, please ask me for clarification.

1. Planning
a. There must be a plan and everyone must think it’s important
b. Plans must be realistic
c. Risks should be taken into account while planning
d. I can provide a template to start with
2. Product requirements engineering
a. Elaborated before the development starts
b. Reviewed by all parties involved
c. Corrected according to review findings and adjusted to everyone's needs
d. Numbered uniquely
e. Versioning for requirement documents
3. Development
a. Prototyping
i. Throw away, do not evolve
ii. Once you have a solution, sit down and design from scratch how to implement it in the real product!
b. Architecture design prior to coding
c. Module design before coding
d. Review design (peer or by system architect)
e. Continuous code review
f. Test before letting it out (unit or developer testing)
g. Make issues visible by means of metrics, so everyone knows that a failure will be visible to others
4. Testing
a. Test planning
b. Test strategy elaboration (test plan)
c. Test design (test cases)
d. Iterative testing
e. Regression testing
5. Tracking changes
a. Make changes visible to everyone
b. Adjust plans and risks timely
c. Make trade-offs
6. Release procedure
a. Feature freeze date
b. Code freeze date
c. Make only really important changes and fix really important issues
7. Communication
a. Communicate the goals clearly and at all levels (let them know the stakes)
b. No broadcasting; there should be one person (known to everyone) to answer a specific question
c. Defined project roles
d. Single points of contact on planning, requirements, development and testing
e. Reporting and metrics
f. Periodic team meetings (Skype, phone calls)
g. Focus the team on one thing at a time

Looking forward to your feedback!

Thursday, June 23, 2011

CEO, are you ready to change?

Many want to improve the quality of the products their organization produces. Few actually succeed. Many of those who don't blame their managers for not performing at the expected level, and so on. Meanwhile, the problem may lie in the leader him- or herself.

The goal of better quality implies changes throughout the organization. The head is not an exception. The problem with bosses is their belief that they can drive things by desire alone. Well, my daughter is 4 years old and she also believes she can ;) When you are about to start changes in the company, start with yourself. It is very easy to think that someone else will do all the hard work. No one will be in a position to do it if the problem that prevents changes is at the top, because no one has the authority to affect the boss's behavior.

Late changes are a killer of quality processes. Even agile can't withstand late changes. If you are the boss, you must learn the "enough is enough" principle. Learn to distinguish "a must" from "nice to have" to avoid forcing your team and processes to collapse into a late-changes nightmare. More important things may suffer because of the what-the-boss-wants-comes-first attitude.

Yes, most top guys are very selfish and addicted to the idea that they are the only ones who know how to drive things. This is far from always being so. Let others drive for a while and see what happens. Sometimes you even have to become a follower. Listen very carefully to what quality people tell you, and trust them even when you are not sure that what they do is right. They are people who know their work better than you do. Don't pretend you can cook better than the chef at your favorite restaurant ;)

Sorry for being a bit clumsy; I have little time. Now back to work, and remember - ENOUGH IS ENOUGH :)

Wednesday, June 8, 2011

Load testing

Today I wrote to a customer about what we usually do in load/performance testing. I decided to put it here in case I need it again, as well as to help you organize your own load testing activities.

***

Load testing usually starts with learning about the customer's problem. Every load testing session is unique, and it is very important to start moving in the right direction from the very beginning.

After learning the purpose of the testing, we develop a test strategy. The test strategy includes the definition of load scenarios: how many virtual users are to be involved, where to put the load in the system (at which hierarchy level), what type of scripts to use (simulative or fast), and so on.
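
To make this concrete, here is a minimal sketch of how a single load scenario definition from such a strategy might be written down as plain data; the field names and values are illustrative assumptions, not tied to any particular tool:

// code is in JavaScript
// A hypothetical load scenario description, kept as plain data so it
// can be reviewed with the customer before any scripting starts.
var checkoutScenario = {
  name: "Checkout under peak load",
  virtualUsers: 200,         // how many virtual users are involved
  rampUpSeconds: 120,        // how quickly they come online
  entryPoint: "web tier",    // at which hierarchy level the load is applied
  scriptType: "simulative",  // simulative (user-paced) vs. fast (no think time)
  thinkTimeSeconds: 5        // pause between user actions in simulative mode
};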

Then we start working on the scenarios. Scenarios are the individual scripts that will be played back by virtual users. Customer input is vital here because domain knowledge is the key to creating the right set of scenarios.

Then we execute several test runs at different load levels to see how the server reacts and to make sure that the load is adequate and the test results are not affected by configuration or communication issues. It usually takes from 5 to 10 runs to finalize the test plans and conditions.

Tight communication with the development representatives and the deployment team is also very important. We have to make sure the testing is performed under clean conditions, where no one else can interfere and alter the test results.

During the testing we usually set up server-side monitors to find the resource bottleneck, if any.

***

Having done all of the above, you make sure that the problem is solved, not just that a load test was performed.

Wednesday, June 1, 2011

Risk management in Test Planning

Did you ever ask yourself what testing does in the big perspective? Surely you did :) My answer would be: testing reduces the risk that users will face issues with the software. In this respect it is very close to what other engineering specialists do. In the same way, they reduce the risk that a building collapses in a severe earthquake or that a car suddenly catches fire while you drive.

Did you ever wonder what techniques other engineers use? Well, I guess most of us didn't, in which case this article will be of help.

I have worked with software that implemented one of the most widely used strategies for risk reduction - Failure Mode and Effects Analysis (hereafter FMEA for the sake of conciseness).

The key to risk management is keeping the possible bad effects in sight. So it's all about perception. You need to imagine what can go wrong with your application in users' hands and build a strategy for preventing that particular risk from happening. In car manufacturing, for example, there may be a risk of losing a wheel at high speed. Engineers will think of special means to prevent a wheel from spinning away even if the nuts get loose. Having solved one such problem, engineers advance to the next one, starting from the most severe (severity in this case is a combination of probability and impact).

Now back to software testing. The risks that our users may face are caused by software defects. Of course, it is unrealistic to foresee the defects themselves. But we can foresee the consequences of malfunctions. Let's start from the very beginning. The very first thing a user does is try to set up the application. So the worst thing that may happen is a setup failure preventing further use of the software. We have found the failure mode, but this is not enough to start creating a prevention strategy, as the risk is still too vague. The point is to describe the failure mode as precisely as possible.

The setup may fail in many different ways:

- Setup failure in the default setup conditions
- Setup failure due to changing setup options
- Setup failure in unattended mode
- Setup failure while running through the deployment center
- Setup failure due to software compatibility
- Setup failure due to hardware compatibility
- Setup failure due to old version compatibility
- Setup racing failure
- Setup performance is unacceptably low

Now we have a list of possible failure scenarios that can be addressed by testing. Each of the items above has a different combination of probability and impact. For example, "Setup failure in the default setup conditions" may have low probability but the highest impact. Meanwhile, "Setup failure while running through the deployment center" may have lower impact but the highest probability.
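
Since severity is just the combination of these two factors, the prioritization can be made explicit with a few lines of code. Here is a minimal sketch in JavaScript; the 1-5 scales and the numbers themselves are my own illustrative assumptions:

// code is in JavaScript
// Rank failure modes by risk = probability x impact,
// so the most severe ones get a prevention strategy first.
var failureModes = [
  { mode: "Setup failure in the default setup conditions",             probability: 1, impact: 5 },
  { mode: "Setup failure while running through the deployment center", probability: 5, impact: 3 },
  { mode: "Setup performance is unacceptably low",                     probability: 2, impact: 2 }
];

// Sort in place, highest risk first.
failureModes.sort(function (a, b) {
  return (b.probability * b.impact) - (a.probability * a.impact);
});
// Resulting order: deployment center (15), default conditions (5), performance (4).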

FMEA comes with a lot of templates that will help you summarize and analyze this information. You can easily find them on the internet. Most of them are overburdened with information you will hardly need in the analysis, so I suggest my own variant of the table:

## | Failure mode | Impact | Probability | Prevention | Comments |

Prevention depends on the context, so I can only give you a few examples.

One of the systems I worked on with a test team was a very old and big client-server application. Reliability was a real problem, so I put this risk on the table and started to think of ways to change things for the better. The probability of the failure mode was estimated as medium, the impact as the highest. Prevention included non-stop automated reliability tests running 24x7 on a dedicated server. As a result, it helped find major issues that could never have been found by other types of testing.
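
Mapped onto the table above, that example would look something like this (the wording is mine):

1 | Random failures during long client-server sessions | Highest | Medium | Non-stop automated reliability tests on a dedicated server, 24x7 | Found major issues missed by all other types of testing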

So, the bottom line is:

- Strategies developed in other industries work in software development too, so don't neglect those findings just because "we are so different". That is neither the case nor an excuse.

- Try to foresee possible failure scenarios.

- Build up a prevention strategy (which can include not only testing but all process stages).

- Define a plan where all the required prevention steps are listed.

- Work to the plan, but keep an eye on the FMEA table. Things may change as you go, so corrections may be needed.


Hope this helps! I would be happy to learn what you think.

Friday, April 22, 2011

Estimation of testing

If you need to ground testing estimates in development estimates, then read on.

This is definitely not the best way of producing testing estimates. It would be more correct to have development and testing do their estimates independently. Later on you can compare the two estimates, analyze the difference, and reconcile them properly.

In most cases testing takes from 30% to 35% of the development estimate. By taking 35% you will leave your team enough time to complete its goals on schedule and with due quality.

But be careful! There can be tasks whose execution may overrun the initial estimates. For example:

• Performance testing
• Load testing
• Compatibility testing
• Testing on a real (big) amount of data or on real hardware
• Reliability testing
• Test automation
• Complex environment setup
• Complex business area (business context)

Everything above, as well as the risks and any kind of exotic testing, should be estimated separately, and the result should be added to the initial rough estimate.
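
As a quick illustration of the arithmetic, here is a minimal sketch in JavaScript; all the numbers are made up for the example:

// code is in JavaScript
// Baseline testing estimate as a share of the development estimate,
// plus the separately estimated "special" tasks.
var devEstimateDays = 100;  // development estimate
var testingRatio = 0.35;    // the 30-35% rule, taken at the safe end

var baseTestingDays = devEstimateDays * testingRatio;  // 35 days

// Tasks from the list above, estimated on their own (hypothetical figures):
var loadTestingDays = 10;
var environmentSetupDays = 5;

var totalTestingDays = baseTestingDays + loadTestingDays + environmentSetupDays;  // 50 days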


Here is the full algorithm as I see it:

1. Both developers and testers do their estimates separately.
2. Then the estimates are compared and the difference is analyzed.
3. If the ratio between the estimates is as expected, you are done.
4. If not, the difference is explained and corrective actions are taken.


Notes:
• Do not just take the biggest estimate "just in case" - this is inefficient.
• Do not just take the estimate from the "most trusted" side either - explain the difference.


The biggest ratio I have ever seen in my career was 41% of the development effort. It was due to heavy use of test automation combined with a complex test environment.

Happy estimation! :)

Friday, January 14, 2011

Performance Appraisals

Many of you have heard of these, some have even been appraised, and a few have been directly involved in creating such reviews. No doubt, reviewing someone's contribution helps to develop the skills required to drive the team toward its strategic goals, as well as to avoid silly mistakes. But today I am not going to judge the process of performance reviews, nor am I here to express ideas on how to do it right. I am writing this merely to address a potential risk for those of you who are just starting to implement this process.

Some believe that performance appraisal (PA) is a great tool for managing someone's... expectations about compensation. So, it's... wrong!

We should not mix together things that are better kept apart (herring and milk, so to say) - things that serve different purposes. Compensation is a way to keep an employee from leaving the company. A PA is a tool in the hands of a manager, but it is written for the employee. These goals are not the same. If you pretend they are, you will end up with a logical inconsistency: once you introduce the salary parameter into the equation, you will inevitably write a review that serves your own goals of keeping salaries down or keeping the employee. Either way, there will be no "written for the employee" thing anymore.

So, don't mess it up. Write reviews to help people develop the required skills and fix problems. Use salary to retain people and motivate them to move ahead. No need to blend it all together if you want to stay mentally healthy ;)

If I have confused things for you, just leave me a note. It's my bad, and I will do my best to help you out :)

Wednesday, October 13, 2010

ISC UI testing architecture

Several days ago, while working on yet another test automation project for a desktop application, I was struck by an idea of how to make it right the first time. This is the first time I am writing on this matter, so be my co-authors and share any thoughts that come to your mind while reading. Any feedback will be greatly appreciated.

Dealing with the UI looks simple at first glance but turns out to be rather tricky. Every time you have to do some UI testing (confirmation messages, cancel buttons, input error messages, and the like), you have to step back and change (refactor) the code you have already written.

For example, suppose you have created a function that opens a project file in your application. It takes the name of the file as an input parameter, clicks the File | Open menu item, fills in the file path, and clicks the OK button.

You have created tests that rely on that function. Several tests may open different files and validate their content. But now you need to create a test that opens an incorrect file; as a result, you should see a message telling you what is wrong with it. Your original function, however, relies on smooth, flawless file opening and does not take into account any errors that may come up. So we go back to the original file-opening function and refactor it to fit both needs (usually by extracting a new function that does only part of the job).
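
Here is a minimal sketch of such a naive function, in the same TestComplete-style JavaScript as the example further below; the object names here are hypothetical:

// Naive version: it assumes the file always opens successfully,
// so a test that expects an error message cannot reuse it.
function OpenProjectFile(fileName) {
  wndMainApp.MainMenu.Click("File|Open...");         // invoke the dialog
  wndMainApp.dlgOpen.EditFileName.wText = fileName;  // set the input
  wndMainApp.dlgOpen.btnOpen.ClickButton();          // confirm
  // ...and from here on it simply assumes the file opened fine.
}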

And this is only one example out of hundreds or even thousands in a big project. Every time, you have to step back, make changes, and test those changes. This is tricky and error-prone.

Instead, I am suggesting DOING THINGS RIGHT THE FIRST TIME with the Invoker-Setter-Confirmer (ISC) test architecture.

ISC is a UI test architecture that lets you think in terms of functional bricks from which you can build any kind of UI-based test. You will not need to go back afterward, I promise. If you do, I am gonna eat my hat :)

ISC stands for a set of functions, each with its own purpose, as described below.

Invokers - functions responsible for making a UI element appear on the screen. In most cases this can be done in more than one way (the File Open dialog may be called through the menu, the toolbar, accelerators, or hot keys).

Setters - functions responsible for setting input values in a UI element. For example, the Find dialog asks the user for a string to find, the direction of the search, and other properties. All those values shall be set through setter functions.

Confirmers - functions that perform the final actions on a UI element. For a dialog, they click a button or close it with Esc or Alt+F4. Just like invocation, confirmation can be done in different ways.

The best way to explain this is to show an example. Here it goes.

Everyone has the Notepad program (Mac users, I am very sorry). Let's create an ISC-based library for its Find dialog.

// code is in JavaScript

// constants
var notepad = Aliases.notepad;
var wndNotepad = notepad.wndNotepad;

var ACCELERATOR = 1;
var MENU = 2;
var HOTKEYS = 3;

var OK = 10;
var CANCEL = 11;
var ESC = 12;
var ALTF4 = 13;

//////////////////////////////////////
// Invoker
//////////////////////////////////////
function InvokeDlgFind(method) {
  switch (method) {
    case ACCELERATOR:
      wndNotepad.Keys("^f"); // ^f is Ctrl+F in TestComplete Keys syntax
      break;
    case MENU:
      wndNotepad.MainMenu.Click("Edit|Find...");
      break;
    case HOTKEYS:
      wndNotepad.Keys("~ef"); // ~ is Alt: Alt+E opens Edit, F picks Find...
      break;
    default:
      InvokeDlgFind(MENU);
      break;
  }
  // No more code. Just show this thing on
  // the screen and leave.
}

//////////////////////////////////////
// Setter
//////////////////////////////////////
function SetDlgFind(toFind, matchCase, direction) {
  // "case" is a reserved word in JavaScript, hence the name matchCase.
  var dlgFind = notepad.dlgFind;
  dlgFind.Edit.wText = toFind;
  dlgFind.checkMatchCase.ClickButton(matchCase);
  if (direction) { // true = search up, false = search down
    dlgFind.radioUp.ClickButton();
  } else {
    dlgFind.radioDown.ClickButton();
  }
  // That's it! Setters do not do anything else.
}

//////////////////////////////////////
// Confirmer
//////////////////////////////////////
function ConfirmDlgFind(method) {
  switch (method) {
    case OK:
      notepad.dlgFind.btnOK.ClickButton();
      break;
    case CANCEL:
      notepad.dlgFind.btnCancel.ClickButton();
      break;
    case ESC:
      notepad.dlgFind.Keys("[Esc]");
      break;
    case ALTF4:
      notepad.dlgFind.Keys("~[F4]"); // ~ is Alt in TestComplete Keys syntax
      break;
    default:
      ConfirmDlgFind(OK);
      break;
  }
}

// And here is how it looks in the tests

function Test_Find_Succeeded() {
  // open test file

  InvokeDlgFind(MENU);
  SetDlgFind("123", false, true);
  ConfirmDlgFind(OK);

  // verify find result
}

function Test_Find_NotFound() {
  // open test file

  InvokeDlgFind(MENU);
  SetDlgFind("456", false, true);
  ConfirmDlgFind(OK);

  // verify message "not found"
}

function Test_Find_EmptyInput() {
  // open test file

  InvokeDlgFind(MENU);
  SetDlgFind("", false, true);

  // verify button is disabled
}

Hope this helps. I will be glad to hear from you on this matter.