Tuesday, August 31, 2010

What does a developer want?

After so many years of driving testing and QA processes, I have the perspective, and the right, to publish here some ideas on how things work from the inside. One of those things is what developers ask of a QA manager.

My top 5 are the following:

1. Change defect tracking system

2. Stop reporting duplicates and non-defects

3. Do not find defects too late

4. Stop reporting minor/cosmetic defects in such a number

5. Test design is not reviewable

Now back to the details =)

Change the defect tracking system. I am always happy when people come to me with ideas they think will help them do their work better. However, I don't like it when it stems from the "I don't know what is wrong but I feel that something needs to be changed" attitude.

In the past, developers came to me complaining that it took too much time to work with defects. The only solution they saw was to change the tracking system. I asked in response what exactly took so much time when they worked with it. As it turned out, the reason for their complaints was not the system per se but the time they had to spend understanding a defect and trying to reproduce it. No defect tracking system can change that, so they left in peace with the idea of keeping the old system. Other developers came after... period =)

Stop reporting duplicates and non-defects. Yes, this is a problem. But let's see how big it is. From my memory, we reported no more than 1% bad defects. Is that huge enough to whine about at every managers' meeting? Nope. Then what was it?... I'll tell you. It is an excuse for not doing one's own job right. A kind of "hey, they have problems too!" It makes them look better in their own eyes. What a shame! =)

Do not find defects too late. Another problem for testers. We have to find serious problems as soon as they appear in the code. There is no excuse... Wait a minute! What issue are we speaking about? The one that developers introduced in the same build? How do you think testers could have found it earlier?... We hear such accusations too many times. Make sure you take the blame for your own mistakes.

Stop reporting minor/cosmetic defects in such numbers. The answer is "stop reading them". It's easy =)

Test design is not reviewable. In the majority of cases, they just don't care to put any effort into this. This is mere laziness, which cannot be an excuse :) However, providing a summary of your test design is always helpful. Remember, those who read your design help you make it better, so it's in your interest to make it easier for them.

Certification... Again?!

Today I responded to a question about certification from one of the IT professionals here in Belarus. She was interested in my opinion on certification as such. I once told a tester who came to me for an interview that it is important that one cares enough to get one. On the other hand, I have said many times that I don't care whether one has a certificate up his or her sleeve.

I will try to explain it here. Once and for all =)

Firstly, I repeat: I don't believe in certification because it is an attempt to provide a universal approach and solution to everything. Who believes that is possible? I personally don't.

What I believe in is the personal ability to think, to solve problems, and to actively drive toward goals. This is what I am looking for in people. Not just proof that they have heard something about the matter.

The only reason I said it is important is that a person who acquired one cared to manage her career and minded to gain new knowledge, which are good intentions. However, reading a professional book would do the same, not to say it would do better than most certification programs.

Hopefully, it helps =)

P.S. From a business perspective it also doesn't matter. Testers with and without certification are equally profitable for the company, so the compensation is the same. The fact that the company has engineers with certificates does not help win new projects.

Wednesday, April 28, 2010

"Look into the roots"

One of my colleagues who wanted to leave claimed that the company had done something wrong to him, so he could not stand being in it any more. After 30 minutes of talking to him, I could not form an idea of what the reason for his complaints was, and the complaints themselves remained a mystery to me.

He named things like advertising that we were looking for a senior specialist without bothering to offer that position to him. That was unfortunate, because we had considered him for that very position for several months, and he did not manage to impress the customer well enough. I asked, "Why didn't you come to me to discuss it?" He responded, "Anyway, this is not as big a problem as..."

The next reason he called out was the lack of work during the crisis times. Well, I accepted that this was the case: many people were sitting on the bench while the company tried to find work for them. They were underpaid, but this was done to keep the team. Those who could not stand such conditions found another job. But that colleague of mine didn't. I asked him why and didn't get a response either. He used the same tactic and jumped to the next complaint, which was about his personal attitude to the work. He insisted that it was not exciting enough. So this might have been the case. Nonetheless, this is not the type of problem one cannot settle with a manager. But he never came to me with it.

In the end I decided that he was complaining not about the conditions but about his own behavior in those conditions. I quickly figured that I could do nothing about it and let him go to find himself in another place (I purposely did not use the word "better" here).

What wisdom can we elicit from this short story? Obviously, if we cannot stand something or don't like something about our work, or in a more general sense about our lives, then go and change it. Do not wait for the boomerang to hit you in the back.

And never fake the reason why you leave the company. Aside from not giving the company a chance to fix the problem, you make a bad impression of yourself, so no one will miss you after all.

Friday, April 16, 2010

Estimation in software testing

Today I happened to read through a training course on estimation in testing. The method described was based on assessing the number of test cases. Knowing the test cases, you can relatively easily predict how much time you would need to execute them. But...

The trainer lacked some of the underlying knowledge of making estimates. I tried to pretend I was a newbie who came to the training with the goal of learning new things. After such a training, I would risk ending up further from the truth than I was before, even if I already had my own understanding of the matter, more correct than the one provided.

Trainers beware! Your first commandment is "do not harm"! =)

Some tips on estimates:

- Consider all tasks, including preparation, communication, etc.
- Weigh all factors influencing the tasks
- Break the task list down into smaller tasks (2-3 days each)
- Consider everything that can go wrong
- Think of the risks
- Watch out for optimism
- Know your resources
- Do estimates with more than one method and analyze the difference
- If you ask an expert, do not limit yourself to only one
- Never provide point estimates
- Provide estimates as a range
- Check the estimate against other projects and analyze the difference
- Re-think risks
- Provide assumptions within the estimates
- Provide estimate with the level of confidence you have in them
- Present what you need to make the estimation more precise
- Validate estimation with other managers
- Check your estimate against estimates of other teams (testing vs. development, documentation vs. testing, etc.) and analyze the difference
- Add buffers for illness, turnover, and other force majeure
- Beware of a false level of precision (3 man-days vs. 2.997 man-days)
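To make the tips above a bit more tangible, here is a minimal sketch of a range estimate. All the task names and numbers are entirely made up for illustration, and the three-point (PERT-style) formula is just one possible way to avoid point estimates; the source course is not claimed to use it:

```python
# Hypothetical task estimates in man-days: (optimistic, most likely, pessimistic).
tasks = {
    "test design": (2.0, 3.0, 5.0),
    "test execution": (4.0, 6.0, 10.0),
    "preparation and communication": (1.0, 2.0, 4.0),
}

def pert(o, m, p):
    """Three-point estimate: expected value and spread for one task."""
    return (o + 4 * m + p) / 6, (p - o) / 6

expected = sum(pert(*t)[0] for t in tasks.values())
spread = sum(pert(*t)[1] for t in tasks.values())

buffer = 0.15  # buffer for illness, turnover, and other force majeure

low = round((expected - spread) * (1 + buffer), 1)
high = round((expected + spread) * (1 + buffer), 1)
print(f"Estimate: {low}-{high} man-days (a range, never a single point)")
```

Whatever method you pick, the point is the same as in the list: break the work down, buffer it, and deliver a range, not one suspiciously precise number.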

Good luck! :)

Thursday, April 1, 2010

Test automation cost

Well, as I promised in the previous post, I am here to share my thoughts on what the cost of automation is. I will try to prioritize the parameters that, in my opinion, may increase the effort and resources needed to make your automated tests work.

It is well known nowadays that the most important factor in test automation usage is Return On Investment (ROI). In the traditional cost model, ROI is determined as:

ROI = benefit / cost

If ROI is above 1, you are in good shape. If it is below 1, you have spent more than you gained.
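As a quick sanity check, the formula can be written down directly; the numbers below are made up:

```python
def roi(benefit, cost):
    """Traditional cost-model ROI: above 1 you gain, below 1 you lose."""
    return benefit / cost

print(roi(96, 84))  # above 1: in good shape
print(roi(50, 84))  # below 1: spent more than gained
```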

Though it looks pretty simple, the difficulties come into play when you try to understand what lies behind the terms benefit and cost.

Benefit

It is not easy to measure what you gain as a result of implementing your tests as automated ones. The problem is that not all of the benefits can be calculated, and not all of the positive signs can be directly attributed to test automation. For example, you may discover more defects doing ad-hoc manual testing in the time testers have available thanks to having some tests automated. Or you may have found defects that would be difficult to find manually. In both cases you can't say for sure that it was automation that brought those benefits.

The good news is that there is a parameter we can directly calculate: the cost of test execution. If you track what it takes to execute the tests manually (at least with some level of approximation), you can easily tell what you will save by having those tests automated.

Let's pretend we have 100 manual tests whose execution usually takes 2 days. After having those tests automated, you will save 2 days every time you need to run them. The most important part here is "every time", because the more often you need those tests, the more cost-effective their automation will be. But do not confuse "the need" to execute with "the desire" to execute. It is easy to fall victim to self-deception and draw a wrong conclusion about test automation effectiveness just by confusing the two.

So, I would stick to determining the benefit of automation as the sum of the benefits you gain from test execution plus a little bit more (feel free to decide what this will be for your project: 10%, 20%, or even 50%).

benefit = execution*iterations + a little bit more

Cost

When people think of the cost, they usually confine themselves to implementation. This is natural, because implementation is the first thing that comes to mind. The trick is to be able to see the bigger picture. Implementation is just one of the parameters. There are other factors that may severely affect the cost of automation. Below is a list of factors that can also affect automation ROI.

Experience in test automation

One who starts an automation project with no experience in it at all puts himself in big peril of failure. Few have survived pursuing this goal without the proper skills and knowledge. All the worse, those who haven't survived become paranoid about automation and rarely try doing it again.

So if you have no experience in automation, try to get some. Hire a person who will move it from the zero point, or find a mentor who will train and guide the team. Otherwise there are too many pitfalls to fall into, and I would by no means bet on success.

I would rate the factor of experience as the biggest in the cost formula. It may lead to more than a 100% cost increase.


Maintenance


It's easy to forget, because we rarely think of maintenance until we face the need for it. But we will have to, sooner than you can imagine. The changes will fire from all cannons into the hull of your suite, attempting to sink it. The only way to stay afloat is to be able to handle them quickly.

This is not just about the code that you write. Of course, it must be maintainable; in this sense, writing an automated test is no different from writing program code. It is also about the process that you follow. You can build a direct information channel between developers and test engineers, so that the latter know about upcoming changes and have time to prepare. Or you can let things go as they are, facing the consequences of massive changes one week before release, when all the tests you have automated appear to be broken and you have no idea why.

The maintenance factor, in my experience, is about 40% for large projects. You may get it down to 20% on a small project, though.

Experience with tools

This is not as big as the previous ones, but it counts too. Tools are so different. I have worked with several, and each had something that made me think, "Come on, it cannot be that... weird!" So keep in mind that every tool has its own weirdness that you will face and will need to work around.

This parameter matters most for big projects, where the architecture of the test automation project is the most important thing. The architecture is built upon the limitations a tool has, and there is almost no way to work around them. So be prepared to invent new architectural patterns.

I would rate the increase this parameter may bring at 30%.

Suite debugging cost


Having a test implemented and passing is not everything. Tests should work in suites as well. A test passing when executed alone does not guarantee it will do the same when executed in a suite, on another machine, or with other credentials.

So tests should be designed to run smoothly in the suites for which they are created. And that takes some additional effort, which must be accounted for.

Suite debugging adds about 10% in my experience.

The cost formula

cost = implementation cost + implementation cost*(experience% + tool experience% + maintenance% + suite debugging%)
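Putting the formula together with the rough ratings from the sections above (experience up to 100%, tool quirks 30%, maintenance 40%, suite debugging 10%), a back-of-the-envelope check might look like this. Every number here is illustrative, not a universal constant; plug in your own project's figures:

```python
implementation_days = 30.0  # hypothetical implementation cost

# Cost-increasing factors as fractions of implementation cost,
# using the rough ratings discussed above.
experience = 1.00      # no prior automation experience: can exceed 100%
tool_experience = 0.30
maintenance = 0.40
suite_debugging = 0.10

cost = implementation_days * (1 + experience + tool_experience
                              + maintenance + suite_debugging)

# Benefit: 2 days saved per run, 40 runs, plus 20% "a little bit more".
benefit = 2 * 40 * (1 + 0.20)

roi = benefit / cost
print(f"cost={cost} days, benefit={benefit} days, ROI={roi:.2f}")
```

Even with a pessimistic 100% experience penalty, a suite that runs often enough can still end up above the break-even point; run the numbers before deciding either way.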

The conclusion

As you can see from the above, test automation can only be successful when all the positive factors are maximized and all the negative ones mitigated. It is in your hands to organize the process so as to get the biggest possible benefit.

I never use the scheme above myself, as I learned to incorporate all those factors into decision making over the course of my professional career. But if you find it useful, I will be happy to know that I have not spent my time here in vain =)

Automation is a must!

Recently I got some free time and implemented several automated tests for an application that has been in testing and development for quite a long time. I did not expect much from my tests beyond cutting the cost of re-testing the basics of the system's functionality under different operating systems.

I implemented only 4 tests, which I executed against only one operating system. To my sheer surprise, the application crashed. It crashed while doing the same sequence of operations it had just done successfully in the previous tests. Hm... I re-ran the entire suite and it failed again. Wow!

This is not the first time I have noticed that automated tests are not necessarily a replacement for manual tests. They are rather an addition to them. Automated tests help find defects one could not find manually, and vice versa.

Having said that there are issues your testers are unlikely to find manually, I dare say that automation is absolutely a must for any type of project. Those of you who trade this idea away for project cost are simply compromising quality.

The cost of automation is not as bad as it may seem. I will write more on this in the following posts. For now, I can assure you the cost is not as big as you may imagine. For example, for those 4 tests I wrote slightly more than 200 lines of test code (just 200 simple, plain lines of code!). That is all it takes!

It took me only one and a half days to create, debug, and run the entire suite on one platform. That is just a little longer than it takes to run it once manually. However, now I have a suite that I can run with a single click. I don't need to spend my time on it anymore, so I can focus on what matters instead of doing tedious routine work again and again.

When deciding whether to pursue automation, do not think of the losses. Think of what you will get in return. Think of the advantages you will have. Yes, it takes time and resources to build a reliable automated suite. However, it is worth every dime spent on it once you are a little further down the road.

Good luck with your automated testing!

Friday, March 26, 2010

Who decides if the product is ready?

In my career I have often been asked whether or not the product is ready to be shipped. No matter how many times I have been through it, I still hesitate when I am asked this question. The reason I cannot answer it for sure is that I do not have all the information required to make the decision.

Being a quality manager, all I can judge is the quality. But quality is not the only criterion to take into consideration when trying to figure out if you are ready to release. There are also business demands and limitations, promises made to customers, the market situation, corporate strategy, and so on. All those questions are beyond the authority of a quality manager.

So, what should you do if you are asked this question anyway? Below is how I would behave in different positions.

Quality manager

Above I mentioned that the only criterion you can assess is quality. You have to start answering this question from the very start of the project. You have to build the answer with all the testing and QA activities throughout the project timeline. Start building it up from the very beginning with the test strategy and keep it in sight all the time. Assess and mitigate quality risks. As you learn new information about the product, correct the testing course.

When it's time to answer the question, use all the information you have collected. Compare metrics against previous similar releases (or against your previous experience with similar systems) and state things as they are, without too much optimism. If the product is crap, say so. Don't be afraid. You will not be punished for the truth, as long as you kept saying so every time you were asked. Saying that everything is fine throughout the project and then declaring it crap at the end looks unprofessional.

When providing your answer, make sure everyone understands that you are talking about QUALITY CRITERIA ONLY, so that no one forms the incorrect opinion that you are taking responsibility for a BUSINESS DECISION.

The best way to say it IMO:
- All the testing we planned is complete. No issues considered critical for the release remain open. The latest cycles of testing did not reveal many regression issues. The changes in the latest builds were scarce, and all of them went through a strict process of risk assessment and regression testing. Fixes that were too risky to make have been moved to the next version. I can say that we are in good shape from the quality perspective.

In case things are not as good, you may also add:
- However, we have experienced significant problems testing the system under high load. The tools we used did not allow us to generate the required load capacity, so we do not know how the system reacts to peak load. This is a risk for system operation. After discussion with management, we decided that this risk can be accepted.

In the case when it's not good at all:
- Every time we start testing the system, it brings up many new issues. The defect arrival rate stays at almost the same level, as high as X defects per day, till the very end of the testing cycle. I would strongly recommend analyzing why we introduce so many defects, improving the process, and doing another cycle of development and testing to create a product of acceptable quality.

Project manager

As a project manager, you have to be sure that your quality manager is confident in the quality of the release. If that is not the case, try to find out the nature of the risks. Assess those risks against the project goals and make a decision. It can be a hard one; nonetheless, you are in a better position to make it than anyone else on the project team.

Software tester

I have known several top managers who liked to ask this of software testers. They believed they could get the information first-hand, thus testing the correctness of the information provided by middle-level managers. I don't think this is a good idea. Anyway, as a tester, you need to be prepared. Before giving the answer, make it clear that you can answer only based on YOUR experience with the system. Your experience may be limited to a certain module or type of testing. If it went well, say only that that particular feature of the system is OK. Do not pretend you possess all the information needed to say so about the whole product. This may be a killer. I have also been in situations of being asked, "Why do you say that everything goes smoothly if your guy told me his module is full of bugs and he sees no end to them?" or the opposite, "Why do you keep telling me things go so badly when your guys report the system is fine?" Do not put your manager in a situation like this.

Top manager

Well, if you have someone to report to (a board of directors or the like), you also need to weigh your words. The most dangerous but exciting thing about your position is that this decision is completely yours. So don't show weakness, and don't expect someone else to step up and make it for you. All the responsibility, as well as all the fame, is yours.

Before making a decision, ask your managers (quality and production). But don't be pushy. Do not force them into saying what you want to hear. Be as objective as you can.

That's all for now. Sorry for the mess in my thoughts; I wrote this really fast. I hope it helps you avoid an embarrassing situation when you are asked if the product is ready to be released.