Wednesday, October 13, 2010

ISC UI testing architecture

Several days ago, while working on yet another test automation project for a desktop application, an idea struck me about how to get it right the first time. This is the first time I am writing on this matter, so be my co-authors and share any thoughts that come to your mind while reading this. Any feedback will be greatly appreciated.

Dealing with the UI looks simple at first glance but turns out to be rather tricky. Every time you have to test some piece of UI behavior (confirmation messages, cancel buttons, input error messages and the like) you have to step back and change (refactor) the code you have already written.

For example, you have created a function that opens a project file in your application. It takes the name of the file as an input parameter, clicks the File/Open menu, fills in the file path and clicks the Ok button.

You created tests that rely on that function. Several tests may open different files and validate their content. But now you need to create a test that opens an invalid file, and as a result you should see a message telling you what is wrong with it. Your original function, however, relies on smooth, flawless file opening; it does not take into account any errors that may come up. So we go back to the original file-opening function and refactor it to fit both needs (usually by extracting a new function that does only part of the job), as the sketch below illustrates.
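
A minimal sketch of the problem, in the same TestComplete-style JavaScript as the library further below (the window, dialog and function names here are hypothetical):

// The original "happy path" function: invoke, set and confirm in one shot.
function OpenProjectFile(fileName) {
    wndApp.MainMenu.Click("File|Open...");
    dlgOpen.EditFileName.wText = fileName;
    dlgOpen.btnOpen.ClickButton();
    // If the file is invalid, an error message box pops up right here,
    // and every test built on this function gets stuck.
}

// To test the error message you have to go back and split it afterwards:
function FillOpenProjectDialog(fileName) {
    wndApp.MainMenu.Click("File|Open...");
    dlgOpen.EditFileName.wText = fileName;
    // confirmation (and handling of the error message) is now left to the caller
}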

This is only one example out of hundreds or even thousands in a big project. And every time you have to step back, make changes and test the changes. This is tricky and error-prone.

Instead, I am suggesting DOING THINGS RIGHT THE FIRST TIME with the Invoker-Setter-Confirmer (ISC) test architecture.

ISC is a UI test architecture that lets you think in terms of functional bricks from which you can build any kind of UI-based test. You will not need to go back afterwards, I promise. If you do, I am gonna eat my hat :)

ISC stands for a set of functions, each with its own purpose, described below.

Invokers - functions responsible for making a UI element appear on the screen. In most cases this can be done in more than one way (the File Open dialog may be called through the menu, toolbar, accelerators and hot keys).

Setters - functions responsible for setting input values in a UI element. For example, the Find dialog asks the user for a string to find, the direction of the search and other properties. All those values are set through setter functions.

Confirmers - functions which complete the interaction with a UI element. For a dialog they click a button or close it with Esc or Alt+F4. Just like invocation, confirmation can be done in different ways.

The best way to explain it is to show an example. Here it goes.

Everyone has the Notepad program (Mac users, I am very sorry). Let's create an ISC-based library for its Find dialog.

// code is in JavaScript (TestComplete)

// constants
notepad = Aliases.notepad;
wndNotepad = notepad.wndNotepad;

ACCELERATOR = 1;
MENU = 2;
HOTKEYS = 3;

OK = 10;
CANCEL = 11;
ESC = 12;
ALTF4 = 13;

//////////////////////////////////////
// Invoker
//////////////////////////////////////
function InvokeDlgFind(method) {
    switch(method) {
        case ACCELERATOR :
            wndNotepad.Keys("^f");    // Ctrl+F
            break;
        case MENU :
            wndNotepad.MainMenu.Click("Edit|Find...");
            break;
        case HOTKEYS :
            wndNotepad.Keys("~ef");   // Alt+E, F
            break;
        default:
            InvokeDlgFind(MENU);
            break;
    }
    // No more code. Just show this thing on
    // the screen and leave.
}

//////////////////////////////////////
// Setter
//////////////////////////////////////
function SetDlgFind(toFind, matchCase, searchUp) {
    var dlgFind = notepad.dlgFind;
    dlgFind.Edit.wText = toFind;
    dlgFind.checkMatchCase.ClickButton(matchCase);
    if(searchUp) {
        dlgFind.radioUp.ClickButton();
    } else {
        dlgFind.radioDown.ClickButton();
    }
    // that's it! Setters do not do anything else.
}

//////////////////////////////////////
// Confirmer
//////////////////////////////////////
function ConfirmDlgFind(method) {
    switch(method) {
        case OK :
            notepad.dlgFind.btnOK.ClickButton();
            break;
        case CANCEL :
            notepad.dlgFind.btnCancel.ClickButton();
            break;
        case ESC :
            notepad.dlgFind.Keys("[Esc]");
            break;
        case ALTF4 :
            notepad.dlgFind.Keys("~[F4]");   // Alt+F4
            break;
        default:
            ConfirmDlgFind(OK);
            break;
    }
}

// Now how it looks in the tests

function Test_Find_Succeeded() {
    // open test file

    InvokeDlgFind(MENU);
    SetDlgFind("123", false, true);
    ConfirmDlgFind(OK);

    // verify find result
}

function Test_Find_NotFound() {
    // open test file

    InvokeDlgFind(MENU);
    SetDlgFind("456", false, true);
    ConfirmDlgFind(OK);

    // verify message "not found"
}

function Test_Find_EmptyInput() {
    // open test file

    InvokeDlgFind(MENU);
    SetDlgFind("", false, true);

    // verify button is disabled
}
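
And when a new scenario comes along - say, cancelling the dialog - you simply combine the same bricks instead of refactoring anything (a hypothetical addition to the tests above):

function Test_Find_Cancelled() {
    // open test file

    InvokeDlgFind(ACCELERATOR);
    SetDlgFind("123", false, true);
    ConfirmDlgFind(CANCEL);

    // verify that no search took place
}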

Hope this helps. Will be glad to hear from you on this matter.

Tuesday, September 28, 2010

Defect tracking workflow

Lately I was involved in creating yet another defect tracking workflow for a big outsourcing company. The goal was to define a standard way of tracking defects, with support for defect analysis and prevention in mind.

The task per se does not look too difficult. In most cases, the standard workflow embedded in your DTS will do the job for you. But it did not in our case. The default defect workflow in JIRA is too minimalistic; we needed additional states and transitions to be captured and recorded by the system.

So we came up with several additional states, renamed some default ones, added transition rules and changed permissions. But I do not want to get into a boring description of how we did it. Instead, I want to outline what I think is most important in building workflows:

1. Keep it simple, stupid!

This all-time rule of thumb is more than applicable here. We can dream up dozens of states and hundreds of transition names in pursuit of perfection. But do we really need to be perfect in this case? Get back to the intentions you had when you started all this: have the means to automate defect tracking so that nothing is wasted or lost; have data records to support learning from mistakes; have the means to control the project effectively. That's it! If you have enough states and transitions to satisfy those requirements - stop right there, don't go any further.

2. More flexibility, more problems

If you allow too much flexibility in who can do what in the DTS, you set yourself and your colleagues up for numerous mistakes. It is always better to let people perform only those actions they are supposed to, no more. In terms of defining workflows it comes down to defining user groups and assigning permitted actions to those groups, as sketched below. Once you've done that, you can be sure no issue will go down a wrong path.
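
To make the idea concrete, here is a minimal sketch of such a mapping in JavaScript (the groups, transition names and the helper function are hypothetical, not the actual JIRA configuration):

// Which workflow transitions each group is allowed to perform
var allowedTransitions = {
    "testers":    ["Reopen", "Verify", "Close"],
    "developers": ["Start Progress", "Resolve"],
    "managers":   ["Defer", "Reject", "Close"]
};

function canPerform(group, transition) {
    var list = allowedTransitions[group] || [];
    for (var i = 0; i < list.length; i++) {
        if (list[i] == transition) {
            return true;
        }
    }
    return false;
}

// canPerform("developers", "Close") -> false: a developer cannot close an issue directly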

3. Self explanatory, intuitive names

States, transitions, priorities and severities of issues - none of these should require additional explanation. A name like "Deferred" or "Postponed" is better than "Not for fixing" or "Not important".

4. Test it twice before going live!

Have somebody else go through the workflow before letting others use it. I did this for my last installation and, not surprisingly, ended up fixing 6 critical issues in it, even though I was sure I had walked through it myself.

Happy workflow engineering! :)

Monday, September 20, 2010

S60 emulator connection problem

If you experience this problem the first thing to try is:

- In Eclipse go to Window / Preferences / Debug and set timeouts to 100000.

If it does not help then try...

- Start the emulator manually in Debug mode and then start debugging from Eclipse.


In case the latter does not help and you receive something like "You could start only one instance of...", then close the emulator and start it again.

Wednesday, September 15, 2010

TestComplete 7.X refuses to recognize .NET objects?

If you cannot see .NET objects with the appropriate icon in the Object Browser, then most likely either the corresponding plug-in is not installed or you have .NET 4.0. In the former case, check your extension settings and install the .NET plug-in. In the latter case, uninstalling .NET 4.0 will not solve the problem; you will need to re-install the whole system. Alas!

LoadRunner crashes on recording?

Here is the cure...

1. Control Panel / System
2. Advanced
3. Performance Settings
4. Data Execution Prevention
5. Turn on DEP for essential Windows programs and services only.
6. Restart!
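
If you prefer the command line, the same DEP mode can be set with bcdedit on Vista/Windows 7 (run it from an elevated command prompt; the restart is still required):

bcdedit.exe /set {current} nx OptIn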

That's it :)

Friday, September 10, 2010

Load testing is not easy

Recently I convinced myself of it yet another time. The task looked as simple as generating load on a site that accepts several files as parameters. As usual, it was easy to record and parametrize the scripts, debug them and plan the first load.

The difficulties appeared during the testing itself. Every time I approach load testing, the results kind of amaze me - they are not what I would expect. This time was no different. I was amazed, and did at least 5 times more test runs than originally planned.

Thanks to my managerial experience I had allowed myself some time for contingencies, so it worked out well and I met the deadline :)

Every time you plan for load testing, keep in mind that it is an investigation rather than the kind of predetermined work we usually assume when planning.

Tuesday, August 31, 2010

What does a developer want?

After so many years of driving testing and QA processes I have had the chance to understand, and now the right to publish here, some ideas on how things work from the inside. One such thing is what developers ask of a QA manager.

My top 5 are the following:

1. Change defect tracking system

2. Stop reporting duplicates and non-defects

3. Do not find defects too late

4. Stop reporting minor/cosmetic defects in such numbers

5. Test design is not review-able

Now back to the details =)

1. Change the defect tracking system

I am always happy when people come to me with ideas which they think will help them do their work better. However, I don't like it when the request stems from an "I don't know what is wrong but I feel that something needs to be changed" intention.

In the past, developers came to me complaining that it took too much time to work with defects. The only solution they saw was to change the tracking system. I asked in response what exactly took so much time when they worked with it. As it turned out, the reason for their complaints was not the system per se but the time they had to spend understanding a defect and trying to reproduce it. No defect tracking system can change that, so they went away in peace with the idea of keeping the old system. Other developers came after... period =)

2. Stop reporting duplicates and non-defects

Yes, this is a problem. But let's see how big it is. As far as I remember, no more than 1% of the defects we reported were bad ones. Is that so huge that it deserves whining at every managers' meeting? Nope. Then what was it?... I'll tell you. It is an excuse for not doing one's own job right. A kind of "hey, they also have problems". It makes them look better in their own eyes. What a shame! =)

3. Do not find defects too late

Another problem for testers. We have to find serious problems as soon as they appear in the code. There is no excuse... Wait a minute! Which issue are we speaking about? The one that developers introduced in that same build? How could testers possibly have found it earlier?... We hear such accusations too many times. Make sure you take the blame for your own mistakes.

4. Stop reporting minor/cosmetic defects in such numbers

The answer is "stop reading them". It's easy =)

5. Test design is not review-able

In the majority of cases they just don't care to put any effort into this. It is mere laziness, which cannot be an excuse :) However, providing a summary of your test design is always helpful. Remember, those who read your design help you make it better, so it's in your interest to make it easier for them.

Certification... Again?!

Today I responded to a question about certification from one of the IT professionals here in Belarus. She was interested in my opinion about certification as such. I once told a tester who came to me for an interview that it matters that a person cared enough to get one. On the other hand, I have said many times that I don't care whether one has a certificate up his or her sleeve.

I will try to explain it here. Once and for all =)

First, I repeat that I don't believe in certification, because it is an attempt to provide a universal approach and solution to everything. Who believes that is possible? I personally don't.

What I do believe in is the personal ability to think, to solve problems and to actively drive towards goals. This is what I am looking for in people, not just proof that they have heard something about the matter.

The only reason I said it is important is that a person who acquired it cared to manage her career and bothered to acquire new knowledge, which are good intentions. However, reading a professional book would do the same, not to mention it would be better than most certification programs.

Hopefully, it helps =)

P.S. From a business perspective it doesn't matter either. Testers with and without certification are equally profitable for the company, so the compensation is the same. The fact that the company has certified engineers does not help it win new projects.

Wednesday, April 28, 2010

"Look into the roots"

One of my colleagues who wanted to leave claimed that the company had done something wrong to him, so he could not stand being in it any more. After 30 minutes of talking to him I could not figure out the reason for his complaints, and the complaints themselves remained a mystery to me.

He named things like us advertising that we were looking for a senior specialist without bothering to offer that position to him. That was unfortunate, because we had considered him for that very position several months earlier and he had not managed to impress the customer well enough. I asked, "Why didn't you come to me to discuss it?" He responded with "Anyway, this is not as big a problem as..."

The next reason he called out was the lack of work in the crisis times. Well, I accepted that this was the case: many people were sitting on the bench while the company tried to find work for them. They were underpaid, but this was done to keep the team. Those who could not stand such conditions found another job. That colleague of mine didn't. I asked him why and didn't get an answer either. He used the same tactic and jumped to the next complaint, which was about his personal attitude to the work: he insisted it was not exciting enough. That might well have been the case. Nonetheless, it is not the kind of problem one cannot resolve with a manager. But he never came to me with it.

In the end I decided that he was complaining not about the conditions but about his own behavior in those conditions. I quickly figured that I could do nothing about it and let him go to find himself in another place (I purposely did not use the word "better" here).

What wisdom can we draw from this short story? Obviously, if we cannot stand something or don't like something about our work, or in a more general sense about our lives, then go and change it. Do not wait for the boomerang to hit you in the back.

And never fake the reason why you leave a company. Aside from not giving the company a chance to fix the problem, you make a bad impression of yourself, so no one will miss you after all.

Friday, April 16, 2010

Estimations in software testing

Today I happened to read through a training course on test estimation. The method described was based on estimating the number of test cases: knowing the test cases, you can relatively easily predict how much time you will need to execute them. But...

The trainer lacked some of the underlying knowledge of making estimates. I tried to pretend I was a newbie who came to the training with the goal of learning new things. After such a training I would risk ending up further from the truth than I was before, even if I already had my own understanding of the matter, one more correct than the one provided.

Trainers beware! Your first commandment is "do not harm"! =)

Some tips on estimation (a small worked example follows the list):

- Consider all tasks including preparation, communication etc.
- Weigh all factors influencing the tasks
- Break down the task list into smaller tasks (2-3 days)
- Consider everything that can go wrong
- Think of the risks
- Watch out for optimism
- Know your resources
- Do estimates with more than one method and analyze the difference
- If you ask an expert, do not limit yourself to only one
- Never provide point estimates
- Provide estimation in a range
- Check the estimate against other projects and analyze the difference
- Re-think risks
- Provide assumptions within the estimates
- Provide estimates with the level of confidence you have in them
- Present what you need to make the estimation more precise
- Validate estimation with other managers
- Check your estimate against estimates of other teams (testing vs. development, documentation vs. testing, etc.) and analyze the difference
- Add buffers for illness, turnover and other force majeure
- Beware of a false level of precision (3 man days vs. 2.997 man days)
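
Here is a minimal sketch of what "break tasks down, add buffers and present a range" could look like in practice, in the same JavaScript as the other examples on this blog (all the numbers are hypothetical):

// Hypothetical breakdown into small tasks (2-3 man-days each)
var tasks = [
    { name: "environment preparation",     days: 2 },
    { name: "test design",                 days: 3 },
    { name: "test execution, cycle 1",     days: 3 },
    { name: "test execution, cycle 2",     days: 2 },
    { name: "reporting and communication", days: 2 }
];

var total = 0;
for (var i = 0; i < tasks.length; i++) {
    total += tasks[i].days;                // 12 man-days if everything goes as planned
}

var buffer = 0.15;   // illness, turnover and other force majeure
var risk   = 0.25;   // allowance for things that can go wrong

var estimateLow  = Math.ceil(total);                        // 12 man-days
var estimateHigh = Math.ceil(total * (1 + buffer + risk));  // 17 man-days

// Present the range (12-17 man-days) together with the assumptions and your level of confidence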

Good luck! :)

Thursday, April 1, 2010

Test automation cost

Well, as I promised in the previous post, I am here to provide my thoughts on what the cost of automation is. I will try to prioritize parameters that in my opinion may increase the efforts and resources needed to make your automated tests work.

It is well known nowadays that the most important factor in deciding on test automation is Return On Investment (ROI). In the traditional cost model ROI is determined as:

ROI = benefit / cost

If ROI is above 1 then you are in good shape. If it's below 1 then you have spent more than you gained.

Although it looks pretty simple, the difficulties come into play when you try to understand what is behind the terms benefit and cost.

Benefit

It is not easy to measure what you gain as a result of implementing your tests as automated ones. The problem is that not all of the benefits can be calculated, and not all of the positive signs can be directly attributed to test automation. For example, you may discover more defects doing ad-hoc manual testing in the time testers have available because some tests are automated. Or you may find defects that would be difficult to find manually. In both cases you can't say for sure that it is automation that brought those benefits.

The good news is that there is one parameter we can calculate directly: the cost of test execution. If you track what it takes to execute the tests manually (at least with some level of approximation), you can easily tell what you will save by having those tests automated.

Let's say we have 100 manual tests whose execution usually takes 2 days. After having those tests automated, you will save 2 days every time you need to run them. The most important part here is "every time": the more often you need those tests, the more cost-effective their automation will be. But do not confuse "the need" to execute with "the desire" to execute. It is easy to fall victim to self-deception and draw the wrong conclusion about test automation effectiveness just by confusing the two.

So I would stick to determining the benefit of automation as the savings you gain on test execution plus a little bit more (feel free to decide what that is going to be for your project: 10%, 20%, or even 50%).

benefit = execution*iterations + a little bit more
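
For instance, taking the 100-test example above and hypothetical numbers for the rest (10 execution cycles a year, 20% for the "little bit more"):

benefit = 2 days * 10 + 20% = 24 man-days a year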

Cost

When people think of the cost they usually confine themselves to implementation. That is natural, because implementation is the first thing that comes to mind. The trick is to be able to see the bigger picture: implementation is just one of the parameters. Below is a list of other factors that may severely affect the cost of automation and, with it, the ROI.

Experience in test automation

Someone who starts an automation project with no experience in it at all puts himself in great peril of failure. Few survive pursuing this goal without proper skills and knowledge. Worse still, those who don't survive become paranoid about automation and rarely try it again.

So if you have no experience in automation, try to get some. Hire a person who will move it off the ground, or find a mentor who will train and guide the team. Otherwise there are too many pitfalls to fall into, and I would by no means bet on success.

I would rate the experience factor as the biggest in the cost formula. It may lead to a cost increase of more than 100%.


Maintenance


It's easy to forget about maintenance, because we rarely think of support until we face the need for it. But we will face it sooner than you can imagine. Changes will fire from all cannons into the hull of your suite, attempting to sink it. The only way to stay afloat is to be able to handle them quickly.

This is not just about the code that you write, though of course it must be maintainable - in this sense writing automated tests is no different from writing program code. It is also about the process that you follow. You can build a direct information channel between developers and test engineers, so that the latter know about upcoming changes and have time to prepare. Or you can let things go as they are and face the consequences of massive changes 1 week before release, when all the tests you have automated turn out to be broken and you have no idea why.

In my experience the maintenance factor is about 40% for large projects. You may get it down to 20% in a small project though.

Experience with tools

This is not as big as the previous ones, but it counts too. Tools are so different. I have worked with several, and each had something that made me think "Come on, it cannot be that... weird!" So keep in mind that every tool has its own quirks that you will face and will need to work around.

This parameter matters most for big projects, where the architecture of the test automation project is most important. The architecture is built on top of the limitations a tool has, and there is almost no way around them. So be prepared to invent new architectural patterns.

I would rate the increase this parameter may bring at 30%.

Suite debugging cost


Having a test implemented and passing is not everything. Tests should work in suites as well. A test passing when executed alone does not guarantee it will do the same when executed in a suite, on another machine, or with other credentials.

So the tests should be designed to run smoothly in the suites for which they are created, and that takes some additional effort that has to be accounted for.

Suite debugging adds about 10% in my experience.

The cost formula

cost = implementation cost + implementation cost*(experience% + tool experience% + maintenance% + suite debugging%)
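
Plugging in the figures from above, plus the hypothetical benefit of 24 man-days a year, here is a quick sketch of the whole calculation (the implementation effort of 15 man-days is also hypothetical):

// Hypothetical figures: implementing the 100 tests takes 15 man-days;
// the team is new to automation (+100%), the tool is new to them (+30%),
// the project is small (maintenance +20%), suite debugging adds +10%.
var implementation = 15;
var cost = implementation + implementation * (1.00 + 0.30 + 0.20 + 0.10);   // 39 man-days

// benefit from the example above: 2 days per run * 10 runs a year + 20%
var benefit = 2 * 10 * 1.2;                                                 // 24 man-days a year

var roi = benefit / cost;   // ~0.6 in the first year, above 1 from the second year of use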

The conclusion

As you can see from the above, test automation can only be successful when the positive factors are maximized and the negative ones are mitigated. It is in your hands to organize the process so as to get the biggest possible benefit.

I never use the scheme above myself, as over my professional career I have learned to incorporate all those factors into decision making. But if you find it useful, I will be happy to know that I have not spent my time here in vain =)

Automation is a must!

Recently I got some free time and implemented several automated tests for an application that has been in development and testing for quite a long time. I did not expect much from my tests beyond cutting the cost of re-testing the basic system functionality under different operating systems.

I implemented only 4 tests and executed them against only one operating system. To my sheer surprise, the application crashed. It crashed doing the same sequence of operations it had just performed successfully in the previous tests. Hm... I re-ran the entire suite once again and it failed. Wow!

This is not the first time I have noticed that automated tests are not necessarily a replacement for manual tests; rather, they are an addition to them. Automated tests help find defects one could not find manually, and vice versa.

Given that there are issues your testers are unlikely to find manually, I dare say that automation is absolutely a must for any type of project. Those of you who trade this idea away for project cost are simply compromising on quality.

The cost of automation is not as bad as it may seem. I will write a little more on this in the following posts. For now, I can assure you the cost is not as big as you may imagine. For example, for those 4 tests I wrote slightly more than 200 lines of test code (just 200 simple, plain lines of code!). That is all it takes!

It took me only one and a half days to create, debug, and run the entire suite on one platform - just a little longer than it takes to run it once manually. However, now I have a suite that I can run with a single click. I don't need to spend my time on it anymore, so I can focus on what matters instead of doing tedious routine work again and again.

When deciding whether to pursue automation, do not think of the losses. Think of what you will get in return and the advantages you will have. Yes, it takes time and resources to build a reliable automated suite. However, it is worth every dime spent on it once you are a little further down the road.

Good luck with your automated testing!

Friday, March 26, 2010

Who decides if the product is ready?

In my career I have often been in the situation of being asked whether or not the product is ready to be shipped. After going through it so many times, I still hesitate when I am asked this question. The reason I can't answer it with certainty is that I do not have all the information required to make that decision.

Being a quality manager, all I can judge is the quality. But quality is not the only criterion to take into consideration when trying to figure out whether you are ready to release. There are also business demands and limitations, promises made to customers, the market situation, corporate strategy, and so on. All those questions are beyond the authority of a quality manager.

So, what should you do if you are asked this question anyway? Below is how I would behave in different positions.

Quality manager

Above I mentioned that the only criterion you can assess is quality. You have to start answering this question from the very start of the project and build the answer with all the testing and QA activities throughout the project timeline. Start building it from the very beginning - the test strategy - and keep it in sight all the time. Assess and mitigate quality risks. As you learn new information about the product, correct the course of testing.

When it's time to answer the question, use all the information you have collected. Compare metrics against previous similar releases (or against your previous experience with similar systems) and state things as they are, without too much optimism. If the product is crap, say so. Don't be afraid. You will not be punished for the truth as long as you have been saying it every time you were asked. Saying that everything is fine throughout the project and then declaring it crap at the end looks unprofessional.

When providing your answer, make sure everyone understands that you are talking about QUALITY CRITERIA ONLY, so that no one gets the incorrect impression that you are taking responsibility for a BUSINESS DECISION.

The best way to say it IMO:
- All the testing we planned is completed. No issues considered critical for the release are open. The latest cycles of testing did not reveal many regression issues. The changes in the latest builds were scarce, and all of them went through a strict process of risk assessment and regression testing. Fixes that were too risky to make have been moved to the next version. I can say that we are in good shape from the quality perspective.

In case it's not as good you may also add:
- However, we experienced significant problems with testing the system under high load. The tools we used did not allow us to generate the required load. So we do not know how the system reacts to peak load, which is a risk for system operation. After discussion with management we decided that this risk can be accepted.

In the case when it's not good at all:
- Every time we start testing the system, it brings up many new issues. The defect arrival rate stays almost constant, as high as X defects per day, till the very end of the testing cycle. I would strongly recommend analyzing why we introduce so many defects, improving the process and doing another cycle of development and testing to create a product of acceptable quality.

Project manager

As a project manager you have to be sure that your quality manager is confident in the quality of the release. If that is not the case, try to find out the nature of the risks. Assess those risks against the project goals and make a decision. It can be a hard one; nonetheless, you are in a better position to make it than anyone else on the project team.

Software tester

I have known several top managers who liked asking this question of software testers. They believed they could get the information first-hand, thereby testing the correctness of the information provided by middle managers. I don't think this is a good idea. Anyway, as a tester, you need to be prepared. Before giving the answer, make it clear that you can only answer based on YOUR experience with the system. Your experience may be limited to some module or type of testing; if it went well, say only that that feature of the system is OK. Do not pretend you possess all the information to say so about the whole product - that can be a killer. I have also been in the situation of being asked "why do you say that everything goes smoothly if your guy told me that his module is full of bugs and he sees no end to it?" or, the opposite, "why do you keep telling me that things go so badly when your guys report the system is fine?". Do not put your manager in a situation like this.

Top manager

Well, you also have someone to report to (a board of directors or the like), so you need to weigh your words too. The most dangerous but exciting thing about your position is that this decision is completely yours. So don't show weakness and don't expect that someone else will step up and make it for you. All the responsibility, as well as all the fame, is yours.

Before making a decision, ask your managers (quality and production). But don't be pushy. Do not force them into saying what you want to hear. Be as objective as you can.

That's all for now. Sorry for the mess in my thoughts - I wrote this really fast. I hope it helps you avoid an embarrassing situation when you are asked whether the product is ready to be released.

Friday, March 5, 2010

Bad process or bad implementation?

Recently, in the course of a training session, we touched on what seems to me an interesting topic. One of the training attendees started a discussion about the pros and cons of different processes. In order to prevent another holy war, I pointed out that it scarcely matters what process people use. What really matters is whether they know how to use it. A fool with a tool is still a fool.

The same is true of processes. We have invented several big-name processes as well as dozens of not-so-famous ones, and yet we keep looking. Why? Because we are not satisfied with the results. Because we think things could be organized better. And all we do in that pursuit is circle around a few simple ideas (think ahead, do just enough, think before doing, think after you are done, observe and make corrections). All the processes I know of are about these concepts, wrapping them in different objects and providing different interfaces to the user. The ideas behind them stay just as simple.

If we all use the same underlying ideas, why are some successful and some not? The answer is not in the plane of definition but in the plane of implementation. Remember that saying above? ;) A process is not a panacea. If you expect that a weak, diseased organization wearing a CMM hat will do much better, you are dead wrong. People are what make processes work or fail. People undermine them by doing things "slightly differently" or doing "just the opposite" only because they think they know better. Just look around and see if any of those characters are right beside you. And start correcting things right away. Start with yourself ;)

P.S. I am not saying that all processes are equally suitable for all teams. No. I was only saying that nearly all processes are GOOD and COULD HAVE WORKED if they had been applied correctly. In any case, a process is always better than no process at all. If the latter is your case, you know where to start! :)

Tuesday, March 2, 2010

Three simple ideas on management

Today I had an interesting conversation with a manager who is in charge of testing at a world-famous organization here in Minsk. She told me that she is about to go on a long vacation, so she needs to teach one of her champions how to cope with things while she is out.

One interesting thing I noted is that she realized how much of a manager she is only after trying to teach someone else. Well, this is a well-known old effect: one can only learn how well one knows something by trying to teach it to someone else. Even though we believe we know something, we may have trouble explaining it because the knowledge is patchy and has gray areas - things we never knew but that are nonetheless important for the whole picture. Conversely, for some of us it is important to test our knowledge by teaching others. This is what she did, and how she got to know that she is a much more capable manager than she realized. Try it yourself ;)

Another thing we touched on is management and who is capable of doing it right. We both come from the same organization, where she was my team leader. We know our managers well enough and went into discussing their strong and weak points. One of the most discouraging things I find is blaming someone else for a failure. A manager is responsible for all the work created by the team, so there is no sense - and even more damage to one's image - in trying to shield oneself from blame behind one's subordinates. The relationship between a manager and subordinates is built on trust and openness. If the former blames the latter, that link breaks, and there is no normal relationship after that.

The third interesting topic of discussion was aptitude. We both agreed that a person who is in the right place in the organization needs only slight attention and rare correction from a manager. On the contrary, a person who misbehaves often and requires a lot of attention from managers will only become a greater distraction in the future. So finding the right place is very important, no matter whether you are an employer or an employee.

If you are not in the right place, move on. Don't waste time - yours or your manager's! :)

Saturday, January 16, 2010

A silly question: why does testing take so long?

Recently I came across an amusing post on one of the professional forums dedicated to testing. The post was an elaborate description of how to convince management that testing needs so much time to complete. The author worked his way through it beautifully, introducing parameters and formulas, explaining assumptions and proving theories with examples. Great job - no doubt! But... it's all useless :)

No manager will care to read it from beginning to end. None!

The mere fact that management needs to be convinced of things that SHOULD BE OBVIOUS is a problem per se. With that multi-page work the author only addressed the symptoms instead of targeting the root cause of the disease.

What could be the root cause of managers doubting that the testing team works efficiently? What do managers need from testing in order to feel comfortable about its performance? I am sure you guessed right :) All they expect from your team is VISIBILITY. Just let them see what it takes to define the strategy, make the required environment preparations, procure and learn tools, create tests, combine suites, execute tests, submit defects, work with fixed and rejected defects, and so on. If you manage to build a transparent process that everyone can watch in motion, you will never ever be asked to prove that you spend your resource cycles effectively.

Another important point is getting management involved in making all important decisions. Make them not just supervisors but active contributors. Share all the important decisions with managers, discuss them and argue your position. Let them help you with their experience, and let them see your professional level by providing them with technical assistance. Once a decision is not just yours but theirs as well, they start feeling much better :)

In short, this is all you need to make sure that you are never asked that silly question - "Why does testing take so long?!"