Friday, August 28, 2009

Test design: do it right the first time!

If you are in the software testing field, you have probably noticed how difficult it is to grasp the idea behind test cases when all you see is a collection of steps presented for review. It is too easy to get bogged down in step details. You may even have caught yourself thinking, "Wait a minute! I wanted to review what is supposed to be tested, not read individual steps and try to compile them into logical test sequences!" If you did, then read on; some of this advice may prove useful. If you have never experienced this issue, read on anyway. The fact that you have never noticed the problem does not mean it cannot hit you ;) Remember, it is not what you know that will kill you.

The solution is of the same sort we apply in other kinds of work: early validation. You need a way to identify problems with your design early, before it gets "hardcoded" into written form, with steps and test data. The question remains: how do you present testing ideas in a form that is short and convenient to review? There are a couple of ideas, and both have proved quite successful in my experience.

The first idea is to produce a list of test conditions or test requirements before starting to write test cases. The list can be presented in the form of a tree:

- file types:
---- doc
---- txt
---- pdf
---- xml
- file content:
---- text
---- pictures
---- text + pictures
- file size:
---- large
---- empty

The time needed to produce such a list is negligible. You don't even need to bother with test data at this point; the whole idea behind the testing may turn out to be wrong, and the time spent creating complex test files would be a total waste.

The tree form allows for easy review. It helps you present your thoughts in a structured way and keeps the reader's attention on one context at a time, which improves the quality of the analysis. Looking at such a list, one can easily spot that there are no tests for specific conditions (text in UTF-8 encoding, for example). The only problem with this approach is that it does not say how you are going to validate those conditions.
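If you keep the list in a machine-readable form, it also becomes trivial to see how many combinations you are implicitly signing up for and to prune the nonsensical ones during review. Here is a minimal Python sketch using the conditions from the example above; the script is only an illustration, not part of any particular tool:

```python
from itertools import product

# Test conditions copied from the example tree above (illustrative only).
conditions = {
    "file type": ["doc", "txt", "pdf", "xml"],
    "file content": ["text", "pictures", "text + pictures"],
    "file size": ["large", "empty"],
}

# The full cross-product: one value per condition in every combination.
combinations = list(product(*conditions.values()))
print(len(combinations), "combinations in total")  # 4 * 3 * 2 = 24

for combo in combinations:
    # Some combinations (an empty file with pictures, say) make no sense;
    # spotting and pruning them is exactly what an early review is good at.
    print(", ".join(f"{name}: {value}" for name, value in zip(conditions, combo)))
```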

Another way is to provide an overview along with your test design. It also improves the review process, but it does not protect against rework: if you have chosen the wrong approach in your design, you will still need to redevelop your test cases.

The overview is written for the reviewer. It should answer the following questions:

- What is supposed to be tested?
- How is it supposed to be tested?
- What conditions are to be tried out?
- How are the results going to be verified?
- Will automated testing be used?
- What are the risks and limitations of the selected testing strategy?

Having the answers to these questions allows a reviewer to easily grasp the ideas behind the testing and validate them.
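For instance, an overview for the file-handling example above might say: the import of doc, txt, pdf, and xml files is to be tested by feeding prepared files of different content and size into the application and comparing the results against reference copies; file preparation and comparison will be automated, while visual checks stay manual; the main limitation is that exotic encodings and corrupted files are out of scope. A paragraph of that size is enough for a reviewer to spot a wrong turn before a single test case is written.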

Test design review is a hard task because there is a lot of detail. The main goal of a review is not to make sure the details are correct; we are much more interested in whether the test design is adequate to give us confidence that, once it passes, we have a quality product. There is no need to drown the review in obscuring details. The earlier you present your work for review, and the easier it is to understand, the better the quality and productivity you can achieve.

Wednesday, August 19, 2009

Resistance to process changes

They say it is in the nature of people to resist change, especially when everything is OK. Things go well, so why bother with changes? The truth is that those who simply keep doing things as usual never achieve great goals. If a company gets complacent, it may even die, overtaken by a competitor who took on the challenges.

A similar rule works for quality assurance, defect prevention, and process improvements in an organization. I always face resistance when I speak of the need for code design, reviews, and unit testing. Even the most persuasive words may break against a blunt "prove to me why I should be doing it." It is very discouraging when people who nod their heads in agreement at the meeting, and who seem to buy the idea, let it silently die a month later.

It is not reasonable to believe that another preaching session in a while will change things for the better. Repetition is good, but it is not enough to foster the right attitude. The people who are supposed to apply the change in the process must believe the change is needed. And, more importantly, they need to believe they need it personally to do their job better. If you fail to convince them of that, there is little chance your ideas will get due support. Another necessary ingredient is management contribution: management must keep insisting on following best practices, and patiently explain what the purpose of the process is and how it is going to help the team reach the goal.

Only buy-in from the people doing the work and a consistent message from management will do the job. If either is absent, the initiative will die a silent death.

Tuesday, August 11, 2009

Interviewing candidates

I have done it many times, and a lot of lessons were learned. However, even now, from the heights of experience, I realize that I can still fail. The reason is that people are very creative when it comes to making an impression. The more intelligent the person sitting across from you in the interview, the more difficult it is to probe that person's true qualities. Being smart is just one of the qualities you look for in candidates; it defines aptitude to some degree, but it is far from all you may need to find out.

If a person is smart enough, he or she may tell you what you want to hear rather than what he or she is thinking. As a result, you may get a wrong image of the candidate. Such a hire, if made, may prove wrong not only for you but for the hired person as well. So it makes sense to state clearly, right before the interview, that this is not only an examination of the person's qualities but also a test of whether the person fits the position. The latter will become evident soon enough anyway, so you are both better off not wasting time and being honest. A candidate must be honest about achievements and experience, just as you must be honest in your answers (overtime work, salary increases, etc.). One lie leads to another. Don't start the relationship the wrong way.
Now back to the interview… I like to think of candidates in terms of the three A's (Attitude, Aptitude, and Achievements). The first two A's matter most for newbie candidates; the last one is extremely important for a senior position.

Attitude is how a person sees his or her future in your company. It is a measure of the desire to work on your team, the level of loyalty. And this is what candidates are dishonest about most of the time. So do not trust words alone; look at the person's reaction to specific questions. If there is a lie, you should be able to spot it easily from looks and gestures.

Aptitude is how well a candidate's skills cover the requirements of the position. Some things can be picked up easily; some can't, and the difference is crucial. If there is anything about a candidate that makes you believe he or she will not be 100% effective in the new role, and you are not sure you can fix that, never hire that candidate.

Achievement is a demonstration of the skills on tasks similar to the ones you want the candidate to perform in the future. If the tasks are different and he or she will have to do incomparably different things, you had better consider him or her a newbie.

Hiring a newbie is more complicated because you simply do not know what to ask. Usually we ask people to describe what they did in the past and steer the conversation with questions. A newbie does not have achievements to talk about, so you end up with standard cliché questions like "why have you chosen this domain?", "what have you read about it?", "what do you want to achieve five years from now?", "what are your ambitions?" and so on. But those questions reveal only a little about a candidate. To learn more, I give them short tests that allow me to judge their professional qualities. I have been using such tests for more than ten years and have never regretted it.

What might such a test be? Well, for a software tester it is a quick exercise: produce as many tests as possible for a simple function of a well-known and widely used software product. Calculator or Notepad will do just fine. The answer not only indicates the candidate's natural ability to create test cases, it also gives you some feedback on how good the person is at using software in general. You can judge it by the depth of the testing a candidate generates: one may come up with tests for the Undo operation, another may omit it entirely.
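To make the idea of "depth" concrete: asked about Calculator's division, for example, a strong candidate might cover division by zero, negative operands, fractional results and rounding, very large and very small numbers, keyboard versus mouse input, and what Clear does in the middle of an operation, while a weaker one stops at "divide two numbers and check the result."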

Such tests can be developed for any position. For a developer it can be a task to design a simple system using OOP, or to write a piece of code on paper. For a technical writer it may be a quick description of some software function. A system architect should be able to come up with a solution to an architectural problem you faced in the past, or produce a rough design of a system described right at the meeting (a chat engine for an online magazine, for example).

Yes, such tests require your time and the candidate's time. But you will hardly be able to gauge a person's abilities without them. Only a practical task shows how good a candidate is at the work you want him or her to do in the future.

Of course, aside from the above, there are still a lot of things to look at. Will the person be able to meet deadlines? Is the level of responsibility high enough? Can this person learn new things quickly? Are there any problems with communication? And so on.

Sorry for this slightly clumsy but hopefully useful piece of information. I will get back to it when I have time to polish it.

Walking problem

What if one of your team members turns out to be a problem? He or she may not fit into the team, and keeping such a person among the others may prove too risky. People get infected by examples of bad behavior far too quickly. (I wish they were as quick to mimic something good, *sigh*.)

The first piece of advice is not to put it off for too long. Yes, this is harsh business, but it has to be done; otherwise, what kind of manager are you? I usually start by talking to the person, explaining what is wrong with the behavior and how it affects the team. Once a person is aware of the problem, he or she can do something about it. In this respect, you are doing the person a service.

It may not be that easy, though. Some people take criticism personally, and that can render your efforts all in vain. It helps to explain that the reason you are talking to your teammate is not to "put them in their place" but to help: to help them grow, to help them get better.

Mostly, things look too good from the inside. We all fool ourselves that we work as well as anyone possibly could. The reality is a little different, and the extent to which it differs depends on our inner qualities. Some people assess their achievements with due self-criticism; some think they do much better than others with no real reason to think so.

Your goal is to provide objective feedback on achievements. Do not forget to mention the good ones; it will make your teammate more open to discussion, and it is always better to start the meeting by talking about positive achievements. Remember, your goal is a constructive discussion. Having someone in a defensive, resistant mood will make things much more complicated.

Once you have delivered your viewpoint, explain the consequences. I usually make it clear whether I am going to let the person go if there is no progress on the improvement plan. This is tough but fair. Do not try to smooth the corners; just state it plainly.

Then sit down together and define a strategy for how to make things better for the person and for the team. Once you have a plan, define deadlines and metrics that will help you make sure the goals have been achieved.

Following these simple recommendations may save you a lot of time and nerves.

Friday, August 7, 2009

"Easy money"

Which of us has not dreamed of an easy way to earn money? Easy means that with little effort we get a big benefit. Well, I have one such idea for software developers.

It is lying right in front of your eyes. It is easy and does not require a long learning curve. It is all automated, and all you need to do is press a button, then sit back and wait for a sophisticated report. And this is not a joke!

Code analysis tools are a cheap, fast, and easy way to learn what's wrong with your code. You do not need to run through all the code manually to find that it is not up to standard, unmaintainable, dangerous, or unreliable. All you need is to feed it to a code analysis tool. Such tools have made a big step forward since the days when they simply searched for patterns with regular expressions. Now they can find unreleased objects and connections, helping developers eliminate resource-usage defects, the nasty ones that are difficult to find and isolate. They can measure code complexity to warn about code that will be hard to maintain. And they can produce nice, easy-to-understand reports.
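To give a feel for the kind of defect such tools catch, here is a small, purely illustrative Python snippet showing the classic unreleased-resource pattern that most modern analyzers flag:

```python
# Illustrative only: the kind of resource-usage defect a static analyzer reports.

def count_lines_leaky(path):
    # The file handle is never closed; if an exception is raised while
    # reading, it leaks. Analyzers typically warn about this pattern.
    handle = open(path)
    return len(handle.readlines())

def count_lines_clean(path):
    # The with-statement guarantees the handle is released, even on error,
    # which is the fix such a report usually points to.
    with open(path) as handle:
        return sum(1 for _ in handle)
```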



So, do not waste time: run a code analyzer against your code now and see what it tells you. On first use, you may find it too paranoid about small problems. No problem; tune it to bring up only serious issues, it's all customizable. However, as with MSDEV compiler warnings, do not try to fool yourself. Those things are not reported out of the blue; there is a reason behind them. If you don't understand the reason, it is better to leave the warning as is.

Enjoy your code analysis! =)

Wednesday, August 5, 2009

Innovation blog

Unlike many blogs on innovation, this one is practical and eye-opening. If you are interested in the topic of innovation, I strongly recommend visiting it: http://www.business-strategy-innovation.com/innovation-blog.html

Enjoy!

Newsgroups, Web 2.0, communities... What is next?

There are several newsgroups that I used to visit in the past. Now they have turned into nothing, which is very unfortunate. They were a great source of first-hand information on how the things preached in the magazines work in reality. Besides, they were my major source of professional news: it was there I learned that things like Agile, exploratory testing, and model-based testing even exist. Magazines are good, but not as good as information refined and given more meaning by passing through the experience of many people. It was there I was involved in long discussions and hot clashes of ideologies when eXtreme Programming started to grow from the ashes of the waterfall development model.

Several things contributed most to the collapse of professional newsgroups. The first and most important is the flow of under-qualified newbies. They collectively and very efficiently drowned the discussion in a whole lot of meaningless posts, far too trivial for a person with decent experience in the field. People who monitored a group to find something encouragingly new found themselves browsing through numerous posts with questions Google could answer better in a second. What's more, the same questions came up again the next day. It got so bad that many of those whose posts I enjoyed abandoned participation and never came back.

All the professional groups I used to read are now sandboxes for advertising, spam, and silly questions, a mix that can hardly satisfy a demanding professional taste.

The next reason is the growing popularity of various Web 2.0 blogs and social networks. Many discussions simply moved to a new place d'armes. Recently I came across the start of a holy war, Agile vs. traditional development, on LinkedIn. I simply passed by; through numerous attempts to stop that train I have learned better than to try. Those who can efficiently use the best lessons that both Agile and non-Agile development have learned will succeed. Those who believe there are only two ways, mine and the wrong one, are doomed to fail. Time will let the dust settle, and we will all see that we are still where we were, just with a little more experience and knowledge than before.

So, newsgroups are nearly dead, Web 2.0 is booming, and specialized networks are going the way of newsgroups (they are already an advertising platform). What is next down the road? What else can we come up with to share knowledge and ideas effectively? Google Wave? Anyone?

Tuesday, August 4, 2009

Scripted testing or free flying?

The more I work in the testing domain, the more I learn how difficult it is to answer this question. On one hand, we need to be very diligent and specific about the things we are testing. We have to be ready to answer what we tested and how, and we have to collect that information in order to do a retrospective review. On the other hand, many test professionals believe the best way to find a defect is to get into something like a trance, a "free flight" through the functionality. Like a dog sniffing out its prey, the tester hunts for defects following gut instinct. This is far too difficult to organize in the form of a checklist or guidelines. You can hardly explain to a person new to those activities what it takes to ride a bicycle or jump with a parachute; all you get is an "I have no idea what you are talking about" response. The same goes for testing.

So what is the best way for you, you may ask? The answer cannot be universal, because each team is unique: the way you are used to working, the qualities of the team members, and the peculiarities of the task all matter. I personally believe that a mix of both approaches works best for most organizations. But what kind of mix should it be? Some may try to script everything, including tests executed only once (executed and thrown away), as in the medical industry. Others may not bother writing such tests down at all, allowing their testers some time to do non-scripted testing free of any bureaucracy. What mix works best for you is a matter of much trial and error: try different approaches and see what effect they have, and if something is a waste of time, don't do it.

Unlike test design based on product requirements, "free flying" testing is an in-context activity, so it takes less effort to generate ideas. When you do not have the application in front of you to click buttons and try different combinations, it is hard to come up with ideas: modeling software behavior in your head is a resource-consuming task, and your mind simply does not have enough free processor time left for creativity.

In general, scripted testing adds organization and clarity to the testing process. It makes the process visible and controllable, which helps in meeting deadlines. With no tests scripted, you can hardly say how long it will take to run them all, because you can only guess how many of them you are going to execute.
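To make that concrete: a scripted test is something you can count and time. Below is a minimal sketch, assuming a hypothetical import_file() function and a made-up requirement that empty input files must be rejected:

```python
import os
import unittest

def import_file(path):
    # Hypothetical stand-in for the function under test, defined here only
    # so the example runs; it rejects empty files as the requirement demands.
    if os.path.getsize(path) == 0:
        raise ValueError("empty input file")
    with open(path) as f:
        return f.read()

class TestFileImport(unittest.TestCase):
    # A scripted test traced back to a written requirement. It is repeatable,
    # countable, and its runtime can be estimated, unlike a free-flying session.
    def test_empty_file_is_rejected(self):
        with open("empty.txt", "w"):
            pass  # create a zero-byte file
        self.addCleanup(os.remove, "empty.txt")
        with self.assertRaises(ValueError):
            import_file("empty.txt")

if __name__ == "__main__":
    unittest.main()
```

With, say, 120 such tests at ten minutes each, you can at least put twenty hours on the scripted part of the schedule; the free-flying part you simply time-box.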

So, at the level of the testing schedule, the best way to deal with this is to generate scripted tests against the requirements (without worrying too much about very specific ways of using the software at this point) and to allow a day or two (depending on how big the functionality is) for "free flying" testing. Those days are best scheduled at the end of the testing stage: the scripted testing that comes first will help remove blocking issues and will also give your testers time to get familiar with the new functionality.
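For example, a two-week test cycle might mean eight days of scripted execution followed by two days of free-flying sessions aimed at the areas where the scripted runs turned up the most defects.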

It is all in your hands. Only you can say what is best for your specific situation. Good luck!