Wednesday, July 29, 2009

Why do many performance appraisal initiatives fail?

The key to company success is people. Performers are what build the tomorrow of every organization. The way they spend their working time today directly affects the company's outcomes tomorrow. Many companies do not rely on people alone to drive personal performance: not everyone has that nagging second self inside that refuses to stay complacent.

Almost all companies want to drive personal performance. Many companies think they do. And just a few actually do. Although the idea is quite simple, implementation is what makes the difference.

There is an almost infinite number of ways in which a good endeavor can fail, and personal performance management is no exception. Below are recommendations, failure scenarios, and things to avoid while establishing a reliable performance management program. I have derived them from my own experience; they come from real-world cases.

Process tips:
- Do it regularly, at least once a year.
- Do it consistently throughout the organization.
- Gather information for the appraisal from more than one source (the more, the better).
- Provide an improvement plan within the appraisal.
- Track improvement plan execution.
- Meet in person to discuss the appraisal.

Failure scenarios:
- Lack of initiative on the management side. Managers clearly do not understand why they need it, and as the main driving force goes down, the whole process collapses. To avoid this, you need to sell the idea to managers so they agree to spend time working on appraisals. Create a plan and demand that it be followed strictly.
- Lack of credibility, on the employee's side, in what is stated in the appraisal. To eliminate this factor, let the employee know how many people participated in providing information for the appraisal.
- Attempting to create a magic formula. Quantitative metrics (lines of code, defects, tests, etc.) are a good source of information on performance, but they need additional analysis. How complex was the problem solved? How big were the responsibility and the time pressure? What was the quality? Do not try adding more parameters to cover all those deviations! You will either find yourself in the middle of a near-infinite number of scenarios, or you will fool yourself with some compromise variant (one of many). Better to stick to qualitative marks (excellent, good, mediocre, bad) to assess the appraised qualities.

Things to avoid:
- Subjectivity is the killer. Gather information from as many sources as possible.
- Disagreement can severely undermine effectiveness. Make sure all disagreements between you and the employee are settled.
- Taking it too lightly can let any good initiative die silently. Keep restating how important it is for the company and what goal the process is pursuing.

Following the tips above, keeping the failure scenarios in mind, and avoiding the traps will help you define a process that actually works. Good luck!

Tuesday, July 28, 2009

Disruptive business model or anecdotal boss?

Today I read a good article on the disruptive business model (http://feeds2.feedburner.com/business-strategy-innovation) which argues that we should never allow ourselves or our team to get complacent, to simply enjoy what we have instead of looking for ways to make it better. I absolutely agree with the author that putting the right amount of pressure on the team creates an additional innovation force within the organization. We need some strain so we do not get too comfortable. Overcoming the obstacles of "never did that before" and "it isn't going to work" is what fuels innovation. There is a big difference in results when people start looking for how to get there instead of whining about something being impossible.

However, there is clearly room for top management to misinterpret the idea. One may believe that the first part is not as important as the second. A CEO may think that the people who work for her must find the way to get where she wants them to be, and in the time she wants. This is the beginning of the end. If you want your people to be inventive, start with yourself. Show them the way. You are here not by chance; you took the position because you were superior and exemplary among your colleagues. So where is that now? Did it vanish when you stepped up to the highest position? No, I don't believe that.

Instead of blindly pressing on your subordinates and complaining that they did not do this and that, try organizing brainstorming sessions with them. Hear what they say! Do not play dirty games, setting them up for failure just to replace them afterwards with a sigh of "they are not up to our standards." When a person in so high a position starts acting this way, it does not go unnoticed. People around you start seeing you as a person who can't hear what they say, so there is no point in saying anything at all. After several unsuccessful attempts to be heard, they abandon the idea, and things start drifting toward a pitiful end.

A fool with a tool is still a fool. Having such a powerful instrument as a disruptive business model implies responsibility for its possible misuse. Do not fly too high. These are not king's-court games we are playing. Bring your ideas to the table, so everyone can see you are still a part of the team, not an icon or an idol.

Looking back is a step forward

I'm a big fan of learning from my own mistakes. I hate making the same mistake twice; it makes me feel really bad. Repeating a mistake can be a sign of a lack of care for the end result: if one does care, he or she will take pains to avoid running into the same problem twice. Sometimes we repeat our errors because of short memory. In that case I would suggest keeping records of what went wrong and building a reliable guard against mistakes by changing the way we do things daily.

The same goes for the process. Post-mortem or retrospective reviews help you remember what went wrong on a project and give you insights into what steps to take to avoid running into the same problem again in the future. Just a look back can make a big difference in the way we do things. This is why I like it so much: with almost no effort it provides plenty of ideas for possible changes. Here is the list of questions that I would ask everyone who has been involved in the project:

1. What was good about the project?
2. What could be done better and how?
3. What was bad and how to avoid it in the future?

It also helps if you direct people's thinking in a concrete direction with a questionnaire designed to elicit more details on the problems:

1. Did all the stakeholders have the same understanding of the project goal?
2. Was the documentation sufficient and clear?
3. Was the system design thorough enough? Did it make coding easier?
4. Did the architecture help in building a quality system?
5. Was the change control mechanism effective?
6. Was the testing adequate?
7. Are you happy with the defect tracking process?
8. Is the resulting product easy to support?
9. Was the planning accurate enough?

The list of possible questions can go on and on; it is up to you to decide whether to bother with it. Every project is unique in the way things went wrong. Every team is unique in the way it fails. So there is no universal list of questions: any attempt to build one would end up too general or too long. I recommend such a list only for the beginning, or to address a specific problem in the retrospective review process (for example, people may tend to skip one of the processes from consideration).

The best way to generate a list of improvements is to look back on the whole process from the very beginning, step by step. Play it back from the moment you first heard about the new product to the moment a release was sent out to end users.

It has always been interesting to me how a retrospective look changes the way things appear. Some issues are noticed only when the whole process is played back. Those things may not even look like problems when you face them in day-to-day work. Sometimes we need context that is yet to come in order to validate the decisions we are making today.

Another very interesting effect is having issues identified when you look at the whole process at a high level of abstraction. Things that eat up several minutes a day may stop looking negligible when assessed over several months of execution by ten people.

Do not let yourself fall prey to excessive self-confidence. Your team may learn from its mistakes, or it may not. Make this process defined and visible. Share the ideas, implement the changes to the process, and forget about the issues forever. This is much more reliable and natural than relying on everyone's personal memory.

Monday, July 27, 2009

Black sheep

Today events reminded me of a problem most teams happen to face from time to time: a team member who clearly misbehaves and takes the team's goals and values for nothing not only contributes little himself, but can even prevent others from contributing.

The problem is serious enough that it must be addressed immediately. The first thing to do is try to show that person how his actions impede the team's success, how it looks from the outside. If that does not help, the only option is to replace that person with someone who understands that team goals must be placed above personal ones. In many cases personal goals are in conflict with those of the team.

My personal goal might be to browse the Internet in the morning, whereas team goals tell me that I need to get the work done first. My personal goal might be to make myself irreplaceable by concealing crucial information and experience, whereas as a team member I would do whatever possible to reduce the risk of the team being too dependent on me: I may get sick, and there will be no one capable of continuing what I was doing. The list of examples of such conflicts can go on and on. Do not allow anyone to set an example of wrongdoing for others. It is very contagious.

From my experience, it takes a lot of effort for wrongdoers to realize what it means to be a part of a team, to serve a goal higher than whatever is on their mind right now. Until people get there, one can hardly do anything about it. You are better off finding another team member who will produce rather than complain.

Thursday, July 23, 2009

Good and bad about process improvements

Improvement is always something positive, as it makes processes better. However, a process is a fine mechanism that needs delicate treatment. Many times, ideas that were deemed a quantum leap ended up producing only slight changes in the quality of the output. The reason reality does not fulfill our expectations lies in the relative complexity of the processes and the number of factors that influence the outcome.

There is also a problem with the precision with which one determines the root cause of the problem to be eliminated. For example, we have a large number of small UI defects and decide that an additional check by developers will dramatically decrease the number of such issues escaping the coding phase. However, we may be wrong about the actual root cause: it may be due not to erroneous development but to obscure requirements. So the actions we defined may not bring the desired results.

Most ideas on process improvement come from the performers. To get the feedback, you need to learn how to listen to what people say. Encourage constructive criticism of the processes. Never give in to the feeling of resistance to criticism, even when what is being criticized is your precious creation.

Metrics are another big source of clues for improvements. They say measuring without a clear idea of the purpose is not a good idea; I would dispute that statement. I have very positive experience of collecting everything I possibly could about a project and successfully using that information in decision making and in defining the improvement program. So if it takes just a bit of time, do not hesitate to jot the numbers down. Who knows what application you may find for them tomorrow? Everybody knows that having historical data is key to every performance and quality tracking program. The sooner you start collecting your data, the better. But please do not devote too much of your time to it, or you will not be able to do your job!

Improvements should be carefully estimated and prioritized. Moreover, improvements should be carefully planned. And here you face the most difficult problem: resources. It is normal for all of a company's resources to be busy with tasks directly contributing to revenue, and it may be difficult to divert your colleagues to work on process improvements. In my experience, most good improvements get killed by this limitation. Here you have two options: procure resources by communicating goals and ROI to top management, or find enthusiasts who will do the work in their spare time. The latter is not likely to happen, so do not rely on it too much. I knew a manager who always appealed to enthusiasm, whereas everyone read "overtime" between his words. I always wondered why he did not call things by their real names. %)

Once you have a plan, be sure to define measures that will tell you whether you actually changed anything, and whether the changes were for the better. Metrics are vital to every process improvement program; without them you will surely persuade yourself that what you did was good, regardless of the real end result :)

Good luck in your improvements!

Monday, July 20, 2009

Listen With Your Eyes

I came across an interesting study indicating that verbal communication is far from the most effective way of conveying information. It turns out that our emotions and gestures can be more informative than words.

Here is an excerpt from the source:

"One study at UCLA indicated that up to 93 percent of communication effectiveness is determined by nonverbal cues. Another study indicated that the impact of a performance was determined 7 percent by the words used, 38 percent by voice quality, and 55 percent by the nonverbal communication."

Read more at: http://humanresources.about.com/od/interpersonalcommunicatio1/a/nonverbal_com.htm.

Automated tests: the sooner the better!

This is the most impressive experience I have gained in my entire professional career. Read on, and you will probably find it worth trying.

Several years ago we faced the need to dramatically increase our test automation coverage. Let's leave the reasons aside for now and focus on the problem per se. We had complex UI-driven software, full of custom controls and volatile UI elements that changed from release to release. Our test automation experience was confined to a collection of several hundred tests that we occasionally used for smoke and regression testing. To get where we needed to be, we had to automate ten thousand tests in a very short period of time: one to two years.

At first glance the problem had no solution. It is really hard to be in a situation where you see no way out, but we started the project with no idea whether it was going to be successful. We began by optimizing what we could already do and divided the team into two sub-teams (one focused on test design and manual execution, the other took responsibility for test automation). We also introduced best practices that helped us keep test production and maintenance costs low.

But this was not enough. The automation team always lagged behind the testing process because they needed time to implement and debug test code on a working system. So, with insignificant exceptions, we could not use automated tests for functional test execution during the production cycle; we could only use them for regression testing of future versions. Tests still needed to be executed manually at least once. This had a severe impact on the testing schedule and dragged us back, because the manual team was always on the critical path.

Then we got an idea. It was as simple as could be: since the implementation phase is what makes the automation team fall behind, we had to shorten it somehow. Adding resources was not an option; we already had several people working on the project, and adding more could severely increase the overhead cost of communication. We went another way.

Instead of adding resources, we decided to move part of the test automation preparation back in time, to do it in advance. We decided to design automated tests long before a working version was made available. Design means that we created the skeletons of tests. Those tests called stub functions that, once implemented, would allow manipulating application features. The functions remained unimplemented until we had a working version of the product in our hands, so the tests could not be debugged until then. But by that point most of the design work was already done, and our experience indicated that this part is very significant.

When a new version of the application came to testing, automation engineers started the implementation phase: they simply added code to the helper functions to make the tests work. After that, they ran and debugged the tests.

I will demonstrate how it worked with an example:

1. We need to test a web search system that allows users to run searches, browse results, and bookmark interesting findings.

2. Automation engineers select tests for automation. For example, they have selected the following tests:

a. Different types of queries ("", "my search", "very-very-very long string").
b. Browse results (pages 1 through 10).

3. The automation team has the test steps and test data defined in the test descriptions created by the manual team. So they create a test architecture design like this:

test 1 - Different types of queries

void test01_Search(String query, int expected) {
    login();                                          // stub: authenticate a test user
    doSearch(query);                                  // stub: submit the search query
    assertEquals(expected, getResults().totalCount);  // verify the result count
}

The functions login(), doSearch(), and getResults() used here are not implemented yet! We have only figured out which functions we will need to make our tests work.

Note: to do this safely, it is recommended to insert a line of code that will fail your tests until the function is implemented, like this:

void doSearch(String query) {
    fail("Not implemented");  // makes any test using this stub fail loudly
}
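
Later, when a working build arrives, the stubs receive real bodies. As a purely hypothetical illustration (assuming a Selenium-style WebDriver and a search field named "q", neither of which comes from the original project), the implemented helper might look like this:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Hypothetical later implementation of the doSearch() stub. The WebDriver
// usage and the "q" field name are illustrative assumptions, not details
// of the original project.
class SearchHelpers {
    private final WebDriver driver;

    SearchHelpers(WebDriver driver) {
        this.driver = driver;
    }

    void doSearch(String query) {
        WebElement searchBox = driver.findElement(By.name("q")); // locate the field
        searchBox.clear();                                       // reset previous input
        searchBox.sendKeys(query);                               // type the query
        searchBox.submit();                                      // submit the form
    }
}

The tests themselves do not change at all; only the helper bodies do, which is exactly what keeps the implementation phase short.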

A test that goes through the pages of results could look as follows:

void test02_Paging(String query) {
    // expected page markers "page1" through "page10"
    String[] pageTokens = new String[10];
    for (int i = 0; i < 10; i++) {
        pageTokens[i] = "page" + (i + 1);
    }
    login();
    doSearch(query);
    for (int i = 1; i <= 10; i++) {
        goToPage(i);                                      // stub: open result page i
        assertEquals(pageTokens[i - 1], getPageToken());  // arrays are 0-based
    }
}

In the same way, we can design all the tests selected for automation in advance, saving that time from the implementation phase and shortening the overall duration of automation. As practice showed, we could save up to 50% of the time allocated for test automation: roughly assessed, the design phase accounts for about half of the time allocated to automation, so moving it ahead cuts the in-cycle phase in half and lets automated tests be ready sooner in the testing cycle.

Using this approach, we achieved a 1:1 ratio of tests executed manually to tests executed automatically. This means that only half of the tests were executed manually for each new release; the other half were executed as automated tests right away. This greatly increased automation ROI, because we had no need to execute those tests manually at all, saving up to 40% of manual testing resources each release. Additionally, automated tests could be used for in-project regression testing much earlier, which was another big part of the benefit.

In general, this approach completely changed the role automated testing played in our project, making it at least as important and effective as manual testing.

I hope this helps make your test automation effort more fun! :) Feel free to comment if you have questions about implementation details, or if you see risks that may prevent this from working for you.

Friday, July 17, 2009

Team or group?

I have heard the opinion that most of the work units we are used to calling teams are actually groups. One important feature of a team is having universal performers, so that if one person is out, the work can still be done by another. This is not the case for groups, where people tend to specialize in some area. Replacing such key people is always a problem, because the knowledge and skills they have accumulated are unique, and it takes time to train someone else.

Another difference between teams and groups is structure. Teams do not have defined leaders; there are no superior and inferior members.

Modern software development processes tend to favor teams over groups. Agile encourages cross-training and shared goals, and discourages traditional management practices because they divide people into "us" and "them".

So which is better? I can't say for sure, because the answer depends on what you are trying to achieve, who works for you, and how people are motivated. If you can share the success of a project evenly, then it is great to have a team. If different levels of responsibility are defined, and shares depend on those levels, then you need to stick to a group organization.

I am sure that, done right, teams and groups can be similarly successful.

Tuesday, July 14, 2009

Personal performance monitoring and management

We all see the world through clouded eyes. We believe that people around us think and act the same way we do. The truth is that we are all different: what we see as a problem may be accepted by another person as normal behavior. The art of team performance management lies in making sure every instance of misbehavior is noted and corrected.

The first step on this path is giving a person another perspective. For the sake of experiment, ask your manager to jot down three strengths and three weaknesses in your professional behavior. Most probably you'll be surprised by the results. The difference comes from the different mindset, different assessment criteria, and different priorities used by you and your manager. A look from the outside is always helpful in determining where to direct your self-improvement.

Performance appraisals are written to provide an independent, unbiased, and sufficiently detailed perspective on someone's achievements, strengths, and weaknesses. They help us look at all these things through the prism of another system of values. The best result is achieved when that system of values is not the manager's but the team's. This is not easy, because performance appraisals are written by managers, but it is possible.

To define a proper system of values, or a measure, you need to determine what your team needs most in order to reach its goals. Find the qualities which, if individuals excel at them, will get you closer to the goal. Here is my list, for example:

- Qualification
- Quality of work
- Quantity of work
- Responsibility
- Timeliness (ability to meet deadlines)
- Communication
- Initiative
- Creativity
- Flexibility
- Learning from mistakes

You may use this list or define your own. It works best if defining such a list is a collective effort.

After the list of qualities is defined, you may start giving marks to team members. Most implementation errors are made at this stage. Managers who perform the assessment may give subjective marks, which not only serve the purpose badly but may also be counterproductive. There is no worse mistake in management than undervaluing someone: people feel such things very keenly and take them very personally.

So, to use a team-oriented system of values, we need to collect information from the team. Ask peers to fill in an assessment form for the person. In parallel, write your own assessment and then compare one against the other. And not only that... every difference you find should be explained!

Once you have finished this adjustment, you are ready to write an unbiased, team-oriented assessment. While writing it, keep in mind that the person who is the subject of the appraisal may resist accepting some things in it. Make sure it highlights examples of both good and bad behavior. If you have difficulty remembering such examples, ask your peer reviewers to provide them.

Reading the appraisal to a person is another opportunity to screw things up. Do not make it painful. Explain that its purpose is to help your teammate grow and improve. Make him or her feel comfortable with a joke or two, but do not go too far; do not let it turn into a farce. This is serious business, a team priority.

While writing and reading the assessment, start from the positive achievements, like this: "You did very well at writing the ABC driver. It was a complex task, new to you, yet you performed it at a very high professional level. However, the depth of testing was not sufficient. The defects discovered after you by testers were mostly of the unit type; at least 10 of 15 defects could have been removed had you performed unit testing well enough."

And last but not least, the marks! What are they going to be? I prefer qualitative marks:

- Superior
- Sufficient
- Insufficient

Superior means that the person can be an example for others on the team. Sufficient and Insufficient speak for themselves.

Such marks can be set against the qualities for your teammates and used in team ranking, or even for comparing your team's people against other teams in the company. As a result of such a comparison, one may come up with an enterprise-level ranking of all employees, a dream of every CEO :)
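
As a minimal sketch of the kind of ranking this enables (the 3/2/1 weights and the names are my own assumptions for illustration, not part of the method), the qualitative marks could be mapped to numbers and averaged:

import java.util.Arrays;
import java.util.List;

// Minimal sketch: turn qualitative marks into an average score per person.
// The weights (Superior=3, Sufficient=2, Insufficient=1) and the sample
// data are illustrative assumptions only.
public class TeamRanking {
    static int score(String mark) {
        if (mark.equals("Superior"))   return 3;
        if (mark.equals("Sufficient")) return 2;
        return 1; // Insufficient
    }

    static double average(List<String> marks) {
        int sum = 0;
        for (String m : marks) sum += score(m);
        return (double) sum / marks.size();
    }

    public static void main(String[] args) {
        System.out.printf("Alice: %.2f%n", average(Arrays.asList("Superior", "Sufficient", "Superior")));
        System.out.printf("Bob:   %.2f%n", average(Arrays.asList("Sufficient", "Insufficient", "Sufficient")));
    }
}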

Hope this helps and have a nice assessment!

How to obtain good estimates?

Today we will speak about eliciting estimates from people. It would make the task easier if the peers across the table from you in this matter read "How to make good estimates?" first. Even if they haven't, it is in your hands to lead them down the right path.

First of all, make sure the task is understood by the performer. Ask him or her to paraphrase the assignment back to you, so you can correct misunderstandings before it is too late. It is also important to help the performer identify which tasks performed in the past could be used for comparison or for verification of the estimate. Sit down together and find what you did before that can be measured against the new task (a similar project or task). However, be careful with examples more than three times bigger or smaller in size.

If you don't have examples and the task is completely new to the performer, then define a plan for analyzing the problem (investigation, prototyping, consulting gurus, etc.). One of the most important parts of this stage is having a concrete goal: a milestone at which the investigation will be complete, or at least a date by which its completion will be known.

Once you have the first estimate, do not accept it as is. Question it. People tend to make estimates too optimistic and then have trouble meeting them afterwards. In order not to fall prey to wishful thinking, start asking "what if?" questions. What if that part is not delivered to you for testing on the date you assumed? What if it takes longer to test your code? (I have seen testing of such modules take up to 30% of development time.) What if the product requirements are not finished by the desired date? What else can you do to mitigate that risk? Asking such questions will help you analyze the different scenarios of "how it may go wrong", which are usually neglected by people accustomed to thinking overly optimistically.

It is also helpful to build your own estimate, or to ask another expert to provide a variant of the estimate. Comparing alternatives is key to finding out what has been missed from consideration. Creating estimates from different parameters is another way of producing alternatives for the analysis (for example, calculate the time needed to test from development hours vs. the same figure calculated from the number of requirements; see the sketch below). Never use the average, or some percentage, of several estimates. That is completely wrong, because our goal is to find out why the estimates differ, so we can correct them with the information that was missed from consideration during the initial estimation.
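
Here is a minimal sketch of that two-variant comparison. The ratios (30% of development hours, 1.5 hours per requirement) and the sample figures are invented for illustration, not recommendations:

// Sketch: derive a testing estimate two independent ways and compare.
// The 0.30 ratio and 1.5 hours/requirement are illustrative assumptions.
public class EstimateComparison {
    public static void main(String[] args) {
        double devHours = 400;      // planned development effort
        int requirements = 120;     // number of requirements to cover

        double fromDevHours = devHours * 0.30;        // testing as ~30% of dev time
        double fromRequirements = requirements * 1.5; // ~1.5 hours per requirement

        System.out.printf("From dev hours:    %.0f h%n", fromDevHours);     // 120 h
        System.out.printf("From requirements: %.0f h%n", fromRequirements); // 180 h
        // A gap this large is a signal to investigate what one of the
        // models is missing, not to average the two numbers away.
    }
}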

That's it! It's a pretty easy and straightforward process :)

As for estimation methods, I recommend reading "How much does a project cost?" by Steve McConnell.

Monday, July 13, 2009

How to make good estimates?

All of us have faced the problem of giving estimates. The trouble is that sooner or later we have to meet the milestones planned from our estimates, and when we can't, we get our due portion of frustration from managers.

Estimation is a projection. To do it precisely, you need to know as much about the task as possible in advance. If a task is new to you, you will hardly be able to predict all the details with due precision. The best estimates are made for tasks we know well: the more we know about the task, the more precise our estimate can be.

We can compare the task against known tasks we performed in the past, or we can simply use our experience with similar tasks. If we have never done anything alike before, it makes sense to do research or build a sample that will provide clues about how long the task may take. For example, suppose you need to translate all your code from Visual Basic to .NET but have never done that before. Before giving estimates, you can select a piece of code as a sample, port it to .NET, and extrapolate the result to the whole body of code (see the sketch below).
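
A minimal sketch of that extrapolation, with invented sizes and timings:

// Sketch of the sampling idea above: port a small sample, then extrapolate.
// The sizes and timings are invented numbers for illustration.
public class PortingEstimate {
    public static void main(String[] args) {
        double sampleKloc = 2.0;   // size of the sample ported to .NET
        double sampleDays = 3.0;   // time the sample actually took
        double totalKloc = 100.0;  // total size of the Visual Basic code base

        double estimateDays = totalKloc / sampleKloc * sampleDays;
        System.out.println("Rough estimate: " + estimateDays + " days"); // 150.0
        // Extrapolation assumes the sample is representative of the rest
        // of the code, so pick the sample accordingly.
    }
}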

An estimate is a measure of something that has not happened yet. When we talk about the future, we always consider different variants of how things may go: defects created by a newbie, a new development environment, or even the power supply may bring us surprises. Since we are assuming and measuring the future, every estimate implies a probability. A point estimate (like 8 or 10 days) has almost a 100% probability of failing (because the actual work can take 7.5 days or 11 days). So it is very unwise to give point estimates. Make it a range and share your view of the probability. Whatever you choose for the range of values, it should be big enough to make you confident you can make it.
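
One common way to express such a range numerically, which this post does not prescribe but which fits the advice, is the classic three-point (PERT) formula; the sample numbers below are invented:

// Classic three-point (PERT) estimate: one common way to turn
// optimistic/likely/pessimistic guesses into a range. Sample numbers invented.
public class ThreePoint {
    public static void main(String[] args) {
        double optimistic = 6, mostLikely = 9, pessimistic = 16; // days

        double expected = (optimistic + 4 * mostLikely + pessimistic) / 6; // ~9.7
        double spread = (pessimistic - optimistic) / 6;                    // ~1.7

        System.out.printf("Expected: %.1f days, spread: +/- %.1f days%n", expected, spread);
        // Report the range (roughly 8 to 12 days here) rather than a single point.
    }
}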

For example, if I were asked to estimate the weight of a spaceship (something I have no idea about), I would provide a very rough estimate. Rough means that the range would be really big (50 to 200 tons).

Rough estimates are not what will make your manager happy. Actually, he or she will be a bit confused and perplexed by them, because they can't be used for planning. To resolve the confusion, explain what you are going to do about it. Tell your manager what other information you need, or what prototyping or investigation you have to do, to learn more about the problem.

As I wrote above, the more we learn about the task, the more precisely we can project its pace and timing. Do not hesitate to ask for help. Managers can also be useful ;) They can help you find the required hardware or procure tools. They can even suggest whom to ask for a consultation on a specific problem. Inform your manager what else you need to learn about the task before you can make a more precise estimate. Also mention when you can revise your estimate, so the manager can adjust his or her plans accordingly.

A good estimate may undergo several revisions before it becomes reliable. A reliable estimate is one you believe you can meet almost for sure. "Almost" is for the contingencies that can hardly be predicted in advance (power outage, system crash, network problems, illness of key people, etc.).

To improve, keep tracking the precision of your estimates at different stages (project initiation, design, implementation, testing, and maintenance). Analyze the deviations to find out what to focus on in the future. Achieving precision within 10% is considered really good.
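
A minimal sketch of the kind of tracking I mean, with invented figures (here the error is measured against the actual value; measuring against the estimate is an equally common convention):

// Sketch: relative estimation error per stage, the number worth tracking.
// The sample estimated/actual figures are invented for illustration.
public class EstimateAccuracy {
    static double errorPercent(double estimated, double actual) {
        return Math.abs(actual - estimated) / actual * 100.0;
    }

    public static void main(String[] args) {
        System.out.printf("Design: %.1f%%%n", errorPercent(10, 12)); // 16.7% - investigate
        System.out.printf("Coding: %.1f%%%n", errorPercent(20, 21)); // 4.8% - within 10%
    }
}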

Following these recommendations, you will find out pretty soon that making estimates is not a pain but fun. Good luck!

P.S. Next post will be on how to elicit good estimates. Stay tuned!

Friday, July 10, 2009

Microsoft vs. Google

The growing netbook market is what both MS and Google are targeting with their new operating systems. Both companies' projections say this opportunity will grow significantly in the near future. Even now, few can imagine what they would do with a computer without the Internet.

MS is ahead, but Google seems to be in better shape for a long race. Google controls a big part of the Internet, which makes it easy for them to enter the market, so the MS brand and popularity will not be such a big deal. Google will also open the source code of their system; they believe it will help close the gap in the number of software solutions available for their system pretty fast.

Besides, while Microsoft has an advantage in this market, Google can significantly undermine Microsoft's position in the market of operating systems for mobile Internet devices.

Anyway, it's going to be a thriller. End users will inevitably win from this competition.

Wednesday, July 8, 2009

How much does quality cost?

It is natural to think that to get perfection one needs to pay for it, and that is really hard to argue with. However, when it comes to quality, things may not be what they seem at first glance. Quality may cost you additional time spent on operations you would not otherwise need to produce a good product. But we are all human and are not perfect when it comes to the quality of our work; in other words, we do make mistakes. So we need all these additional measures within the production cycle that allow us to see whether the product complies with standards and requirements.

But when we talk about the time and effort spent following procedures whose only purpose is to keep us from making errors, we should not forget about the savings we get from using them. We introduced all those practices not just in case, but in response to problems we had in the past, in response to defects introduced or missed in previous production cycles. It is an illusion that depriving ourselves of them would save our time and resources!

The formula for the cost of quality includes not only the explicit expense of following quality processes but also the implicit savings we get from following them. If there were no savings, why bother doing unnecessary work? Even when process changes are introduced "just in case," we still do it to decrease the risk of problems in the future (lost user data -> frustrated clients -> bad reputation -> drop in sales).

In any case, if we make changes to the process, we believe we are going to get some savings, be it a lower number of defects found in system testing or fewer calls to customer support. So we can infer that quality does not cost us money. Instead, it saves us money!

If you are still not convinced that introducing quality practices need not cost you a coin, just look at two automobile vendors: Alfa Romeo and Toyota. Their cars can be in the same price niche, provide similar functionality, and cost almost the same, but the difference in quality is huge. It is remarkable that Toyota does not spend more money producing its cars than Alfa Romeo, because they have a good quality system in place: they spend more in the early stages of development to save time in the later stages.

To create such a system in your company, just look at the history of previous projects. Analyze the defects that escaped the development and testing phases. Try to estimate the cost of one fix at different phases, and introduce practices that will eliminate the most common mistakes. You can easily calculate the savings you will get from introducing those practices and then compare them to the additional cost of following them (a back-of-the-envelope sketch follows). I am sure the benefit will totally outweigh the overhead expense. If it doesn't, just write me back; I am interested in that sort of anomaly ;)
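
For what it is worth, here is a minimal sketch of that comparison. Every number in it is invented; substitute your own project history:

// Back-of-the-envelope sketch of the savings-vs-overhead comparison above.
// All figures are invented for illustration; use your own project's data.
public class QualityRoi {
    public static void main(String[] args) {
        int escapedDefects = 50;      // defects reaching system testing per release
        double fixAtSystemTest = 4.0; // hours to fix a defect found that late
        double fixAtReview = 1.0;     // hours to fix it if caught in code review
        double reviewEffort = 20.0;   // hours spent on code reviews per release

        // Assumes reviews would catch these defects; adjust for your catch rate.
        double savings = escapedDefects * (fixAtSystemTest - fixAtReview);
        System.out.println("Savings:  " + savings + " h");                  // 150.0 h
        System.out.println("Overhead: " + reviewEffort + " h");             // 20.0 h
        System.out.println("Net gain: " + (savings - reviewEffort) + " h"); // 130.0 h
    }
}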

Tuesday, July 7, 2009

I can generate all tests! Wow!

Today I took part in a beta program for a test generation tool. The tool accepts a list of parameters with their possible values as input and generates combinations representing virtually complete coverage (all values of all parameters represented at least once); a sketch of the idea follows.
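
Here is a minimal sketch of that kind of generation as I understood it (sometimes called each-choice coverage). The parameter names and values are invented for illustration:

// Sketch of "every value of every parameter appears at least once" coverage.
// Parameter names and values are invented; the tool's internals are unknown to me.
public class EachChoice {
    public static void main(String[] args) {
        String[][] params = {
            {"IE", "Firefox", "Opera"}, // browser
            {"XP", "Vista"},            // OS
            {"en", "de", "fr", "ru"}    // locale
        };
        int rows = 0;
        for (String[] p : params) rows = Math.max(rows, p.length);

        // Cycling through each parameter's values, 'rows' tests are enough
        // to show every value of every parameter at least once.
        for (int i = 0; i < rows; i++) {
            StringBuilder test = new StringBuilder();
            for (String[] p : params) test.append(p[i % p.length]).append(' ');
            System.out.println(test.toString().trim());
        }
    }
}

Four generated rows suffice here, versus 3 * 2 * 4 = 24 for the full cross-product.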

Although the idea is good, the practical use of such a tool is in doubt. Of the dozens of systems we have tested, I could name one or two where we could apply this idea. The idea of cutting testing cost by combining test inputs is not new: I became acquainted with it while reading Marick's "The Craft of Software Testing" at the end of the 20th century. The author tried to minimize the test count by carefully combining input values, taking into account the dependencies between them. It was beautiful, but it proved useless in real practice: the time spent optimizing tests can just as easily be spent running more tests (including those removed by the optimization).

Another problem with the idea is that combinations cannot be viewed from a mathematical point of view alone. Other factors can affect the selection of combinations to try. We may know that testing under a specific browser is more important because the application is very likely to fail in it, whereas testing under another browser can be minimal because we know all development is performed using it. So the context does matter, and test generation tools must take it into account or they will remain useless.

Saturday, July 4, 2009

Who to blame for bad quality?

"What a silly question?" a CEO would say and call for a quality manager: -"Bring me that looser. I told him more than once that this is not acceptable - to produce the software with critical defects. It hurts our business and stay in way of expanding it beyond the limits of Solar System."

Many times I have seen managers point their fingers at the person who bears the word "quality" in their job title. But I have seen few of the latter who could really do anything about it. What is it, then? A person to blame regardless of whether he or she could do anything? A scapegoat whose fate is predefined?

And, more importantly, what could we, quality professionals, do about it?

Once I was at a meeting with a big boss who very convincingly explained to us that of all the corners of the famous triangle (quality, features, time), quality is the most important. "We never compromise on quality!" he said. I fell in love with that idea and was full of esteem for that person back then. But... within several releases I realized those were just words; he did not think so. In every new release, features prevailed, until the application started to look like a monster, a biting monster, because users were not happy when it crashed in their hands far too often.

So whom should that boss blame for the quality?

Before proclaiming the answer to the question "who is guilty?", let's focus on solving the problems in development and management that led to defects being injected into the code and not being found during testing. After that, have the people directly responsible for letting those issues through analyze the way they work and provide you with concrete improvement actions that will help avoid similar omissions in the future. But while doing so, keep it impersonal. Let them know that it is OK to make mistakes, as long as they learn from them.

As for the question itself: if you want to blame someone, start with yourself. Just ask yourself, what would you do about the problem? How could you affect the process so the issue is not injected, or is removed, within the production cycle? Since nobody is perfect, you will always find something to improve, somewhere to grow. The answers to these questions will provide you with some insights.

Another very important thing is not to start finger-pointing until you know the root cause of the problem. Make sure you have the arguments to prove someone is at fault; otherwise you risk severely undermining the morale of a person in whose performance and contribution you are interested. There is nothing worse than thinking that whatever you do, you'll get blamed. It is a killer of motivation.

So the short answer to the question is: start with yourself. The more in-depth answer would be: find out the actual reason, have the people involved analyze the problem and come up with action plans, and make corrections to the process.