If you need to base testing estimates on development estimates, read on.
This is definitely not the best way to produce a testing estimate. It would be more correct to have development and testing do their estimates independently. Later on you can compare the two, analyze the difference, and reconcile them properly.
In most cases testing takes 30% to 35% of the development estimate. Taking 35% leaves your team enough time to reach the goals on schedule and with due quality.
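The rule of thumb above can be sketched in a few lines. This is a minimal illustration, not a tool; the 35% default is simply the safer end of the 30-35% range from the text, and the function name is my own:

```python
# Rough testing estimate derived from a development estimate,
# using the 30-35% ratio as an assumption.

DEV_TO_TEST_RATIO = 0.35  # the safer end of the 30-35% range

def rough_test_estimate(dev_estimate: float, ratio: float = DEV_TO_TEST_RATIO) -> float:
    """Return a rough testing estimate in the same unit as the input."""
    return dev_estimate * ratio

print(rough_test_estimate(100))  # 100 dev days at 35% -> 35.0 testing days
```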
But be careful! There are tasks whose execution may overrun the initial estimates. For example:
• Performance testing
• Load testing
• Compatibility testing
• Testing with a real (large) amount of data or on real hardware
• Reliability testing
• Test automation
• Complex environment setup
• Complex business area (business context)
All of the above, as well as risks and any kind of exotic testing, should be estimated separately, and the result should be added to the initial rough estimate.
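Combining the ratio-based base with the separately estimated special tasks could look like this. A sketch only: the task names and the person-day figures are hypothetical, and the 35% ratio is the assumption from the text:

```python
# Total testing estimate = ratio-based base + separately estimated
# "special" tasks (performance testing, automation, environment setup, ...).

def total_test_estimate(dev_estimate: float,
                        special_tasks: dict,
                        ratio: float = 0.35) -> float:
    """Rough ratio-based estimate plus separately estimated special tasks."""
    return dev_estimate * ratio + sum(special_tasks.values())

special = {
    "load testing": 5.0,       # person-days, illustrative figures
    "test automation": 12.0,
    "environment setup": 3.0,
}
print(total_test_estimate(100, special))  # 35.0 + 20.0 -> 55.0
```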
Here is the full algorithm as I see it:
1. Developers and testers do their estimates separately.
2. The estimates are then compared and the difference is analyzed.
3. If the ratio between the estimates is as expected, you are done.
4. If not, the difference is explained and corrective actions are taken.
• Do not simply take the biggest estimate "just in case" - this is inefficient.
• Do not simply take the "most trusted" one - explain the difference instead.
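Steps 3 and 4 above boil down to a simple band check on the testing/development ratio. A minimal sketch, assuming the 30-35% band from earlier in the post; the function name is mine:

```python
# Check whether an independently produced testing estimate falls inside
# the expected ratio band relative to the development estimate.

def ratio_as_expected(dev_estimate: float, test_estimate: float,
                      low: float = 0.30, high: float = 0.35) -> bool:
    """True if the testing/development ratio is within the expected band."""
    return low <= test_estimate / dev_estimate <= high

print(ratio_as_expected(100, 33))  # True: 33% is inside 30-35%
print(ratio_as_expected(100, 50))  # False: 50% needs an explanation
```

A ratio outside the band is not automatically wrong; it is a trigger for the conversation in step 4.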
The biggest ratio I have ever encountered in my career was 41% of the development effort. It was due to heavy use of test automation and a complex test environment.
Happy estimation! :)