Quality Testing

Quality is delighting customers

Software Testing Related Metrics



This group discusses the different types of software testing metrics used to measure the state of the software and the adequacy of the testing.

Website: http://www.qualitytesting.info/group/softwaretestingrelatedmetrics
Members: 85
Latest Activity: Mar 22, 2018


Discussion Forum

Web based Testing knowledge.

Started by PraNa. Last reply by pradeep s Aug 11, 2013. 4 Replies

Hi All, I have an interview where the requirement is for web-based testing, and I have not worked in that area. What should I expect at the interview? All your suggestions are valuable. Regards, PraNa.

What is a testing matrix?

Started by Kishore GP. Last reply by Swarupa Rani May 7, 2010. 1 Reply

Please explain

Comment Wall




Comment by Kiran on July 24, 2013 at 2:32pm
1) Test Process Metrics
The following are provided during the Test Preparation stage of the Test Phase:
• Test Preparation
- Number of Test Requirements Vs Functional Areas/Requirements (Test coverage)
- Number of Test Cases Planned Vs Ready for Execution
- Total Time Spent on Preparation Vs Estimated Time

The following are provided during the Test Execution stage of the Test Phase:
• Test Execution and Progress
- Number of Test Cases Executed Vs Test Cases Planned
- Number of Test Cases Passed, Failed and Blocked
- Total Number of Test Cases Passed by Functional Areas/Requirements
- Total Time Spent on Execution Vs Estimated Time

2) Test Product Metrics
• Bug Analysis
- Total Number of Bugs Raised and Closed per Period
- Total Number of Bugs Closed Vs Total Number of Bugs Re-Opened (Bounce Rate)
- Bug Distribution Totals by Severity per Period
- Bug Distribution Totals by Functional Areas/Requirements by Severity per Period
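The bounce rate above can be sketched in a few lines of Python (a minimal illustration; the function name and sample numbers are the commenter's metric restated, not from any specific tool):

```python
def bounce_rate(closed: int, reopened: int) -> float:
    """Percentage of closed bugs that were subsequently re-opened."""
    if closed == 0:
        return 0.0
    return reopened / closed * 100

# Example: 40 bugs closed in a period, 6 of them re-opened.
print(bounce_rate(40, 6))  # 15.0
```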
Comment by Thamini Hettiarachchi on July 4, 2012 at 5:11pm

Apart from the metrics mentioned below, Compiler Warnings, Static Analysis Warnings, Maximum TCC (the highest Total Cyclomatic Complexity value in the codebase) and Code Coverage % could also be tracked.

Comment by Parag Panchal on July 14, 2011 at 1:10pm

I am looking for a Defect Metrics template in Excel format. Does anyone have one? If so, please send it to me at the email address below.

Parag Panchal

Comment by Sudha Ankireddy on July 29, 2010 at 4:23pm
Test Metrics
Metrics are defined as “standards of measurement” and have long been used in the IT industry to indicate a method of measuring the effectiveness and efficiency of a particular activity within a project.

Also known as Software quality metrics.

Several test metrics are identified as part of the overall testing activity in order to track and measure the entire testing process. These metrics are collected at each phase of the testing life cycle/SDLC and analyzed, and appropriate process improvements are determined and implemented as a result. They are constantly collected and evaluated as a parallel activity alongside testing, for both manual and automated testing, irrespective of the type of application.

PROCESS-related test metrics: Defect Removal Efficiency, Review Efficiency, Test Case Efficiency, Test Efficiency, Test Effectiveness, etc.

PRODUCT-related test metrics: Defect Density, Cumulative Weighted Defect Density, Defect Severity Index, etc.

Objectives of Test Metrics
These metrics indicate the quality of the product under test and can be used as a basis for estimating the defects to be addressed in the next phase or the next release. This is an organizational measurement.

Test Metrics is a mechanism to know the effectiveness of the testing that can be measured quantitatively. It is a feedback mechanism to improve the Testing Process that is followed currently.

When to introduce Test Metrics?
1. Identifying Test Metrics is done at the beginning of the test project.
2. Test Metrics are collected at each phase of the testing.

What are the inputs for Test Metrics?
1. Quantity of test cases prepared / performed.
2. Quantity of Defects found.
3. Size of code developed in KLOC.

Formulae for frequently used Test Metrics
1. Defect Removal Efficiency (DRE) = (E / (E + D)) x 100, where
E = pre-delivery errors (detected during all QC/QA activities)
D = post-delivery defects
Objective: Reduce pre- and post-delivery defects in all deliveries. Indicates the efficiency of defect removal methods, as well as an indirect measurement of the quality of the product.
Unit: %
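The DRE formula above can be sketched as follows (a minimal illustration; the function name and sample counts are invented for the example):

```python
def defect_removal_efficiency(pre_delivery: int, post_delivery: int) -> float:
    """DRE = E / (E + D) * 100, where E = pre-delivery errors
    and D = post-delivery defects."""
    total = pre_delivery + post_delivery
    return pre_delivery / total * 100 if total else 0.0

# Example: 90 defects caught before delivery, 10 found afterwards.
print(defect_removal_efficiency(90, 10))  # 90.0
```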

2. Review Efficiency (RE) = (No. of defects found in reviews x 100) / total no. of defects found before delivery (both reviews and testing)
Objective: Reduce pre-delivery defects.
Unit: %

3. Defect Density = Defects found / size in KLOC
Objective: This metric indicates the quality of the product under test. It can be used as a basis for estimating defects to be addressed in the next phase or the next version, and to reduce defect leakage during coding/design.
Unit: No. of defects per unit size, e.g. defects per KLOC

4. Cumulative Weighted Defect Density = No. of weighted defects (review issues + testing defects) / product size (actual, in KLOC)
Here weighted defects = Major defects + (Minor defects)/3 + (Trivial defects)/5
Objective: To know the weight of the issues found during the review and testing phases.
Unit: No. of defects per unit size
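A quick sketch of the weighted defect density calculation (the function name and sample figures are illustrative only):

```python
def weighted_defect_density(major: int, minor: int, trivial: int,
                            size_kloc: float) -> float:
    """Weighted defects = Major + Minor/3 + Trivial/5, divided by size in KLOC."""
    weighted = major + minor / 3 + trivial / 5
    return weighted / size_kloc

# Example: 6 major, 9 minor, 10 trivial defects in a 5 KLOC product.
print(weighted_defect_density(6, 9, 10, 5.0))  # (6 + 3 + 2) / 5 = 2.2
```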

5. Defect Severity Index = [Sum of (no. of defects x severity level)] / total number of defects
Here a number is assigned to each severity level: 4 (Critical), 3 (Major), 2 (Medium), 1 (Minor)
Objective: Provides a direct measurement of the quality of the product, specifically its reliability, fault tolerance and stability.
Unit: none (a real number)
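The severity index can be computed from a count per severity level, as sketched below (the function name and the sample distribution are invented for the example):

```python
def defect_severity_index(counts: dict) -> float:
    """counts maps severity level (4=Critical .. 1=Minor) to number of defects."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return sum(level * n for level, n in counts.items()) / total

# Example: 2 critical, 4 major, 6 medium, 8 minor defects.
print(defect_severity_index({4: 2, 3: 4, 2: 6, 1: 8}))  # (8+12+12+8)/20 = 2.0
```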

6. Test Case Efficiency = (Number of defects detected / number of test cases run) x 100
Objective: To know the efficiency of the test cases executed in the testing phase; this helps determine the quality of the test cases.
Unit: %

7. Test Efficiency and Test Effectiveness:
These are two vital metrics that always come to mind when thinking about metrics.
Let us talk about the meaning of efficiency. Most definitions state efficiency as the ratio of output to input of a system, expressed as a percentage (delivered/supplied). The following definition works well:
Noun: the ratio of the effective or useful output to the total input in any system.
Efficiency, therefore, is an attribute that means maximizing the useful output for a given input, reducing wastage or losses. Efficiency cannot exceed 100%, in the sense that a 100% efficient system has zero losses.
Test Efficiency (TE)
A consultant, as part of any engagement, wants to deliver the solution on time, within budget, on spec, and with an acceptable level of quality as quantified by the customer. To achieve this goal, the team should work efficiently, constantly showing progress for the effort put in. Test efficiency is a quality attribute of the test team: carrying out all testing activities efficiently, saving cost and time. A few test efficiency focus areas:
1. Resources
2. Tools
3. People
4. Process
5. Time

Looking at the broader picture of the metric, test efficiency is not only about test execution, but about all or most test activities: test planning, comprehension, test case creation, review, execution, defect tracking and closure. TE is not a single derivation but a number of calculations at each phase and activity of testing. Which activities or phases are of interest, and what exactly is measured, depends on many things: the type of project, its complexity, the availability of resources, the situation, customer requirements, etc.
So TE, like Test Effectiveness (TEF), can be calculated for a specific set of test activities. For example, test preparation efficiency would be the time taken for 'X' test cases to be prepared, reviewed and reworked to finalize them. There is a catch here: the quality standards for the test cases should be predefined. A few such standards:
1. The Test cases are complete with respect to Use Cases on which they are based.
2. A tester should be able to execute this test case using only this test case and any directly referenced items given the proper software and hardware configuration.
3. Test data must be specific. For example, don’t say “select any menu option to navigate out of current page.” Say “Click the Back button.” Don’t leave any test data to the imagination of the tester.
4. Usually, each test step should contain a single action. E.g. “Save” and “Search” functionalities should be split into 2 steps
5. Test Case names follow the agreed upon naming convention
6. No grammatical mistakes

Assume two test teams with an equal number of resources, comparable skill sets and functional expertise, working on the same product. If test team 'A' prepares 400 test cases of the agreed quality in 5 days, and test team 'B' prepares 400 test cases for the same product in 4 days, which team is efficient? Definitely team B, but there is no guarantee that team 'B' is effective: we are not sure how many defects team 'B' can uncover compared to team 'A'. That is our next topic.
Test Effectiveness (TEF)
Efficacy, in contrast to efficiency, focuses on producing the desired result or effect, not on the resources or time spent.
Noun: effectiveness means the capability of producing an effect.
Test effectiveness of a technique, a system or a team is its ability to find defects and isolate them in a product or deliverable. Test effectiveness ensures quality and closes the two quality gaps: the producer's quality gap and the customer's quality gap. As the definition of quality goes, quality is both process and product quality, which means meeting customer requirements and conforming to the product specification. These metrics should be quantified, as they closely relate to quality, and for many people the term quality is relative.
Can we take a simple example of calculating the test effectiveness during each testing phase?

Defect found in phase (rows) vs. defect originated/injected (columns):

Found in \ Injected in   Requirements  Design  UT  IT  ST   Total
Requirements                   2                               2
Design                         3       12                     15
UT                             2        1      22             25
IT                             1        2       4  15         22
ST                             1        2       2   2    6    13
Production                     1        1       1   2    3     8
Total                         10       18      29  19    9    85

The table shows the defect origin on the X-axis (where a defect was injected, where it belongs) and on the Y-axis the phase where the defect was detected.
Let us calculate the test effectiveness of the Integration testing activity:
Total number of defects of all origin found during Integration testing activity = 22
Total number of defects existing while entering in IT = (10+18+29) – (2+15+25) = 15
Total number of defects injected in the current stage = 19
Effectiveness of IT test phase = Total defects found in this phase/ (No of defects existing + injected)
Effectiveness = 22/ (15+19) * 100 = 64.70%
Things to Remember
1. Keep Test Metrics Simple
2. Create Meaningful Metrics
3. Use Metrics to Manage the Project
4. Track Metrics
Conclusion
It is not enough to have a set of metrics that are tracked on a regular basis. The metrics must also be reviewed and analyzed regularly, as they can provide valuable feedback during and after a software development project.

Acronyms
DRE: Defect Removal Efficiency
IT: Integration Testing
KLOC: Kilo Lines of Code
QC: Quality Control
QA: Quality Assurance
RE: Review Efficiency
SDLC: Software Development Life Cycle
ST: System Testing
TE: Test Efficiency
TEF: Test Effectiveness
UT: Unit Testing
Comment by Kiran on April 27, 2010 at 6:09pm
The Product Quality Measures:

1. Customer satisfaction index

This index is surveyed before and after product delivery (and on an ongoing periodic basis, using standard questionnaires). The following are analyzed:

•Number of system enhancement requests per year
•Number of maintenance fix requests per year
•User friendliness: call volume to customer service hotline
•User friendliness: training time per new user
•Number of product recalls or fix releases (software vendors)
•Number of production re-runs (in-house information systems groups)
2. Delivered defect quantities

These are normalized per function point (or per LOC) at product delivery (first 3 months or first year of operation) or ongoing (per year of operation), by level of severity and by category or cause, e.g. requirements defect, design defect, code defect, documentation/online-help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users

•Turnaround time for defect fixes, by level of severity
•Time for minor vs. major enhancements; actual vs. planned elapsed time
4. Product volatility

•Ratio of maintenance fixes (to repair the system & bring it into compliance with specifications), vs. enhancement requests (requests by users to enhance or change functionality)
5. Defect ratios

•Defects found after product delivery per function point.
•Defects found after product delivery per LOC
•Pre-delivery defects: annual post-delivery defects
•Defects per function point of the system modifications
6. Defect removal efficiency

•Number of post-release defects (found by clients in field operation), categorized by level of severity
•Ratio of defects found internally prior to release (via inspections and testing), as a percentage of all defects
•All defects include defects found internally plus externally (by customers) in the first year after product delivery
7. Complexity of delivered product

•McCabe's cyclomatic complexity counts across the system
•Halstead’s measure
•Card's design complexity measures
•Predicted defects and maintenance costs, based on complexity measures
8. Test coverage

•Breadth of functional coverage
•Percentage of paths, branches or conditions that were actually tested
•Percentage by criticality level: perceived level of risk of paths
•The ratio of the number of detected faults to the number of predicted faults.
9. Cost of defects

•Business losses per defect that occurs during operation
•Business interruption costs; costs of work-arounds
•Lost sales and lost goodwill
•Litigation costs resulting from defects
•Annual maintenance cost (per function point)
•Annual operating cost (per function point)
•Measurable damage to your boss's career
10. Costs of quality activities

•Costs of reviews, inspections and preventive measures
•Costs of test planning and preparation
•Costs of test execution, defect tracking, version and change control
•Costs of diagnostics, debugging and fixing
•Costs of tools and tool support
•Costs of test case library maintenance
•Costs of testing & QA education associated with the product
•Costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)
11. Re-work

•Re-work effort (hours, as a percentage of the original coding hours)
•Re-worked LOC (source lines of code, as a percentage of the total delivered LOC)
•Re-worked software components (as a percentage of the total delivered components)
12. Reliability

•Availability (percentage of time a system is available, versus the time the system is needed to be available)
•Mean time between failure (MTBF).
•Mean time to repair (MTTR)
•Reliability ratio (MTBF / MTTR)
•Number of product recalls or fix releases
•Number of production re-runs as a ratio of production runs
Comment by Kiran on April 27, 2010 at 6:08pm
Metrics for Evaluating Application System Testing:

Metric = Formula

Test Coverage = Number of units (KLOC/FP) tested / total size of the system

Number of tests per unit size = Number of test cases per KLOC/FP

Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria

Defects per size = Defects detected / system size

Test cost (in %) = Cost of testing / total cost *100

Cost to locate defect = Cost of testing / the number of defects located

Achieving Budget = Actual cost of testing / Budgeted cost of testing

Defects detected in testing = Defects detected in testing / total system defects

Defects detected in production = Defects detected in production/system size

Quality of Testing = No of defects found during Testing/(No of defects found during testing + No of acceptance defects found after delivery) *100

Effectiveness of testing to business = Loss due to problems / total resources processed by the system.

System complaints = Number of third party complaints / number of transactions processed

Scale of Ten = Assessment of testing by giving rating in scale of 1 to 10

Source Code Analysis = Number of source code statements changed / total number of tests.

Test Planning Productivity = No of Test cases designed / Actual Effort for Design and Documentation

Test Execution Productivity = No of Test cycles executed / Actual Effort for testing
Comment by Kiran on March 2, 2010 at 4:52pm
It covers the DRE and Test Coverage metrics.
Comment by Kiran on February 25, 2010 at 5:23pm
Hi Siva, we will gather all useful and necessary metrics and define how we can use them. Please share any metrics related to the testing process that you know of.
