Metrics are all about creating actions around business goals. There are numerous quality and testing metrics to choose from, so it is important to select the ones that truly suit your situation. The metrics I have found most useful and cost-effective are discussed below.

Before committing to any metric, be prepared to perform the following five steps. If you are not willing to do all of them, then forget about using the metric.

  1. Define goals
  2. Define metrics
  3. Find data sources and collect data
  4. Reveal trends
  5. Take actions

Metrics don't produce business value unless they are related to legitimate business goals. If, for example, customer satisfaction is your business goal, then a metric that measures customer defects is a good choice and a programmer productivity metric is a poor one.

Once you have a specific metric in mind, be certain you have known, clean data sources to support it. Collect data and then apply the metric to transform the data into information that reveals trends for decision making. Before committing to the metric, think through the possible range of data values and determine in advance the decision criteria that will make sense for your business goals.
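For illustration, here is a minimal Python sketch of steps 3 through 5: collecting weekly open-defect counts, revealing the trend, and applying decision criteria agreed on in advance. The counts and thresholds are hypothetical, not taken from any real project.

    # Hypothetical weekly open-defect counts from a known, clean data source.
    open_defects = [42, 45, 39, 36, 30, 28]

    def weekly_trend(counts):
        """Average week-over-week change in the open-defect count."""
        deltas = [b - a for a, b in zip(counts, counts[1:])]
        return sum(deltas) / len(deltas)

    slope = weekly_trend(open_defects)

    # Decision criteria determined before any data was collected.
    if slope > 0:
        print("Trend rising: investigate before proceeding.")
    elif slope > -2:
        print("Trend flat: continue testing and review next week.")
    else:
        print("Trend falling: consistent with release criteria.")

The point is that the thresholds exist before the data does; otherwise the metric invites after-the-fact rationalization.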

Finally, only commit to a metric if you are prepared to take action based on the trend information. Metrics always carry overhead. If you are not willing and able to act, then don't waste your team's energy on the metric. As you will see, there are many excellent metrics listed below. Some of them, however, are not good choices for many organizations, simply because acting on them is too expensive.

Three categories of metrics driven by business goals are customer satisfaction, asset preservation, and process efficiency and improvement.

Here is a list of proven metrics grouped according to the business purpose they serve. Afterwards, a case study will reveal the subset that in my experience is most often useful and economical.

Metrics Definitions

Customer Satisfaction

  • Escaped Defects This metric tracks customer-reported defects that have escaped all software quality processes.
  • Performance and Load This metric captures the performance and load capabilities of the software. The trend chart often includes a benchmark of predicted future customer performance requirements (a comparison sketch follows this list).
  • Scenario This metric represents test coverage of customer-specific configuration, profile-based, or end-to-end scenarios.
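As a minimal sketch of the performance-and-load comparison, the Python below checks measured load capacity against a benchmark of predicted customer requirements. All figures and quarter labels are hypothetical.

    # Predicted peak concurrent users per quarter (the benchmark trend).
    predicted = {"Q1": 800, "Q2": 900, "Q3": 1050, "Q4": 1200}

    # Capacity measured so far in the load-test configuration.
    measured = {"Q1": 950, "Q2": 970, "Q3": 1000}

    for quarter, required in predicted.items():
        capacity = measured.get(quarter)
        if capacity is None:
            print(f"{quarter}: requires {required}, not yet measured")
        elif capacity >= required:
            print(f"{quarter}: OK (capacity {capacity} >= required {required})")
        else:
            print(f"{quarter}: GAP (capacity {capacity} < required {required})")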

Asset Preservation

  • Asset Decay Based on the inventory of current software assets, this metric rates what portion of them is not operational, non-conforming to maintenance standards, or otherwise degraded (a simple calculation sketch follows this list).
  • Asset vs. Demand Assets require investment to build and maintain. This metric tracks the trend of asset inventory against predicted future needs and is particularly useful for budget justification.
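Here is a minimal Python sketch of the asset-decay calculation. The inventory records and field names are hypothetical; a real inventory would come from an asset-tracking system.

    # Hypothetical inventory of test-suite assets.
    assets = [
        {"name": "login_suite",   "operational": True,  "conforming": True},
        {"name": "billing_suite", "operational": False, "conforming": True},
        {"name": "upgrade_suite", "operational": True,  "conforming": False},
        {"name": "install_suite", "operational": True,  "conforming": True},
    ]

    # An asset counts as decayed if it is not operational or does not
    # conform to maintenance standards.
    decayed = [a for a in assets if not (a["operational"] and a["conforming"])]
    decay_pct = 100 * len(decayed) / len(assets)
    print(f"Asset decay: {decay_pct:.0f}% ({len(decayed)} of {len(assets)} assets)")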

Process Efficiency and Improvement

  • Completeness Using a baseline of tests required for the release, this metric tracks the current level and trend of actual testing (a calculation sketch follows this list).
  • Open Defects This metric tracks internally reported defects, usually summarized as a trend chart subdivided by severity.
  • Stability A trend metric that shows defect discovery rates as the release progresses. A declining rate presumably indicates that quality is improving.
  • Defect Classification The origin of each defect is classified according to its underlying process step or originating organization. Used for pinpointing repeating points of failure in the process.
  • Defect Correction Success This metric indicates what percentage of defects are repaired incorrectly. Used for process improvement.
  • Personnel Productivity For management interested in measuring and possibly increasing productivity, this metric measures the productivity of team members using tests as units of work.
  • Schedule Trend This metric shows a preplanned trajectory on a timeline and tracks actual data against it. Used for project management.
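The completeness calculation itself is straightforward; this minimal Python sketch tracks the cumulative share of the release test baseline actually executed. The baseline size and weekly counts are hypothetical.

    # Hypothetical release baseline and cumulative execution counts.
    baseline_tests = 1200                          # tests required for the release
    executed_per_week = [150, 320, 510, 700, 860]  # cumulative tests executed

    for week, executed in enumerate(executed_per_week, start=1):
        completeness = 100 * executed / baseline_tests
        print(f"Week {week}: {completeness:.1f}% complete")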

Case Study

The following case study is based on my experiences at a company deeply committed to quality. The QA metrics were applied over many years, which enabled me to assess their long-term value to the business. The team structure included an offshore organization.

Challenges and Solutions

Customer Satisfaction

  • Escaped Defects Customer-reported defects were always tracked. Cleanliness of the data was important here: technical support screened the defects and removed those that were questions or usage errors.
  • Performance and Load Tracked the load in a test configuration against a predicted benchmark trend.
  • Scenario A focused, customer-specific set of configuration and end-to-end scenarios was specified and tracked.

Asset Preservation

  • Asset Decay Sizable amounts of code and tests were shipped offshore. Decay was steadily occurring, though nobody noticed it at first because no measurement methodology was in place.
  • Asset vs. Demand Modeling of asset requirement growth was undertaken every year in conjunction with annual budgeting.

Process Efficiency and Improvement

  • Completeness This was one of the most useful QA metrics, especially when combined with the open defects metric. The trend information was always useful early in a release but diminished in reliability as the release date approached: slowing of the trend line was simply a consequence of moving in lockstep with development during final defect correction. Nevertheless, the metric was very valuable overall.
  • Open Defects This was another powerful metric. The challenge here was to keep the defect data clean and relevant. Generally we found that the severity classifications were fine. However, defect data needed to be normalized according to its relevance to customers and release objectives. Defects that could be safely deferred had to be marked so that they didn't appear in the current metric. The housekeeping overhead required for this metric was significant but worthwhile.
  • Stability We discovered that stability at the end of a release reflected the cessation of testing more than the absence of defects. Monitoring the test completion and open defect metrics therefore proved just as relevant as this metric, but cheaper.
  • Defect Classification The defect reporting system was enhanced to capture defect classification data. The biggest challenge was to find ways of getting the organization to respond to the conclusions.
  • Defect Correction Success Data collection was fairly simple. The trend showed the success rate to be steady. As with defect classification, the challenge was to find ways of getting the organization to respond for the sake of improvement.
  • Personnel Productivity This well-intentioned metric was ultimately a challenge to sell to the QA organization. It appeared to them to be intrusive and communicated a sense of mistrust on the part of the management team.
  • Schedule Trend The schedule trend metric makes sense to management because it shows a preplanned trajectory and tracks actual data against it. The main challenge was always to construct a realistic projected trend line. Once actual data started appearing, the extrapolations were always suspect: many factors at the end of the release could suddenly throw off an otherwise promising-looking curve.

Results

  • The following four metrics ended up being on everyone's must-have list. They formed the core of operational metrics that supported customer satisfaction and process efficiency.
       Escaped Defects
       Performance and Load
       Completeness
       Open Defects
  • Defect classification metrics turned out to be academically interesting but not very useful, even though we had a very aggressive process improvement culture. Accurate data collection was the first hurdle. Initial defect classifications were often wrong and required reclassification. The next hurdle was even bigger. It was nearly impossible to get the organization to make the process and culture changes necessary to improve the metrics. Inertia and overhead were just too high.
  • After two years, defect classification was dropped in favor of a cheaper alternative, Root Cause Analysis (RCA). The advantage of RCA was that it used auditing and sampling, so it was not sensitive to data collection errors. Best of all, cultural inertia was far less of an issue. RCA could be applied very effectively by a small, motivated team to their own particular area of the software. This incremental approach served the interests of the business better.
  • Measuring asset decay was necessary because some code and tests were located offshore. However, instead of metrics, we accomplished our goals more economically by auditing the code base just two to three times a year.
  • Even in cases where metrics processing was straightforward, the results weren't always good. Some metrics created behavior changes that didn't fundamentally benefit the business. So-called personnel productivity metrics made people conscious of how the metrics did or didn't operate in their favor. For example, certain metrics counted all tests as equally weighted. Consequently, some important tests, especially ones that were complex to configure and analyze, were avoided because they tended to lower a developer's measured productivity. We discontinued these metrics because they didn't fundamentally improve efficiency or customer satisfaction.

Ultimately, metrics are an invaluable tool for managing your QA operation. However, only commit to a metric when you can fully afford to support all five steps. Always be certain that the metric isn't creating counter-productive behavior. Finally, remember that for some metrics, you can achieve nearly comparable results using alternative methods such as auditing and root-cause analysis.
