Six Sigma
Six Sigma is a statistical measure of the reliability of a process. By extension, the term is also used to refer to a statistical approach to process control and improvement.

Six Sigma can be used when a process is repeated millions of times within a manageable period. In an appropriate context, it can be extremely powerful and effective. However, it is widely misunderstood and often misused.

Six Sigma has been widely used in high-volume manufacturing situations, including computer hardware, and has delivered notable quality successes for such companies as Motorola and Texas Instruments. However, its potential value for computer software is more difficult to realise.
Six sigma can be a useful tool for process improvement. However, it should not be used indiscriminately. There are many situations where six sigma measurement is silly or wrong, and may distract managers from more meaningful metrics.

We strongly recommend that the six sigma technique be used primarily for failure metrics that can be directly related to customer (or end-user) satisfaction, or to high-level safety and security requirements.
Six Sigma - An Introduction
Six Sigma - Statistics
The most quality-conscious companies, notably Motorola, are now setting a demanding level of achievement as their goal: a level of performance managed in relation to plus or minus six standard deviations about the mean. This is known as "Six Sigma."
Six sigma is a mechanism for attacking variation in the product and in the process that produces it. How much variation is acceptable? Is a one per cent error rate acceptable?
Figure 1: Failure Rates in Various Industries
Many large airports have 200 flights landing each day. A one per cent error rate in landing means that every day two planes will miss the runway. This is clearly unacceptable. If, reluctantly, we accept that two misses in eight years is in some way unavoidable then we are accepting an error rate of 1 in 292,000 (3.4 errors per million). This is the six sigma level.
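The arithmetic above can be checked directly; the figures (200 landings a day, two misses tolerated over eight years) are the text's own illustrative assumptions:

```python
# Airport illustration: 200 landings a day, 2 misses tolerated in 8 years.
flights_per_day = 200
years = 8
total_flights = flights_per_day * 365 * years   # 584,000 landings in total
error_rate = 2 / total_flights                  # 2 misses over the whole period

print(total_flights)                     # 584000
print(round(1 / error_rate))             # 292000, i.e. "1 in 292,000"
print(round(error_rate * 1_000_000, 1))  # 3.4 errors per million
```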
With model-based development, a medium-sized model could easily contain around 292,000 objects. So we might extrapolate and suggest that six sigma performance demands no more than one error in a medium-sized model. As Motorola points out, a six sigma program is a major step towards defect-free operation.
Figure 2: Six Sigma
Six sigma is based on statistical measures in which sigma denotes one standard deviation from the mean. Six sigma as defined by Motorola, however, is not a simple matter of managing within plus or minus six standard deviations. Taken literally, that level of control would allow an error rate of only 0.002 per million. Instead, the six sigma approach accepts that the mean is not fixed but can drift up and down, and therefore allows a shift of plus or minus 1.5 sigma in the mean as the drift within its span of control.
Six sigma is therefore concerned with managing both the upper and lower limits of specification and the drift in the mean.
The principle is illustrated in figure 2, where the normal distribution curve is shown along with two others to indicate the point to which a 1.5 sigma shift in the mean takes the curve.
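The figures quoted above (0.002 per million for literal six sigma limits, 3.4 per million once a 1.5 sigma drift is allowed) can be reproduced from the normal distribution. This sketch uses `statistics.NormalDist` from the Python standard library (Python 3.8 or later):

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, sigma 1

# Literal +/- 6 sigma limits with a perfectly centred mean:
# both tails beyond 6 sigma contribute to the defect rate.
literal_dpm = 2 * std_normal.cdf(-6.0) * 1_000_000
print(round(literal_dpm, 3))   # ~0.002 defects per million

# Motorola convention: the mean may drift by up to 1.5 sigma, so the
# nearer specification limit is only 4.5 sigma away (the far tail,
# now at 7.5 sigma, is negligible).
shifted_dpm = std_normal.cdf(-4.5) * 1_000_000
print(round(shifted_dpm, 1))   # ~3.4 defects per million
```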
Six Sigma measures reliability
Reliability can be roughly expressed as an absence of failure.
We need to distinguish between the concepts of fault and failure. A failure is an event in which a system departs from requirements or expectations (predictions). A fault is a defect that may cause failures. A failure is therefore a symptom that there is a fault somewhere. Note that the fault may be in the software, in the operating instructions, or somewhere else; a design defect, for example, may be a fault.
Failures are not always noticed by the users, and not always reported even when noticed. We therefore also need to distinguish failures from reported failures.
The relationships between the three concepts are shown in the following diagram.
Figure 3. Fault / Failure Relationships
A fault may exist for a long time without causing a reported failure (either because the right combination of inputs never occurs, or because nobody notices or cares). Another fault may cause thousands of different failures, and it may take some time for the software engineers to demonstrate that all these failures are due to a single fault.
A failure may be detected by special monitoring software. Some systems may be designed to be self-monitoring. However, such automatic monitoring is only likely to pick up certain classes of failure.
Sometimes it may take several faults acting together to cause a failure. Performance failures may result from the accumulation of many small faults.
To sum up: You can count either faults or failures, but don’t mix them up.
Failure metrics are preferred over fault metrics for one simple reason: they tend to be much easier to relate to customer satisfaction, whereas fault metrics tend to be internal and engineering-focused.
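The distinction can be made concrete with a small sketch (the failure and fault identifiers below are hypothetical): many reported failures may trace back to a single underlying fault, so the two counts must be kept separate.

```python
# Hypothetical failure reports, each traced to a root-cause fault.
reports = [
    ("F-101", "BUG-7"),
    ("F-102", "BUG-7"),
    ("F-103", "BUG-7"),   # one fault manifesting as three failures
    ("F-104", "BUG-9"),
]

failure_count = len(reports)                         # customer-facing metric
fault_count = len({fault for _, fault in reports})   # engineering metric

print(failure_count, fault_count)   # 4 2
```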
Pitfalls of six sigma
Defining metrics from the producer's perspective

A common failing among engineers is to define quality metrics that cannot be related to customer satisfaction. This is particularly the case with fault metrics.
Concentrating on the product, not the process

The results of inspecting a product (such as a complex piece of software) may sometimes be expressed in six sigma terms. This means that the inspectors are counting not failures (of the process) but faults (in the product). Implicitly, of course, they may be counting failures in the production process. But this approach may be of limited value for quality improvement, because the process errors are aggregated and therefore difficult to trace.
Unreliable testing process

There are two ways to get a good score on a six sigma measurement of your manufacturing process. One is to have an excellent manufacturing process. The other is to have an inadequate testing process. (ISO 9001 addresses this pitfall explicitly: clause 4.11 demands that test processes be calibrated.)
Insufficient volumes for meaningful statistics

If you only make a few hundred deliveries a year, it will take thousands of years to demonstrate conformity to six sigma standards (although it may take rather less time to demonstrate non-conformity). For such situations, six sigma measures may be meaningless. Note that a single software model, with half a million objects and no known defects, is not large enough to demonstrate six sigma quality. You would need a series of such models before you could claim six sigma quality.
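A rough calculation supports this pitfall. Assuming, say, 300 deliveries a year (a figure chosen here for illustration) and a six sigma defect rate of 3.4 per million, the first defect would not even be expected for nearly a thousand years:

```python
deliveries_per_year = 300    # assumed low-volume process
six_sigma_rate = 3.4e-6      # 3.4 defects per million opportunities

expected_defects_per_year = deliveries_per_year * six_sigma_rate
years_to_first_expected_defect = 1 / expected_defects_per_year

print(round(years_to_first_expected_defect))   # ~980 years
```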
Measuring unimportant things

One way of getting enough things for a statistically significant sample is to decompose the work into very small items. Consider an organization producing documentation. It may produce dozens of documents per year, containing thousands of pages and millions of words. To get statistically significant error rates, it may be necessary to count the number of incorrect words. The trouble with measuring quality at this minute level of granularity is that one may miss the wood for the trees. All the words may be correct, but the document as a whole may not be fit for purpose.
Six sigma - a true story
As a quality tool, six sigma is hungry for large volumes of data, counting in millions. That is fine if you have high unit production or a highly repeatable process, but many departments did not operate at these volumes. This meant looking for data to feed the tool - often with little or no relevance to customer satisfaction or value - with a distorting effect on quality management.
For example, a group of technical writers was reduced to counting spelling mistakes in large documents.
If you are a technical writer, or an occasional user of documentation, and you cannot think of any quality metric that has greater significance or value than the number of spelling mistakes, you definitely need some consultancy.
Use of six sigma for software
• There are some specific processes that need to be carried out at a very low level of granularity, such as the definition of certain classes of objects in a repository or component catalogue. The correctness of these objects (against some objective criteria to be determined) may provide a measure of the reliability of the associated analysis and design processes.
• The reliability of the testing process may be measured in terms of the correct application of a large volume of test cases.
• The reliability of such processes as configuration management and version control can be measured by the frequency of objects going astray, or the wrong versions being installed.
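For high-volume processes like these, an observed defect rate can be converted back into a sigma level. A minimal sketch, assuming the conventional 1.5 sigma shift discussed earlier (the function name and the sample figures are hypothetical):

```python
from statistics import NormalDist

def sigma_level(defects: int, opportunities: int, shift: float = 1.5) -> float:
    """Sigma level implied by an observed defect rate, using the
    conventional 1.5 sigma shift in the mean."""
    defect_rate = defects / opportunities
    return NormalDist().inv_cdf(1 - defect_rate) + shift

# e.g. 17 wrong versions installed across 5 million configuration actions
print(round(sigma_level(17, 5_000_000), 1))   # ~6.0
```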
Six Sigma - Resources and Links
iSixSigma
This page last updated on September 16th, 2002. Copyright © 1994, 2002 Veryard Projects Ltd. http://www.veryard.com/sqm/sixsigma.htm