

Software Component Quality, by Richard Veryard


Component-based development and software reuse place new demands on software testing and quality assurance. This is especially true if components are to be traded between organizations.

The user or purchaser of a software component wants to know whether it is fit for the intended purpose, and whether it has been adequately tested.

A responsible developer of a software component wants to know how far the component can be trusted in contexts and applications that cannot all be foreseen.

But what does 'responsible' mean in this context? Responsible to whom? In an open distributed world, there is no direct contact or contract between the developer and the user, and no mechanism for establishing trust and accountability.


The purpose of software testing and quality assurance is to confirm that a solution has the desired level of quality.

The quality of the solution is defined as the totality of characteristics of the solution that bear on its ability to meet the user's stated or implied needs. A solution is a collaborating system of artefacts, put in some context for some purpose, and its quality must be assessed holistically.

It is not the job of software testing or quality assurance to attribute blame. If a software component performs well on platform XX, and performs poorly on platform YY, this is a useful piece of information concerning the total system or solution, which may be used to decide what is to be done about the component or about the platform, or both. Although in a given situation, it may be possible to force somebody to take responsibility for fixing one of the subsystems, this possibility arises from the political or commercial context, and is not driven purely by the logic of quality.

Software testing and quality assurance are processes of discovering the true quality of the solution. The process of discovering the true level of quality will often bring about improvements in quality - indeed, this is often the main reason for undertaking the process. But there is no end to the process of discovery. There is no point at which we can say: we now know everything there is to be known about the quality of this solution.


An artefact is fit-for-purpose if it manifests the required behaviour(s) in the intended context(s).

The solution consists of one or many artefacts in some context. What we are really interested in is the quality of the whole solution. It doesn't make any sense to talk about the quality of a single artefact as a stand-alone thing, independent of any particular context. There is no absolute (context-free) measure of quality.

If we want high levels of software reuse, this usually means lots of different contexts. A software component may be used for many different applications, in different business and technical environments, by different developers using different methods and tools for different users in different organizations.


Reuse depends on quality

It is all very well using a reusable component from a catalog or library, on the assumption that someone else has rigorously tested it. But what if they haven't? And if they have, who is to say that their testing is relevant to your intended use of the component?

An effective process of reuse relies on adequate and accurate descriptions of the objects. Indeed, where reuse crosses commercial boundaries, procurement decisions may be based solely on the description: a purchaser may only get the rest of the object after accepting certain payment obligations.

But there may be semantic subtleties or ambiguities in these descriptions. For an illustration of this, consider a business object called FIXED ASSET that implements a particular interpretation of a particular accounting convention for valuing fixed assets. An incorrect choice of accounting convention may significantly affect the balance sheet. Therefore, a quality evaluation of a business object must address its description (or catalog entry) as well as the quality of any executable code. Quality may be improved by clarifying or extending the description. But the perfect description may never be achievable.
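
To make the ambiguity concrete, here is a minimal sketch (Java is used purely for illustration, and the class and method names are invented): two components that both satisfy a catalog entry reading "FIXED ASSET: reports book value", yet implement different depreciation conventions and therefore report materially different figures.

    // Minimal sketch (hypothetical names): two components that both satisfy
    // the same catalog description, yet embody different accounting conventions.
    public class FixedAssetDemo {

        interface FixedAsset {
            /** Book value of the asset after the given number of whole years. */
            double bookValue(int years);
        }

        /** Straight-line convention: equal write-off each year over the life. */
        static class StraightLineAsset implements FixedAsset {
            private final double cost;
            private final int lifeYears;

            StraightLineAsset(double cost, int lifeYears) {
                this.cost = cost;
                this.lifeYears = lifeYears;
            }

            public double bookValue(int years) {
                double depreciated = cost * Math.min(years, lifeYears) / lifeYears;
                return cost - depreciated;
            }
        }

        /** Reducing-balance convention: a fixed percentage of the remaining value. */
        static class ReducingBalanceAsset implements FixedAsset {
            private final double cost;
            private final double rate; // e.g. 0.25 = 25% per year

            ReducingBalanceAsset(double cost, double rate) {
                this.cost = cost;
                this.rate = rate;
            }

            public double bookValue(int years) {
                return cost * Math.pow(1.0 - rate, years);
            }
        }

        public static void main(String[] args) {
            FixedAsset a = new StraightLineAsset(10000.0, 5);
            FixedAsset b = new ReducingBalanceAsset(10000.0, 0.25);
            // Same catalog entry, same interface, materially different balance sheet:
            System.out.println("Straight line, year 2:    " + a.bookValue(2)); // 6000.0
            System.out.println("Reducing balance, year 2: " + b.bookValue(2)); // 5625.0
        }
    }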

Various mechanisms are used in the software industry to allow the purchaser to try out a business object before committing to purchase. These mechanisms often involve giving the purchaser access to a restricted version of the object. Although these mechanisms may enable the user to verify that the object fits the user's requirements, they also introduce additional quality complications: what guarantee is there that the full version of the object and the restricted version behave identically?
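
A minimal sketch of how such divergence can arise (the component names and the trial limit are invented for illustration): a restricted evaluation version that silently caps its input behaves identically to the full version on small evaluation data, but differently on production-sized data.

    import java.util.Arrays;

    // Minimal sketch (hypothetical names): anything a trial wrapper changes --
    // here, a silent cap on input size -- is behaviour the purchaser never
    // gets to verify before committing to purchase.
    public class TrialVersionDemo {

        interface Sorter {
            int[] sort(int[] data);
        }

        /** The full product: sorts whatever it is given. */
        static class FullSorter implements Sorter {
            public int[] sort(int[] data) {
                int[] copy = data.clone();
                Arrays.sort(copy);
                return copy;
            }
        }

        /** The evaluation version: same interface, but silently truncates input. */
        static class TrialSorter implements Sorter {
            private static final int TRIAL_LIMIT = 100;
            private final Sorter full = new FullSorter();

            public int[] sort(int[] data) {
                int[] capped = Arrays.copyOf(data, Math.min(data.length, TRIAL_LIMIT));
                return full.sort(capped);
            }
        }

        public static void main(String[] args) {
            int[] small = {3, 1, 2};
            int[] large = new int[500]; // beyond the trial limit
            Sorter trial = new TrialSorter();
            Sorter full = new FullSorter();
            // Identical on the purchaser's small evaluation data...
            System.out.println(Arrays.equals(trial.sort(small), full.sort(small))); // true
            // ...but not on production-sized data.
            System.out.println(trial.sort(large).length == full.sort(large).length); // false
        }
    }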

The supply chain may be complex. Departmental software specialists will be using desktop tools for rapid assembly of software applications from existing components from many sources. There will be few if any software developers who develop everything from scratch; instead, even apparently original objects may contain smaller components from other sources.


There are three approaches to quality assurance:

    1. Product certification. An independent party conducts a limited exercise in verification, validation and/or test of the software artefact.
    2. Process audit. An independent party conducts an assessment of the development process used to design, build and deliver the software artefact.
    3. User satisfaction. Analysis of the actual behaviour of the software artefact in use.

In an open distributed world, all of these are problematic. Product testing is only as good as the test data; process assessment may be bureaucratic and based on irrelevant criteria; user satisfaction may be subjective. In any case, all are incomplete and inconclusive.


Only closed systems can be tested. A test is a controlled scientific experiment. A test must be repeatable. Scientists in laboratories invent controlled (repeatable) experiments that are believed to provide useful information about the uncontrolled (unrepeatable) world.

Tests of open systems are artificial or partial. In order to carry out any meaningful tests on an open system, artificial constraints must be placed on it which make it closed for the purposes of the test. This means that no uncontrolled inputs are allowed, which might distort the behaviour of the test and make it unrepeatable.

Tests of components intended for open systems are artificial or partial. Components can be tested in two ways.

  1. Either an artificial system environment is constructed for the purposes of the test (known as a test harness); a sketch of such a harness follows this list. Obviously such a test is artificial.
  2. Or the component is tested within one or more of the systems for which the component was designed. Such a system must be closed for the purposes of the test, which makes the test artificial. Furthermore, the component is only tested on a subset of the possible systems for which it might be used in future, which makes the test partial.
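
Here is a minimal sketch of the first approach (the component, its interface and the stubbed values are invented for illustration). The harness closes the system by stubbing the component's entire environment, which is precisely what makes the test repeatable, and precisely what makes it artificial.

    // Minimal sketch (hypothetical component and harness): the component under
    // test sees the outside world only through the Environment interface, so
    // the harness can close the system by stubbing it.
    public class HarnessDemo {

        /** Everything the component is allowed to see of the outside world. */
        interface Environment {
            double exchangeRate(String from, String to);
        }

        /** The component under test: converts an amount using the environment. */
        static class CurrencyConverter {
            private final Environment env;

            CurrencyConverter(Environment env) {
                this.env = env;
            }

            double convert(double amount, String from, String to) {
                return amount * env.exchangeRate(from, to);
            }
        }

        public static void main(String[] args) {
            // The harness: a fixed, repeatable stand-in for the open world.
            // No uncontrolled inputs are allowed -- repeatable, and artificial.
            Environment stub = (from, to) -> 1.5; // every rate is 1.5, always
            CurrencyConverter converter = new CurrencyConverter(stub);

            double result = converter.convert(100.0, "GBP", "USD");
            if (result != 150.0) {
                throw new AssertionError("expected 150.0 but got " + result);
            }
            System.out.println("Test passed under the stubbed environment.");
        }
    }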


The development process is itself an open system. Design and development is distributed across multiple organizations and management domains. Although process standards such as ISO 9000 insist on proper controls for externally supplied software components, in practice these controls are usually hopelessly inadequate.

Some organizations (especially Government departments and their favoured suppliers) maintain a formal procurement process, whose intention is to improve the quality (fitness-for-purpose) and value-for-money of the procured solutions and artefacts. Such processes attempt to reduce the uncertainty of the development process, but usually the effect is merely to displace the uncertainty and encourage game-playing by contractors. And whenever the bureaucratic controls fail to deliver quality and value-for-money - which they frequently do - the bureaucrats will try to invent more sophisticated controls.


We can test an artefact against its specification, but not against its requirements

Under some special circumstances, it is possible to carry out a completely definitive test to demonstrate that a given artefact completely satisfies a given (formal) specification.

However, this does not prove that the artefact actually meets the user's stated or implied needs.
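
One such special circumstance is a finite input domain, where every possible input can be tried. The following sketch (the function and its specification are invented for illustration) demonstrates conformance to a formal specification exhaustively, while proving nothing about whether that specification matches the user's actual needs.

    // Minimal sketch (hypothetical example): where the input domain is finite,
    // an artefact can be tested exhaustively against a formal specification.
    // Here the artefact claims to clamp any short value into the range [0, 100].
    public class ExhaustiveTestDemo {

        /** The artefact under test. */
        static int clamp(short x) {
            if (x < 0) return 0;
            if (x > 100) return 100;
            return x;
        }

        /** The formal specification, written as a checkable predicate. */
        static boolean meetsSpec(short x, int result) {
            boolean inRange = result >= 0 && result <= 100;
            boolean identityInside = (x < 0 || x > 100) || result == x;
            return inRange && identityInside;
        }

        public static void main(String[] args) {
            // All 65,536 possible inputs: the test is definitive for this spec.
            for (int i = Short.MIN_VALUE; i <= Short.MAX_VALUE; i++) {
                short x = (short) i;
                if (!meetsSpec(x, clamp(x))) {
                    throw new AssertionError("spec violated at " + x);
                }
            }
            System.out.println("Artefact provably satisfies the specification.");
            // But nothing here shows that [0, 100] was what the user needed:
            // if the actual requirement was [0, 255], the artefact still 'passes'.
        }
    }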

A statement of requirements is a description which an object must satisfy for its actual use, for a given purpose, in a given context. We call these the actual requirements. When developing an object for reuse, however, the developer usually does not have access to the complete set of concrete requirements. Instead, the developer attempts to build reusable objects by working against a generalized statement of requirements that is hoped to cover a reasonable range of actual requirements. Carrying out QA against a generalized statement of requirements, however, raises the question: to what extent will the developer's generalized notion of the users' requirements match the users' actual requirements?


Software artefacts have six main characteristics to be tested.

ISO 9126 defines the characteristics of software quality as follows:

  1. Functionality
  2. Reliability
  3. Usability
  4. Efficiency
  5. Maintainability
  6. Portability

A complete test of a software artefact should cover all of these characteristics.


Open distributed processing changes the software process

These changes affect at least three important aspects of software quality assurance: design reviews, testing and configuration control.

The developer of a software artefact typically has incomplete knowledge of how it will be used, when, where, by whom, in what contexts, for what (business) purposes. Nobody has the complete picture: not the software tester, nor any individual end-user, nor any intermediary (planner, publisher, broker/trader, librarian, …).

Although often there is no fixed specification of what the software artefact is required to do, some limited testing is possible: for example, the artefact can be tested against its own interface or formal specification, or exercised within a test harness that simulates one of its intended contexts.

Testing is more difficult, and its results less conclusive, when it addresses the compatibility of multiple objects and artefacts. This is complicated by the fact that we do not always know what compatibility is required. The difficulty is related to the problem of feature interaction, illustrated in the sketch below.
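
A minimal sketch of feature interaction (both components are invented for illustration): each filter is correct against its own specification, yet the composed behaviour depends on the order of composition.

    import java.util.function.UnaryOperator;

    // Minimal sketch (hypothetical components): two independently correct
    // filters whose composition is order-sensitive -- a small instance of
    // the feature-interaction problem.
    public class FeatureInteractionDemo {

        // Component A: enforces a maximum message length of 10 by truncating.
        static final UnaryOperator<String> TRUNCATE =
                s -> s.length() <= 10 ? s : s.substring(0, 10);

        // Component B: appends a required audit marker to every message.
        static final UnaryOperator<String> AUDIT_MARK = s -> s + "#A";

        public static void main(String[] args) {
            String msg = "confidential report";

            // Each component meets its own specification in isolation.
            // Composed one way, the audit marker survives:
            String ok = AUDIT_MARK.apply(TRUNCATE.apply(msg));
            // Composed the other way, truncation silently removes it:
            String broken = TRUNCATE.apply(AUDIT_MARK.apply(msg));

            System.out.println(ok);     // "confidenti#A"
            System.out.println(broken); // "confidenti" -- audit requirement violated
        }
    }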


Quality Assurance is impossible

It is left as an exercise for the reader to prove that quality assurance is doomed to fail, that the software component can never be definitively certified, that there is always more work to do.


This document is partially based on a paper for AQuIS 96: Third International Conference on Achieving Quality in Software (IFIP WG 5.4) (Florence: January 1996).

Please send feedback to the author. For related material, please visit the veryard projects home page.


This page last updated on February 20th, 1997.

Copyright © 1996, 1997 Richard Veryard