The user or purchaser of a software component wants to know:
A responsible developer of a software component wants to know:
But what does 'responsible' mean in this context? Responsible to whom? In an open distributed world, there is no direct contact or contract between the developer and the user, and no mechanism for establishing trust and accountability.
The quality of the solution is defined as the totality of characteristics of the solution that bear on its ability to meet the user's stated or implied needs. A solution is a collaborating system of artefacts, put in some context for some purpose, and its quality must be assessed holistically.
It is not the job of software testing or quality assurance to attribute blame. If a software component performs well on platform XX, and performs poorly on platform YY, this is a useful piece of information concerning the total system or solution, which may be used to decide what is to be done about the component or about the platform, or both. Although in a given situation, it may be possible to force somebody to take responsibility for fixing one of the subsystems, this possibility arises from the political or commercial context, and is not driven purely by the logic of quality.
Software testing and quality assurance are processes of discovering the true quality of the solution. The process of discovering the true level of quality will often bring about improvements in quality - indeed, this is often the main reason for undertaking the process. But there is no end to the process of discovery. There is no point at which we can say: we now know everything there is to be known about the quality of this solution.
The solution consists of one or many artefacts in some context. What we are really interested in is the quality of the whole solution. It doesn't make any sense to talk about the quality of a single artefact as a stand-alone thing, independent of any particular context. There is no absolute (context-free) measure of quality.
If we want high levels of software reuse, this usually means lots of different contexts. A software component may be used for many different applications, in different business and technical environments, by different developers using different methods and tools for different users in different organizations.
It is all very well using a reusable component from a catalog or library, on the assumption that someone else has rigorously tested it. But what if they haven't? And if they have, who is to say that their testing is relevant to your intended use of the component?
An effective process of reuse relies on adequate and accurate descriptions of the objects. Indeed, where reuse crosses commercial boundaries, procurement decisions may be based solely on the description: a purchaser may only get the rest of the object after accepting certain payment obligations.
But there may be semantic subtleties or ambiguities in these descriptions. For an illustration of this, consider a business object called FIXED ASSET that implements a particular interpretation of a particular accounting convention to evaluate fixed assets. An incorrect choice of accounting convention may significantly affect the balance sheet. Therefore, a quality evaluation of a business object must address its description (or catalog entry) as well as the quality of any executable code. Quality may be improved by clarifying or extending the description. But the perfect description may never be achievable.
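The effect of the convention choice can be made concrete with a small sketch (Python is used purely for illustration; the asset figures, function names and the two conventions are assumptions, not taken from any particular FIXED ASSET object):

```python
# Hypothetical sketch: the same asset valued under two depreciation
# conventions yields different book values, hence a different balance sheet.

def straight_line(cost: float, salvage: float, life_years: int, year: int) -> float:
    """Book value after `year` years under straight-line depreciation."""
    annual = (cost - salvage) / life_years
    return max(cost - annual * year, salvage)

def double_declining(cost: float, salvage: float, life_years: int, year: int) -> float:
    """Book value after `year` years under double-declining-balance depreciation."""
    rate = 2.0 / life_years
    value = cost
    for _ in range(year):
        value = max(value * (1.0 - rate), salvage)
    return value

cost, salvage, life = 100_000.0, 10_000.0, 10
for year in (1, 3, 5):
    print(f"year {year}: straight-line {straight_line(cost, salvage, life, year):,.0f}, "
          f"declining-balance {double_declining(cost, salvage, life, year):,.0f}")
```

The two conventions diverge substantially within a few years; this is exactly the kind of semantic subtlety that a catalog description must capture if the purchaser is to choose correctly.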
Various mechanisms are used in the software industry to allow the purchaser to try out a business object before committing to purchase. These mechanisms often involve giving the purchaser access to a restricted version of the object. Although these mechanisms may enable the user to verify that the object fits the user's requirements, they also introduce additional quality complications: what guarantee is there that the full version of the object and the restricted version behave identically?
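One way to make this complication concrete is differential testing: running the restricted and full versions side by side on the inputs the trial permits. A minimal sketch in Python, assuming a hypothetical discount component (the names, tiers and trial cap are all invented for illustration):

```python
# Hypothetical sketch: differential testing of a "trial" (restricted) version
# against the full version. Agreement on the restricted input domain gives no
# guarantee about behaviour outside it -- which is exactly the quality gap.

def full_discount(order_total: float) -> float:
    """Full version: tiered discount (rules are illustrative)."""
    if order_total >= 1000:
        return order_total * 0.10
    if order_total >= 100:
        return order_total * 0.05
    return 0.0

def trial_discount(order_total: float) -> float:
    """Trial version: artificially capped at orders of 500."""
    if order_total > 500:
        raise ValueError("trial version: orders above 500 not supported")
    return full_discount(order_total)

# Differential test over the restricted domain only.
for total in (0, 50, 100, 250, 500):
    assert trial_discount(total) == full_discount(total)

# The tier at 1000 is never exercised by the trial, so the purchaser
# cannot verify it before buying the full version.
```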
The supply chain may be complex. Departmental software specialists will be using desktop tools for rapid assembly of software applications from existing components from many sources. There will be few if any software developers who develop everything from scratch; instead, even apparently original objects may contain smaller components from other sources.
In an open distributed world, all of these are problematic. Product testing is only as good as the test data; process assessment may be bureaucratic and based on irrelevant criteria; user satisfaction may be subjective. In any case, all are incomplete and inconclusive.
Only closed systems can be tested. A test is a controlled scientific experiment. A test must be repeatable. Scientists in laboratories invent controlled (repeatable) experiments that are believed to provide useful information about the uncontrolled (unrepeatable) world.
Tests of open systems are artificial or partial. In order to carry out any meaningful tests on an open system, artificial constraints must be placed on it which make it closed for the purposes of the test. This means that no uncontrolled inputs are allowed, which might distort the behaviour of the test and make it unrepeatable.
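The act of closing a system for test purposes can be sketched as dependency injection: uncontrolled inputs (the wall clock, a random source) are replaced by controlled stand-ins. A hypothetical Python example (the token format and function names are assumptions):

```python
# Hypothetical sketch: "closing" an open system for the purposes of a test.
# A component that reads the wall clock and a random source is unrepeatable;
# injecting controlled stand-ins makes the test a repeatable (closed)
# experiment -- at the price of artificiality.

import random
from datetime import datetime, timezone

def session_token(clock, rng) -> str:
    """Build a token from the current time and a random nonce.
    `clock` and `rng` are injected so that tests can control them."""
    now = clock()
    nonce = rng.randrange(16 ** 8)
    return f"{now:%Y%m%d%H%M%S}-{nonce:08x}"

# Open (uncontrolled) use: every call differs.
live = session_token(lambda: datetime.now(timezone.utc), random.Random())

# Closed (controlled) test: fixed clock, seeded generator -- fully repeatable.
fixed_clock = lambda: datetime(1996, 1, 1, 12, 0, 0)
assert session_token(fixed_clock, random.Random(42)) == \
       session_token(fixed_clock, random.Random(42))
```

The controlled test is repeatable precisely because nothing uncontrolled can reach it; by the same token, it no longer exercises the conditions of real use.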
Tests of components intended for open systems are artificial or partial. Components can be tested in two ways: in a controlled test harness, which is artificial because it closes a system that will be open in use; or in a sample of actual contexts, which is partial because no sample of contexts can cover all possible uses.
The development process is itself an open system. Design and development is distributed across multiple organizations and management domains. Although process standards such as ISO 9000 insist on proper controls for externally supplied software components, in practice these controls are usually hopelessly inadequate.
Some organizations (especially Government departments and their favoured suppliers) maintain a formal procurement process, whose intention is to improve the quality (fitness-for-purpose) and value-for-money of the procured solutions and artefacts. Such processes attempt to reduce the uncertainty of the development process, but usually the effect is merely to displace the uncertainty and to encourage game-playing by contractors. And whenever the bureaucratic controls fail to deliver quality and value-for-money - which they frequently do - the bureaucrats will try to invent more sophisticated controls.
Under some special circumstances, it is possible to carry out a completely definitive test to demonstrate that a given artefact completely satisfies a given (formal) specification.
However, this does not prove that the artefact actually meets the user's stated or implied needs.
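A sketch of the special case, using a deliberately tiny example: when the input domain is finite and small, an implementation can be checked exhaustively against a formal specification. Even then, the test demonstrates conformance to the specification, not fitness for any user's actual needs.

```python
# Hypothetical sketch: exhaustive testing against a formal specification
# over a finite input domain -- as close to a definitive test as testing gets.

def spec_leap_year(y: int) -> bool:
    """Formal specification (the Gregorian rule)."""
    return (y % 4 == 0 and y % 100 != 0) or (y % 400 == 0)

def impl_leap_year(y: int) -> bool:
    """Implementation under test."""
    if y % 400 == 0:
        return True
    if y % 100 == 0:
        return False
    return y % 4 == 0

# Definitive with respect to this finite domain: every input is tried.
assert all(impl_leap_year(y) == spec_leap_year(y) for y in range(1, 10_000))
```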
A statement of requirements is a description which an object must satisfy for its actual use, for a given purpose, in a given context. We call these the actual requirements. When developing an object for reuse, however, the developer usually does not have access to the complete set of concrete requirements. Instead, the developer attempts to build reusable objects by working against a generalized statement of requirements that is hoped to cover a reasonable range of actual requirements. Carrying out QA against a generalized statement of requirements, however, begs the question: to what extent will the developer's generalized notion of the users' requirements match the users' actual requirements?
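The gap between generalized and actual requirements can be illustrated with a hypothetical sorting component (Python; all names are invented). It passes QA against the generalized requirement "output is sorted", yet fails one user's actual requirement that equal keys keep their original order (stability):

```python
# Hypothetical sketch: a component passes the developer's generalized
# requirements yet fails a particular user's actual requirement.

def generalized_spec_ok(pairs, result) -> bool:
    """Generalized requirement: result is sorted by key and is a
    rearrangement of the input."""
    keys = [k for k, _ in result]
    return sorted(keys) == keys and sorted(result) == sorted(pairs)

def component_sort(pairs):
    """Sorts by (key, value); for equal keys this reorders by value
    rather than preserving input order -- i.e. it is not stable."""
    return sorted(pairs, key=lambda kv: (kv[0], kv[1]))

data = [(1, "b"), (1, "a")]
out = component_sort(data)

assert generalized_spec_ok(data, out)   # passes the generalized QA...
assert out != data                      # ...but reorders equal keys:
# the user whose actual requirement is stability finds (1, "a")
# promoted ahead of (1, "b").
```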
ISO 9126 defines the characteristics of software quality as follows: functionality, reliability, usability, efficiency, maintainability and portability.
A complete test of a software artefact should cover all of these characteristics.
These changes affect at least three important aspects of software quality assurance: design reviews, testing and configuration control.
The developer of a software artefact typically has incomplete knowledge of how it will be used, when, where, by whom, in what contexts, for what (business) purposes. Nobody has the complete picture: not the software tester, nor any individual end-user, nor any intermediary (planner, publisher, broker/trader, librarian, …).
Although often there is no fixed specification of what the software artefact is required to do, some limited testing is possible, for example:
Where testing is more difficult, and its results less conclusive, is in the compatibility of multiple objects and artefacts. This is complicated by the fact that we don't always know what compatibility is required. This is related to the problem of feature interaction.
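The feature interaction problem can be sketched with the classic telephony example (hypothetical code; the routing rules are invented for illustration). Do-not-disturb and call forwarding are each unambiguous alone, but their combination depends on which feature is evaluated first:

```python
# Hypothetical sketch of feature interaction: two features, each correct in
# isolation, interact ambiguously when combined.

def route(callee: str, dnd: set, forward: dict, order: tuple) -> str:
    """Route an incoming call, applying features in the given order."""
    for feature in order:
        if feature == "dnd" and callee in dnd:
            return "rejected"
        if feature == "forward" and callee in forward:
            callee = forward[callee]
    return f"ring {callee}"

dnd = {"alice"}
forward = {"alice": "bob"}

# Each feature is individually unambiguous...
assert route("alice", dnd, {}, ("dnd", "forward")) == "rejected"
assert route("alice", set(), forward, ("dnd", "forward")) == "ring bob"

# ...but together, the outcome depends on the order of evaluation:
assert route("alice", dnd, forward, ("dnd", "forward")) == "rejected"
assert route("alice", dnd, forward, ("forward", "dnd")) == "ring bob"
```

Neither outcome is wrong with respect to either feature's individual specification; the required compatibility was simply never specified.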
It is left as an exercise for the reader to prove that quality assurance is doomed to fail, that the software component can never be definitively certified, that there is always more work to do.
This document is partially based on a paper for AQuIS 96: Third International Conference on Achieving Quality in Software (IFIP WG 5.4) (Florence: January 1996).
Please send feedback to the author. For related material, please visit the veryard projects home page.
This page last updated on February 20th, 1997 using Netscape Navigator Gold.
Copyright © 1996, 1997 Richard Veryard