Quality Assurance of Distributed Software Components
Levels of confidence in software quality vary widely, from misplaced complacency at one end of the spectrum to excessive caution at the other. The purpose of quality assurance is to establish reasonable and realistic levels of confidence. Confidence is associated with predictability, stability and trust. But technology trends, especially object reuse and open distributed processing, appear to erode the factors that lead to confidence in software quality. This is the challenge addressed by this paper.
If software users want to take advantage of unknown objects from anonymous sources, how confident can they reasonably be that these objects are fit-for-purpose? What assumptions can the users make about such objects? Frakes and Fox (1995) argue that quality concerns do not currently inhibit reuse, but they indicate that this situation may change. Meanwhile, what quality checks can and should a developer perform before submitting an object for publication and dissemination, which may result in the object's being used in unanticipated contexts and for unanticipated purposes? What kinds of systems and relationships are necessary to attain the maximum reasonable level of confidence in software quality, without unduly restricting technological progress?
Table 1 Stakeholder concerns motivating quality assurance

User concerns | Developer concerns
This paper argues that full responsibility for quality can be taken neither by the developer nor by the user, nor by any intermediate party (such as a broker or software publisher). Thus effective quality assurance needs to focus on the roles, responsibilities and relationships between the various stakeholders in the object delivery chain.
The paper is in two parts. The first part states the problem: it describes quality assurance as a way of acquiring reliable knowledge about the quality of objects, and indicates how reuse cuts across our traditional ways of knowing. The second part indicates the solution: the application of responsibility modelling techniques to determine structures for the sharing of responsibilities across the object delivery chain.
The paper is derived from, and is intended to demonstrate the connections between, Texas Instruments' recent work in three separate areas:
In common with most other attempts to gain knowledge, quality assurance cannot avoid affecting the entities about which it attempts to gain knowledge. Any attempt to discover the degree of quality of an entity may bring about an improvement in the quality of that entity. Indeed, such improvement is often seen as the primary justification of quality assurance.
Conversely, any prediction of the quality of an entity should take into account the extent of quality assurance that may be applicable. In particular, if a software developer has access to quality assurance mechanisms that make the emerging quality of a software artefact visible during the development process (other conditions being favourable), this should reduce the occurrence of defects in the developed artefact.
Quality assurance can focus on three areas:
Although some software product standards exist, and further standards are being developed, these only address a limited subset of the desired quality characteristics.

Quality is commonly (and properly) defined in fit-for-purpose terms. The official ISO definition (ISO 8402, 1994) defines quality as "The totality of characteristics of an entity that bear on its ability to satisfy stated or implied needs." (This is not a closed definition: it allows for some uncertainty about when the needs are stated, and what they may be implied by.) To be valid and useful, quality assurance must find some way of addressing the stated and implied needs. Any attempt to construct a 'pure' quality assurance, independent of user context or purpose, would necessarily fall short.
Similar arguments apply to software quality metrics. Although generic software quality measurements have been attempted, often copied from the equivalent hardware quality measurements (e.g. Mean-Time-Between-Failures, Mean-Time-To-Fail), software quality of service is often better expressed in application-dependent terms. For example, mission time is defined as the time during which a component is to stay operational with a precisely defined probability of failure (Siewiorek & Swarz, 1992).
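To make the metric concrete, here is a worked version under the textbook assumption of a constant failure rate λ (an assumption of this illustration, not a claim about the cited source): the reliability function is then exponential, and mission time follows directly from the required probability of remaining operational.

```latex
R(t) = e^{-\lambda t} \ge p
\qquad\Longrightarrow\qquad
t_{\text{mission}} = \frac{-\ln p}{\lambda}
```

For example, with λ = 10⁻⁴ failures per hour and a required survival probability p = 0.99, the mission time is −ln(0.99)/10⁻⁴ ≈ 100.5 hours.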
Within the traditional software development paradigm (waterfall development, central design authority, fixed requirements, or what has been ironically called the 'fixed point theorem' (Paul 1993)), the evident ambiguities of the ISO 8402 definition of quality are usually glossed over. Quality assurance is carried out against a fixed set of requirements of a fixed user or user group.
ISO 9126 offers a standard framework for evaluating the quality of software products (ISO 9126, 1991). Of the six standard characteristics, four are defined in terms of stated requirements (reliability, efficiency, maintainability and portability), while the other two are defined in terms of stated or implied requirements (functionality and usability). Reusability does not appear explicitly in the ISO 9126 model, but it can probably be regarded as a special type of usability, namely usability-by-developers.
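The classification in the previous paragraph can be made explicit. A minimal sketch (the six characteristics are ISO 9126's; the data structure itself is ours, purely illustrative):

```python
# ISO 9126 quality characteristics, classified by the kind of
# requirement against which each is defined (per the paragraph above).
ISO_9126 = {
    "functionality":   "stated or implied requirements",
    "usability":       "stated or implied requirements",
    "reliability":     "stated requirements",
    "efficiency":      "stated requirements",
    "maintainability": "stated requirements",
    "portability":     "stated requirements",
}

# The characteristics that remain open-ended, because implied
# requirements only surface in actual use:
open_ended = [c for c, basis in ISO_9126.items() if "implied" in basis]
print(open_ended)  # ['functionality', 'usability']
```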
A statement of requirements is a description which an object must satisfy for its actual use, for a given purpose, in a given context. We call these the actual requirements. When developing an object for reuse, however, the developer usually does not have access to the complete set of concrete requirements. Instead, the developer attempts to build reusable objects by working against a generalized statement of requirements that is hoped to cover a reasonable range of actual requirements. Carrying out QA against a generalized statement of requirements, however, begs the question: to what extent will the developer's generalized notion of the users' requirements match the users' actual requirements? (A small invented illustration follows.)
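As a hypothetical illustration of this gap (the component and both sets of requirements are invented): a developer tests a reusable averaging routine against the generalized requirement 'a sequence of numbers', while one user's actual context supplies a case the generalized statement never mentioned.

```python
def mean(values):
    """Reusable component: arithmetic mean of a sequence of numbers."""
    return sum(values) / len(values)

# Developer's QA against the generalized requirement
# "a sequence of numbers": all tests pass.
assert mean([1, 2, 3]) == 2
assert mean([10.0]) == 10.0

# One user's actual context: averaging today's transactions, which may
# be an empty list - a case the generalized requirement never stated.
try:
    mean([])
except ZeroDivisionError:
    print("fit for the imagined purpose, not for this actual purpose")
```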
Thus the problem with restricting our notions of quality (and therefore our notions of quality assurance) to the formally stated requirements is twofold:
As requirements engineering and information systems engineering become increasingly dynamic and decentralized, quality assurance itself needs to become equally dynamic, equally decentralized. The acquisition of knowledge about quality becomes more problematic than ever: when (if ever) and from what standpoint (if any) can quality be assured? To examine these issues in more detail, let us look at two major themes in the breakdown of the traditional software development paradigm:
The Object Management Group special interest group on Business Object Management (BOMSIG) has developed the following working definition of 'business object':
A business object might be something internal to the enterprise, such as an Employee or a Department, or something external to the enterprise, such as a Supplier or Tax Demand. Business objects may be conceptual, in which case we may think of them as business or design patterns (Alexander et al 1977), but they may also be implemented in executable software. In any case, the dividing line between conceptual objects and executable software is tenuous (Negishi 1985, Locksley 1991).

The potential benefits of software reuse are certainly considerable, not merely for enhanced productivity but also for enhanced quality. Much has been said (in passing) about the impact of object-oriented techniques on quality:
However, although the claimed productivity benefits of reuse have been extensively analysed (Veryard 1991), little empirical evidence is available to support the claimed quality benefits.

The attainment of these benefits is not just a technical matter. Benefits may be lost by poor project management, or by lack of engineering discipline. This is where quality management comes in. It provides a systematic framework underpinning good management and engineering practices, thereby providing reasonable assurance to all stakeholders (including end-users and senior management) that the potential benefits of a given technology will indeed be forthcoming.
For example, it is all very well using a reusable component from a catalog or library, on the assumption that someone else has rigorously tested it. But what if they haven't? And if they have, who is to say that their testing is relevant to your intended use of the component?
An effective process of reuse relies on adequate and accurate descriptions of the objects. Indeed, where reuse crosses commercial boundaries, procurement decisions may be based solely on the description: a purchaser may only get the rest of the object after accepting certain payment obligations.
But there may be semantic subtleties or ambiguities in these descriptions. For an illustration of this, consider a business object called FIXED ASSET that implements a particular interpretation of a particular accounting convention to evaluate fixed assets. An incorrect choice of accounting convention may significantly affect the balance sheet. Therefore, a quality evaluation of a business object must address its description (or catalog entry) as well as the quality of any executable code. Quality may be improved by clarifying or extending the description. But the perfect description may never be achievable.
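To make the FIXED ASSET example concrete (the function and figures below are invented for illustration; they come from no actual catalog): the same object, parameterized with two common depreciation conventions, reports materially different book values for the same asset.

```python
def book_value(cost: float, residual: float, life_years: int,
               age_years: int, convention: str) -> float:
    """Book value of a fixed asset under a named depreciation convention.

    Illustrative only: a real accounting object encodes many more rules.
    """
    if convention == "straight-line":
        annual = (cost - residual) / life_years
        return max(cost - annual * age_years, residual)
    if convention == "declining-balance":
        rate = 2.0 / life_years            # double-declining variant
        value = cost
        for _ in range(age_years):
            value -= value * rate
        return max(value, residual)
    raise ValueError(f"unknown convention: {convention}")

# Same asset, same catalog description ("computes book value"), two
# materially different balance-sheet figures after three years:
print(book_value(10_000, 1_000, 10, 3, "straight-line"))      # 7300.0
print(book_value(10_000, 1_000, 10, 3, "declining-balance"))  # 5120.0
```

An incorrect, or merely ambiguous, catalog description of which convention the object implements could therefore propagate straight into the balance sheet.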
Various mechanisms are used in the software industry to allow the purchaser to try out a business object before committing to purchase. These mechanisms often involve giving the purchaser access to a restricted version of the object. Although these mechanisms may enable the user to verify that the object fits the user's requirements, they also introduce additional quality complications: what guarantee is there that the full version of the object and the restricted version behave identically?
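A hypothetical sketch of why that guarantee is hard to give (the flag and the divergence are invented): if the restricted build takes a different code path, trial-based evaluation exercises a program that is not the one eventually purchased.

```python
TRIAL_MODE = True  # restricted evaluation build

def price_order(quantity: int, unit_price: float) -> float:
    if TRIAL_MODE:
        # Restricted version: evaluation cap of 10 items per order.
        quantity = min(quantity, 10)
        return quantity * unit_price
    # Full version: volume discount - a path the trial never executes.
    total = quantity * unit_price
    if quantity > 100:
        total *= 0.9
    return total

# The purchaser's trial can only ever validate the capped path; the
# discount logic of the full version remains unexercised.
print(price_order(500, 2.0))  # trial: 20.0; the full version would give 900.0
```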
The supply chain may be complex. Departmental software specialists will be using desktop tools for rapid assembly of software applications from existing components from many sources. There will be few if any software developers who develop everything from scratch; instead, even apparently original objects may contain smaller components from other sources.
Table 2 Key features of Open Distributed Processing

Federation | The lack of central authority over software design or configuration.
Interoperability | The ability to link and reconfigure systems and services.
Heterogeneity | The ability to link across different platforms and protocols.
Transparency | The ability to hide complications from users.
Trading / Broking | The presence of intermediary agents, to promote and distribute software artefacts and services.
The relevance of ODP to this paper is that ODP further undermines the assumptions on which traditional software quality assurance is based. End-to-end knowledge and authority cannot be guaranteed; information flows must be negotiated across organizational boundaries.
The developer of a software artefact typically has incomplete knowledge of how it will be used, when, where, by whom, in what contexts, for what (business) purposes. Nobody has the complete picture: not the software tester, nor any individual end-user, nor any intermediary (planner, publisher, broker/trader, librarian, …).
Although there is often no fixed specification of what the software artefact is required to do, some limited testing is possible, for example:
Where testing is more difficult, and its results less conclusive, is in the compatibility of multiple objects and artefacts. This is complicated by the fact that we don't always know what compatibility is required; this is related to the problem of feature interaction, sketched below.
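As an invented illustration of feature interaction (the mail-handling scenario is a classic of the genre, not taken from this paper): two features, each correct in isolation, combine into a loop that no single-feature test would reveal.

```python
# Two mail-handling features, each individually correct.

def vacation_autoreply(inbox_owner: str, message: dict) -> dict:
    """Feature A: reply to the sender while the owner is away."""
    return {"from": inbox_owner, "to": message["from"],
            "body": "I am away until Monday."}

def auto_forward(inbox_owner: str, message: dict, assistant: str) -> dict:
    """Feature B: forward incoming mail to an assistant."""
    return {"from": inbox_owner, "to": assistant, "body": message["body"]}

# Composed, the features interact: Alice forwards to Bob; Bob is away.
msg = {"from": "carol@example.com", "body": "Hello Alice"}
hop1 = auto_forward("alice@example.com", msg, "bob@example.com")
hop2 = vacation_autoreply("bob@example.com", hop1)
hop3 = auto_forward("alice@example.com", hop2, "bob@example.com")
print(hop3["to"])  # bob@example.com - and round the loop it goes again
```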
In general, there are two approaches to organizing collective responsibilities.

A homogeneous closed group is a stable network based on informal commitments (trust).
In some situations, responsibility can be shared by creating a stable group or team.
The members of the group identify with the group. This has two aspects:
A heterogeneous open group is a dynamic network based on formal commitments (contract).

In situations where stable teams cannot develop (or be built), formal structures of individual duties need to be established, to enable responsibilities to be shared.
This tends to be necessary under the following conditions (Elster 1978, pp 134 ff):
Because of its tilt towards heterogeneity, open distributed processing forces us to consider the latter approach.

We define responsibility as a three-part relationship, involving two agents and a state. A state is typically a property of an entity; remember that an entity may be an activity or a process, a product, an organization, a system or a person, or any combination thereof.
Example: The software developer is responsible to the software publisher for the thoroughness of testing.
In this example, the software developer and the software publisher are the two agents, and thoroughness of testing is the state, which can be decomposed into a property (thoroughness) and an entity (the testing activity).
One of the reasons we are interested in the thoroughness of testing is that it is a way of gaining confidence in the robustness of the tested object(s). There are thus dependencies between states (robustness is dependent upon thoroughness), which can be represented as dependency hierarchies. The robustness of the tested object(s) is thus a state at the next level in the hierarchy.
Responsibilities can be delegated, but not escaped. Thus if the software developer delegates the responsibility for the thoroughness of testing to a third party, the software developer remains answerable to the software publisher. Meanwhile the software publisher is (let us suppose) responsible to the software user for the robustness of the tested object, which (as we have seen) is dependent upon the thoroughness of testing.
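A minimal sketch of the three-part relation and the delegation rule, using the paper's own example (the class and the third party are ours, invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Responsibility:
    """Three-part relation: one agent is responsible to another for a state."""
    responsible: str        # the agent who holds the responsibility
    answerable_to: str      # the agent to whom it is owed
    state: str              # a property of an entity, e.g. "thoroughness of testing"
    delegations: list["Responsibility"] = field(default_factory=list)

    def delegate(self, third_party: str) -> "Responsibility":
        """Delegation creates a new relation; the original is not escaped.
        The third party answers to the delegator, while the delegator
        remains answerable to the original agent."""
        sub = Responsibility(third_party, self.responsible, self.state)
        self.delegations.append(sub)
        return sub

# The paper's example:
r = Responsibility("software developer", "software publisher",
                   "thoroughness of the testing")
r.delegate("independent test house")  # hypothetical third party

# The developer remains answerable to the publisher:
print(r.responsible, "->", r.answerable_to)  # unchanged by the delegation
```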
Current thinking in both ODP and quality management agrees that delegation should be (formally) 'transparent': in other words, the original source(s) should be invisible to the user or customer. If I purchase an object or service from a software publisher, I don't want to have to chase up the supply chain to get satisfaction. (In practice, of course, the procurement organization often does want to know about the original source, and so does not want complete transparency of supply; but this may be a consequence of incomplete trust in the vendor.)
Responsibility modelling is a technique for mapping out these delegation structures for specific responsibilities across multiple agents. Delegation structures may be informal or formal, hierarchical or contractual. Delegation structures should be reflected in contracts and agreements between agents.
Delegation structures should also be reflected in information systems. According to good delegation practice, if the software developer delegates the responsibility for the thoroughness of testing to a third party, the software developer must have some mechanism for monitoring and assessing the third party. (This is of course demanded by ISO 9000.) There must therefore be an information system (not necessarily computerized) that enables the software developer to continue to provide the necessary assurance to the software librarian, so that the delegation can remain (in principle) transparent to the users.
Previous techniques for responsibility modelling, such as RAEW analysis (Crane 1986, Texas Instruments 1990), have regarded responsibility as a two-place relation between an agent (or role) and an activity. This has proved extremely useful for identifying inconsistencies between responsibility structures and authority structures, or between responsibility structures and the distribution of knowledge. What the present approach offers in addition is the ability to model delegation structures, especially where these cross organizational boundaries.
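A sketch of what such two-place analysis looks like in practice (assuming the RAEW letters stand for Responsibility, Authority, Expertise and Work, as in the Information Engineering tradition cited above; the matrix entries are invented sample data):

```python
# RAEW matrix: a two-place relation between roles and activities.
RAEW = {
    ("developer",  "testing"): {"R", "W"},
    ("publisher",  "testing"): {"A"},
    ("test house", "testing"): {"E", "W"},
}

def inconsistencies(matrix):
    """Flag role/activity pairs carrying responsibility without authority -
    one kind of mismatch this style of analysis is used to identify."""
    return [(role, act) for (role, act), letters in matrix.items()
            if "R" in letters and "A" not in letters]

print(inconsistencies(RAEW))  # [('developer', 'testing')]
```

What such a matrix cannot express, and what the present approach adds, is the chain by which the developer delegates to the test house while remaining answerable to the publisher.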
There are many approaches to the formal division of responsibilities for quality. We shall examine three:

i Allocation of characteristics
ii Counter argument
iii Collective feedback
A statement such as 'this artefact has such-and-such properties' can never be guaranteed 100%. However, the responsible agent provides a mechanism, accessible to the second agent, for corrective action.
For example, responsibility for the efficiency of an object may be separated from responsibility for its functionality. Such separation of concerns will typically be based on the ISO 9126 standard.
This may be of some limited use, but fails to address the holistic aspect of quality explicitly addressed in the ISO 8402 definition.
Figure 2: Feedback chains.
Intended and actual use should be fed back to testers as well as developers. Errors and defects should be fed back to the developers by the users, via intermediaries with responsibility for software distribution, such as software publishers or software brokers. They may also be fed back to the testers (since they provide some feedback on the thoroughness and relevance of the testing). Other users may want to be notified, not only of any outstanding software errors and description defects, but also of the pattern of error rates and defect rates, from which they may want to make judgements about expected future error and defect rates.
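A schematic sketch of these routing rules (the role names follow the paragraph above; the mechanism itself is ours, purely illustrative):

```python
# Who should receive each kind of feedback, per the paragraph above.
FEEDBACK_ROUTES = {
    "intended and actual use": ["tester", "developer"],
    "errors and defects":      ["publisher/broker", "developer", "tester"],
    "error and defect rates":  ["other users"],  # basis for expected future rates
}

def route(kind: str, report: str) -> list[tuple[str, str]]:
    """Deliver one item of feedback to every party responsible for it."""
    return [(recipient, report) for recipient in FEEDBACK_ROUTES[kind]]

print(route("errors and defects", "FIXED ASSET mis-states book value"))
```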
The feedback chains described in the previous section usually need to work across organizational boundaries. This is likely to raise additional organizational issues. Business relationship modelling may also indicate the need for additional agents, such as independent (third party) assessors and advisors, as well as market regulators.
A detailed description of the use of responsibility modelling to support such structuring judgements can be found in the reports of the Enterprise Computing Project (Veryard 1995).
The developer or vendor may offer specific warranties, or may be subject to various liabilities, either through formal contract or through common practice. These legal aspects of the relationships need to be considered carefully.
The necessity of this approach has been argued on the basis of the impossibility of predicting all the uses and use-contexts of a reusable object, and the inappropriateness of unduly restricting such uses and use-contexts. This unpredictability largely invalidates traditional quality assurance mechanisms.
Further development in this area may benefit from the production of support tools, as well as standards for responsibility distribution.
Christopher Alexander et al (1977) A Pattern Language: Towns, Buildings, Construction. Oxford University Press, New York.
APM (1991) ANSA: A Systems Designer's Introduction to the Architecture. APM Ltd., Cambridge UK: April 1991.
Roger Crane (1986) The Four Organizations of Lord Brown and R.A.E.W. Doctoral Thesis, Kennedy-Western University.
John Dobson and Ros Strens (1994) Responsibility modelling as a technique for requirements definition. Intelligent Systems Engineering 3 (1), pp 20-26.
John Dodd (1995) Component-Based Development: Principles. Texas Instruments Methods Guide: Issue 1.0, March 1995.
Jon Elster (1978) Logic and Society: Contradictions and Possible Worlds John Wiley & Sons, Chichester UK.
W.B. Frakes and C.J. Fox (1995) Sixteen Questions about Software Reuse. Communications of the ACM 38 (6), pp 75-87.
ISO 8402 (1994) Quality Management and Quality Assurance Vocabulary. International Standards Organization, Geneva.
ISO 9000 (1994) Quality Management and Quality Assurance Standards. International Standards Organization, Geneva.
ISO 9126 (1991) Information Technology - Software Product Evaluation - Quality Characteristics and Guidelines for their Use. International Standards Organization, Geneva.
ISO 10746 (1995) Basic Reference Model for Open Distributed Processing. International Standards Organization, Geneva.
Rob van der Linden, Richard Veryard, Ian Macdonald and John Dobson (1995) Market Report. Enterprise Computing Project.
Gareth Locksley (1991) Mapping strategies for software business, in (Veryard 1991) pp 16-30.
H. Negishi (1985) Tentative Classification of Global Software. Behaviour & Information Technology 4 (2), pp 163-170.
OMG (1992) Object Management Architecture Guide Revision 2, Second Edition, Object Management Group, September 1992.
OMG (1994) Minutes of BOMSIG meeting, Object Management Group, April 7, 1994.
OSF (1991) DCE User Guide and Reference Open Software Foundation, Cambridge MA.
Ray Paul (1993) Dead Paradigms for Living Systems. Paper presented at the First European Conference on Information Systems, Henley, 29-30 March 1993.
D.P. Siewiorek and R.S. Swarz (1992) Reliable Computer Systems. Digital Press.
Texas Instruments (1990) A Guide to Information Engineering using the IEF™ Texas Instruments Inc., Plano TX, Second Edition.
Texas Instruments (1991) Strategy Announcement: Arriba! Project. Version 1.0, TI Software Business, Advanced Technology Marketing, January 5th, 1995.
Richard Veryard (1991) (ed) The Economics of Information Systems and Software. Butterworth-Heinemann, Oxford.
Richard Veryard (1994) Information Coordination: The Management of Information Models, Systems and Organizations. Prentice Hall, Hemel Hempstead UK.
Richard Veryard and Ian Macdonald (1994) EMM/ODP: A methodology for federated and distributed systems, in Methods and associated Tools for the Information Systems Life Cycle (ed. A.A. Verrijn-Stuart and T.W. Olle), IFIP Transactions, Elsevier/North-Holland, Amsterdam.
Richard Veryard, Ian Macdonald, Rob van der Linden and John Dobson (1995) Enterprise Modelling Methodology. Enterprise Computing Project.
© Richard Veryard 1995, 1997