
Quality and Desire

By: Richard Veryard

Status: Incomplete Draft

Introduction

Levels of confidence in software quality vary widely, from misplaced complacency at one end of the spectrum to excessive caution at the other. The purpose of quality assurance is to establish reasonable and realistic levels of confidence.

Confidence is associated with predictability, stability and trust. But technology trends, especially object reuse and open distributed processing, appear to reduce the factors that lead to confidence in software quality.

Why do we need quality assurance?

In our experience, there is often (perhaps always) a gap between the actual quality of an entity and the quality desired for it. Sometimes we accept (tolerate, put up with) this gap, but often it is unacceptable.

[Figure: The Quality Gap]

In this account, desire and acceptance are subjective, in the sense that they are only meaningful in relation to particular stakeholders in particular contexts: desired where and by whom, acceptable when and to whom. There is no absolute, transcendental, impersonal or neutral measure of desirability or acceptability.

The quality gap may be experienced as a whole, or analysed into its components, which may be called defects.
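To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical names and numbers) of one way the quality gap might be modelled: desired quality is relative to a stakeholder, the gap is decomposed into named defects, and acceptability is a judgement about whether the remaining gap is tolerated.

```python
# Illustrative sketch only; the names and thresholds are hypothetical.
from dataclasses import dataclass, field

@dataclass
class QualityAssessment:
    stakeholder: str                               # desirability is relative to a stakeholder
    desired: dict = field(default_factory=dict)    # characteristic -> desired level
    actual: dict = field(default_factory=dict)     # characteristic -> assessed level

    def gap(self):
        """Components of the quality gap (the defects), one per characteristic that falls short."""
        return {
            name: self.desired[name] - self.actual.get(name, 0.0)
            for name in self.desired
            if self.actual.get(name, 0.0) < self.desired[name]
        }

    def acceptable(self, tolerance=0.0):
        """Whether this stakeholder accepts (tolerates) the remaining gap."""
        return all(shortfall <= tolerance for shortfall in self.gap().values())

# The same entity may be acceptable to one stakeholder and unacceptable to another.
ops_view = QualityAssessment(
    stakeholder="operations team",
    desired={"reliability": 0.99, "usability": 0.80},
    actual={"reliability": 0.95, "usability": 0.90},
)
print(ops_view.gap())              # {'reliability': 0.04}: the defect component
print(ops_view.acceptable(0.05))   # True: within this stakeholder's tolerance
```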

Quality assurance is a process of discovery

Quality assurance is defined as 'all the planned and systematic activities implemented within the quality system, and demonstrated as needed, to provide adequate confidence that an entity will fulfil requirements for quality' [ISO 8402, 1994]. In short, therefore, quality assurance provides knowledge about the quality of entities.

In common with most other attempts to gain knowledge, quality assurance cannot avoid affecting the entities about which it attempts to gain knowledge. Any attempt to discover the degree of quality of an entity may bring about an improvement in the quality of that entity. Indeed, such improvement is often seen as the primary justification of quality assurance.

Conversely, any prediction of the quality of an entity should take into account the extent of quality assurance that may be applicable. In particular, if a software developer has access to quality assurance mechanisms that make the emerging quality of a software artefact visible during the development process (other conditions being favourable), this should contribute to a high degree of quality in the developed artefact.

Thus quality assurance is a reflexive process, which risks inaccuracy if it ignores its own effects.

How do we perform Quality Assessment?

Quality assessment plays a central role in quality assurance. Quality assessment is usually divided into verification, validation and test (VVT).

In the Book of Genesis, God conducts His own quality assessment. "And God saw every thing that he had made, and, behold, it was good." [Genesis i, 31]

Indeed, when the serpent comes along later and performs an independent test, God gets pretty angry with him, and curses him something rotten. (Examples of VVT can also be found elsewhere in the Scriptures, notably in the book of Job.)

One of the principles of quality assurance is that of independent quality assessment. Although some preliminary self-assessment may be an excellent idea, complete reliance on self-assessment is unwise.

But on the other hand, there are pitfalls of disconnected quality assessment. How can anyone assess something without talking to those responsible for it? Assessors who try become like archaeologists trying to decipher the processes of a long-dead tribe or long-forgotten society.

A quality review requires a dialogue between two parties: one or more independent reviewers on one side, one or more representatives of the entity being reviewed on the other side. With some forms of review or test, it is the entity itself that speaks; with other forms of review, there will be people and/or documents that speak on its behalf.

This VVT dialogue is a form of language game [Wittgenstein], with the following roles:

The speaking role: the entity itself, supported by human representatives of the entity and by documentation. The entity presents itself (or is presented) with its characteristics for assessment; it performs required tasks, so that its behaviour may be assessed; and records or memories of the entity's history may also be presented.

The listening role: an independent test and/or quality review team, supported by automated verification and validation of the entity. This side poses questions, sets tests, spots anomalies and makes judgements.

We can picture this language game as follows:

[Figure: Quality Assessment Dialogue (speaking and listening)]

Note that the boxes on this diagram are process boxes; this is intended to focus attention on the activities of speaking and listening, rather than on the agents and artefacts performing the speaking and listening activities.
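The two roles can also be sketched in code. The following is a purely illustrative Python sketch (every class and method name is hypothetical, not part of any standard API) of the speaking and listening sides of the dialogue, and of one round of the language game.

```python
# Illustrative sketch only; every name here is hypothetical.
from typing import Protocol

class Speaking(Protocol):
    """The entity, its human representatives, or its documentation."""
    def present_characteristics(self) -> dict: ...
    def perform_task(self, task: str) -> str: ...
    def present_history(self) -> list: ...

class Listening(Protocol):
    """The independent review/test team, supported by automated verification and validation."""
    def pose_questions(self, speaker: Speaking) -> dict: ...
    def set_test(self, speaker: Speaking, task: str, expected: str) -> bool: ...
    def spot_anomalies(self, answers: dict) -> list: ...
    def make_judgement(self, anomalies: list) -> str: ...

def review_dialogue(speaker: Speaking, listener: Listening, tasks: dict) -> str:
    """One round of the language game: questions, tests, anomalies, judgement."""
    answers = listener.pose_questions(speaker)
    failed_tasks = [task for task, expected in tasks.items()
                    if not listener.set_test(speaker, task, expected)]
    anomalies = listener.spot_anomalies(answers) + failed_tasks
    return listener.make_judgement(anomalies)
```

Note that in this sketch neither side sees the whole picture: the listener learns only what the speaker presents, which anticipates the limitations discussed in the next section.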

Quality Assessment is trapped by a wall of language

When we put this diagram together with the earlier one, we can see that there are a number of problems.

[Figure: The wall of language (speaking and listening across the gap)]

This conceptual structure, derived from Lacan via Boxer, allows us to make several predictions about the limitations of the quality assurance process.

  1. There is always something left uncovered, not reviewed or tested in any formal VVT process. No one has complete knowledge of the actual characteristics of the entity. The entity always avoids revealing everything about itself, however clever and determined the VVT process.
  2. There is always something left to be desired, not captured in any formal specification. No one has complete knowledge of the requirements. There will always be some requirements left unstated.
  3. Any given VVT process will systematically focus on some classes of quality issue and overlook others. There is always something left to be desired in the VVT process itself.
  4. Nobody has a complete and utterly reliable view of what is really going on. Neither the actual quality nor the desired quality can be completely captured in the language of VVT. Thus, rather than providing a bridge between actual and desired quality, VVT sets up a wall between them.

Historical Examples

We can see some examples of this in the software industry.

Consider early online systems. Many software testing teams continued to use techniques appropriate for batch systems. This meant that the testing process was systematically blind to certain classes of defect, notably those associated with transaction concurrency.
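As a deliberately simplified illustration (hypothetical Python code, not drawn from any real system), the sketch below shows the kind of defect such testing misses: a lost-update race that a batch-style sequential test can never expose, but that a concurrent test can.

```python
# Illustrative sketch only: a hypothetical lost-update defect.
import threading, time

class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        current = self.balance           # read
        time.sleep(0)                    # widen the window: yield to other threads
        self.balance = current + amount  # write is not atomic with the read

def sequential_test():
    acct = Account()
    for _ in range(1000):
        acct.deposit(1)
    return acct.balance == 1000          # always passes: concurrency never exercised

def concurrent_test():
    acct = Account()
    threads = [threading.Thread(target=lambda: [acct.deposit(1) for _ in range(1000)])
               for _ in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return acct.balance == 10000         # likely fails: interleaved deposits lose updates

print(sequential_test())    # True
print(concurrent_test())    # typically False, revealing the missing synchronisation
```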

And now, as the software industry progresses from dumb-terminal systems on mainframes to cooperative processing systems on client/server networks, a similar situation has arisen. Although it should be obvious that the old testing tools and techniques cannot adequately test the properties of client/server networks, many organizations continue to use them.

Another systematic form of blindness has been in the area of the so-called non-functional requirements, sorely neglected by some proprietary methods.
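By way of illustration (again with hypothetical Python code and an arbitrary time budget), a test can make a non-functional requirement such as response time visible simply by asserting it alongside the functional result, rather than checking functional behaviour alone.

```python
# Illustrative sketch only; the function, figures and budget are hypothetical.
import time

def lookup_customer(customer_id):
    """Hypothetical system under test; the sleep stands in for real work."""
    time.sleep(0.01)
    return {"id": customer_id, "status": "active"}

def test_lookup_customer():
    start = time.perf_counter()
    result = lookup_customer(42)
    elapsed = time.perf_counter() - start

    assert result["status"] == "active"   # functional requirement
    assert elapsed < 0.5                  # non-functional requirement: response-time budget

test_lookup_customer()
print("functional and non-functional checks both passed")
```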

To be continued

. . .

References

P. Boxer & B. Palmer (1994) ‘Meeting the Challenge of the Case’, in R. Casemore et al (eds), What makes consultancy work: Understanding the dynamics. Proceedings of the International Consulting Conference 1994. London: South Bank University Press, 1994. This paper, together with others by Philip Boxer and Barry Palmer, is available via the Boxer Research Ltd website.

J. Lacan, The Seminar of Jacques Lacan Book II: The Ego in Freud’s Theory and in the Technique of Psychoanalysis 1954-1955 (edited by Jacques-Alain Miller 1978, translated by Sylvana Tomaselli, Cambridge University Press, 1988)

R.A. Veryard & J.E. Dobson, 'Third Order Requirements Engineering: Vision and Identity', in Proceedings of REFSQ 95, Second International Workshop on Requirements Engineering, (Jyvaskyla, Finland: June 12-13, 1995)
