
Component Testing




Technology Change Invalidates Old Ways of Testing 



Whenever technology takes a step forward, the old ways of testing systems prove inadequate.

When I left college as a young programmer, my first project was to develop an online database system.  Most of the programmers on the team had been working on batch systems for many years; for some of them, this was their first online system.  These programmers knew how to test batch programs, and they tested online programs with the same mindset.  As a result, they missed things.  For example, they failed to include concurrency checks in their testing: what happens when two users attempt to access the same record at the same time?  Although this was one of the most common modes of failure for early online systems, as well as a frequent source of processing bottlenecks, the testing sometimes overlooked it completely, and failed to predict performance problems.
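
To show what such a concurrency check looks like, here is a minimal sketch (in present-day Python, purely for illustration): two simulated users perform a read-modify-write on the same record, and the final assertion fails, exposing exactly the lost update that batch-style testing would never exercise.

    import threading
    import time

    record = {"balance": 0}   # the shared record that two users both update

    def deposit(amount):
        # Read-modify-write with no locking: the classic lost-update hazard.
        current = record["balance"]
        time.sleep(0.01)      # simulate think time between read and write
        record["balance"] = current + amount

    users = [threading.Thread(target=deposit, args=(100,)) for _ in range(2)]
    for u in users:
        u.start()
    for u in users:
        u.join()

    # Two deposits of 100 should leave 200; the assertion fails (balance is
    # 100), which is the concurrency failure the test is designed to catch.
    assert record["balance"] == 200, f"lost update: balance={record['balance']}"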

Many years later, when client/server systems became fashionable, a similar situation arose.  Certain classes of system error were completely overlooked by system testing.  And as before, performance problems were often not identified until the system went live.

Each time technology takes a step forward, each time we start to develop new types of system, we need to rethink the way we test the new systems, and not just rely on previous best practices.  Developers and testers of open distributed component-based systems are now faced with exactly this challenge.

It is a testing time for Component-Based Development.



New Testing Challenges for Software Componentry


So what are the new testing issues for developers and users of software components?

Issues for Components

How can I test that a component is suitable for my purpose? (A sketch follows this list.)
Does it help me to know that other people have tested and used this component?
How much does a test (or certificate) tell us about the properties of a component we might wish to plug into our own systems?
How can I test the performance characteristics of an isolated component?
How can I protect myself against rogue component behaviour?
How can I limit the amount of regression testing that I need when I plug a new component into an existing system?
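
To make the first of these questions concrete, here is a minimal sketch of a suitability test: test cases drawn from my intended use of the component, not from the supplier's specification. The TaxCalculator component and its tax_due operation are hypothetical stand-ins for whatever interface a real component would publish.

    import unittest

    class TaxCalculator:
        # Hypothetical stand-in for a supplied component; in practice this
        # would be a third-party component behind its published interface.
        def tax_due(self, income, rate):
            if not 0 <= rate <= 1:
                raise ValueError("rate must be between 0 and 1")
            return income * rate

    class SuitabilityTest(unittest.TestCase):
        # Cases drawn from MY intended use, not from the supplier's spec.
        def test_typical_case_for_my_application(self):
            self.assertAlmostEqual(TaxCalculator().tax_due(40000, 0.22), 8800)

        def test_my_boundary_inputs_are_rejected_cleanly(self):
            with self.assertRaises(ValueError):
                TaxCalculator().tax_due(40000, 1.5)

    if __name__ == "__main__":
        unittest.main()

Passing such a suite says nothing absolute about the component; it says only that the component behaves as required in the contexts I have sampled.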

Issues for Component-Based Systems

How do I test large distributed component-based systems? How can I test that a complex distributed system satisfies my requirements?
What does testing achieve for open distributed systems?  How much does a test tell us about the emergent properties of a large distributed system?
What level of testing is reasonable for due diligence, before an assembly of components can be put into production?
How can I test performance characteristics, such as response time and throughput?
How can I detect unwanted interactions between components?  How can I test for feature interaction? (A sketch follows this list.)
If I replace a single component, do I have to retest the whole system?
To what extent are large distributed systems testable at all?  If there are certain types of system failure that are too complex to test, what does that imply for systems design or management?
How should I design systems to make testing easier and more effective?
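
As a toy illustration of feature interaction (all names hypothetical): two components that each pass their own tests can still interact badly when composed, and only a test of the composition exposes it. The final assertion below fails, which is exactly the detection we want.

    # Two hypothetical components, each correct in isolation.
    def normalise(record):
        # Component A: canonicalises keys to lower case.
        return {k.lower(): v for k, v in record.items()}

    def audit_stamp(record):
        # Component B: adds an audit field, assuming keys survive untouched.
        record["AuditedBy"] = "system"
        return record

    # Each component's own tests pass:
    assert normalise({"Name": "x"}) == {"name": "x"}
    assert audit_stamp({"name": "x"})["AuditedBy"] == "system"

    # The interaction test exposes the clash: normalising AFTER stamping
    # silently rewrites the audit key that B's consumers rely on.
    composed = normalise(audit_stamp({"Name": "x"}))
    assert "AuditedBy" in composed, f"feature interaction: keys = {list(composed)}"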

Questions for the user or purchaser.

The user or purchaser of a software component wants to know:
 
What evidence is there that this component is likely to work properly in my application?
Has the component been tested in a way that is relevant to my intended use?
How much serious usage has this component already had, in areas similar to my intended use?
What are the implications of using this component?
  • performance / capacity
  • reliability / robustness
  • maintainability / portability

What information about a component does the user or purchaser need?  What information is normally available?

Questions for the developer.

A responsible developer of a software component wants to know:
 
What evidence is there that this component is likely to work properly in real user applications?
Has the component been tested in a sufficient variety of situations?
Is the component designed for efficient performance in a reasonable range of contexts?

But what does 'responsible' mean in this context? Responsible to whom?

What information about the usage of a component does the developer need?  What information is normally available?

Information flows are needed to manage the quality of a component.

Intended and actual use should be fed back to testers as well as developers.

Errors and defects should be fed back to the developers by the users, via the brokers.  They may also be fed back to the testers (since they provide some feedback on the thoroughness and relevance of the testing).

Other users may want to be notified, not only of any outstanding software errors and description defects, but also of the pattern of error rates and defect rates, from which they can make judgements about expected future error and defect rates.
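
These flows can be sketched as a simple structure. Everything here (the field names, the broker's fan-out rule) is a hypothetical illustration of who needs to see what, not a real defect-tracking interface.

    from dataclasses import dataclass

    @dataclass
    class DefectReport:
        component: str     # which component failed
        version: str       # in which version
        context: str       # the user's actual context of use
        description: str   # what went wrong

    def broker_route(report, notify_developer, notify_tester, notify_users):
        # The user reports via the broker; the broker fans the report out.
        notify_developer(report)   # so the defect can be fixed
        notify_tester(report)      # feedback on the relevance of the testing
        notify_users(report)       # so others can judge error and defect rates

    # Usage: here every party just appends to a log; real channels would differ.
    log = []
    broker_route(
        DefectReport("converter", "2.1", "nightly batch import", "rounding error"),
        notify_developer=log.append,
        notify_tester=log.append,
        notify_users=log.append,
    )
    assert len(log) == 3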

Testing to avoid testing

There is also the concept of testing to avoid testing. For example, what testing would I have to perform on a component architecture, to assure myself that I could sometimes substitute one component for another without regression-testing the whole system every time?
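
One way of making this concrete, sketched below under assumptions of my own: if the architecture fixes a contract-test suite for each component slot, then any candidate that passes the slot's suite may be substituted without a full regression run, at least to the extent that the contract really captures everything the system relies on. All names are hypothetical.

    # Contract tests for the "pricing" slot in a hypothetical architecture.
    CONTRACT_TESTS = []

    def contract(test):
        # Register a test as part of the slot's contract.
        CONTRACT_TESTS.append(test)
        return test

    @contract
    def prices_are_non_negative(component):
        assert component.price("widget", quantity=1) >= 0

    @contract
    def bulk_is_never_dearer_per_unit(component):
        unit = component.price("widget", quantity=1)
        bulk = component.price("widget", quantity=100) / 100
        assert bulk <= unit

    def substitutable(component):
        # Run the whole contract suite against a candidate replacement.
        for test in CONTRACT_TESTS:
            test(component)
        return True

    class SimplePricer:
        # A candidate implementation of the slot.
        def price(self, item, quantity):
            return 2.0 * quantity * (0.9 if quantity >= 10 else 1.0)

    assert substitutable(SimplePricer())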


VVT Challenges for Software Reuse and Traded Components


CBD and software reuse place new demands on VVT (verification, validation and testing).

There are additional challenges for VVT if components are to be traded between organizations.

In an open distributed world, there is no direct contact or contract between the developer and the user, and no mechanism for establishing trust and accountability.  This undermines the traditional assumptions and methods of quality assurance.

The purpose of software testing and quality assurance is to establish that a solution has the desired level of quality.

Software testing and quality assurance are processes of discovering the true quality of a solution. A solution consists of one or many artefacts in some context, and an artefact is fit-for-purpose if it manifests the required behaviour(s) in the intended context(s). What we are really interested in is the quality of the whole solution: it makes no sense to talk about the quality of a single artefact as a stand-alone thing, independent of any particular context. There is no absolute (context-free) measure of quality.

But if we want high levels of software reuse, this usually means lots of different contexts. A software component may be used for many different applications, in different business and technical environments, by different developers using different methods and tools for different users in different organizations. It is all very well using a reusable component from a catalog or library, on the assumption that someone else has rigorously tested it. But what if they haven't? And if they have, who is to say that their testing is relevant to your intended use of the component?

Only closed systems can be tested. A test is a controlled scientific experiment. A test must be repeatable. Scientists in laboratories invent controlled (repeatable) experiments that are believed to provide useful information about the uncontrolled (unrepeatable) world.

Tests of open systems are artificial or partial. In order to carry out any meaningful tests on an open system, artificial constraints must be placed on it which make it closed for the purposes of the test. This means that no uncontrolled inputs are allowed, which might distort the behaviour of the test and make it unrepeatable. Tests of components intended for open systems are artificial or partial.

Components can be tested in two ways. Either an artificial system environment is constructed for the purposes of the test (known as a test harness). Obviously such a test is artificial. Or the component is tested within one or more of the systems for which the component was designed. Such a system must be closed for the purposes of the test, which makes the test artificial. Furthermore, the component is only tested on a subset of the possible systems for which it might be used in future, which makes the test partial.
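
The closing of a system can be shown in miniature. In this hypothetical sketch, the component under test depends on two uncontrolled inputs, the clock and a remote directory service; the harness replaces both with controlled stand-ins, making the test repeatable and, in precisely the sense argued above, artificial.

    import datetime

    def greeting(clock, directory, user_id):
        # Component under test: depends on the time of day and on a remote
        # directory service, two uncontrolled, open-world inputs.
        name = directory(user_id)
        hour = clock().hour
        return f"Good {'morning' if hour < 12 else 'afternoon'}, {name}"

    # The harness closes the system: a fixed clock and a canned directory.
    def fixed_clock():
        return datetime.datetime(2001, 11, 10, 9, 0)

    canned_directory = {"u1": "Ada"}.get

    # Repeatable (closed) test; artificial, exactly as argued above.
    assert greeting(fixed_clock, canned_directory, "u1") == "Good morning, Ada"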
 
See also: Notes on Software Reuse; Software Component Quality.



Types of Testing



This discussion is based on an ecological model of component supply.
Given an ecological model of the CBD world, two distinct forms of testing are needed. (Similar remarks apply to verification and validation). 
Intra-ecosystem testing: testing components and component interactions within one ecosystem.

For example, within the service supply ecosystem, we may test that services satisfy their specifications, and we may test interactions between a bundle of services. Within the device supply ecosystem, we may test conformance of components to various specifications or standards.

Inter-ecosystem testing: testing components and component interactions across two or more ecosystems.

For example, testing that a device satisfactorily implements an interface, or end-user acceptance testing.

Most of the available tools and techniques for testing belong to a single ecosystem.

Testing across two or more ecosystems needs a collaboration between multiple roles, where each role represents a given perspective within a given ecosystem.
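
As a small illustration of the inter-ecosystem case (testing that a device satisfactorily implements an interface), here is a hypothetical conformance check: the interface belongs to one ecosystem, the implementation comes from another, and the test sits across the boundary.

    from abc import ABC, abstractmethod

    class CardReader(ABC):
        # Interface published in one ecosystem (a standard, say).
        @abstractmethod
        def read(self, card_id: str) -> str: ...

    class AcmeReader(CardReader):
        # Device supplied from another ecosystem.
        def read(self, card_id: str) -> str:
            return card_id.strip().upper()

    def conformance_test(device: CardReader):
        # Inter-ecosystem test: does the device satisfy the interface?
        assert isinstance(device, CardReader)
        # Normalisation rule assumed here for illustration.
        assert device.read(" ab12 ") == "AB12"

    conformance_test(AcmeReader())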


Acknowledgements

Some of this material has previously appeared in other forms.

"How Business Relationship Modelling Supports Quality Assurance of Business Objects" AQuIS 96: Third International Conference on Achieving Quality in Software (IFIP WG 5.4) (Florence: January 1996)

Object Oriented Testing and Performance Metrics Anthology, edited by Dave Burleigh, to be published by Miller Freeman.

Thanks to Dip Ganguli, Richard Gilyead and Dorothy Graham for useful discussions.  Thanks also to the CBDi Forum Testing SIG.


 
This page last updated on November 10th, 2001
Copyright © 1999-2001 Veryard Projects Ltd 
http://www.veryard.com/CBDmain/cbdtesting.htm