Testing, testing everywhere
Over the twenty or so years I've been developing software under
UNIX®, much of that time has inevitably involved test software -
covering applications, operating systems, protocols and more. This has
meant both developing (and testing!) test software, and using it in
a consultancy environment to produce results for technical or business
customers.
I am currently working as a testing consultant at Acutest, which works
in the areas of test strategy and test management, and supplies
specialist testing services. I work mainly on non-functional testing,
which includes software performance testing and business continuity
testing.
Non-functional testing
I thought it would make sense to outline a testing framework to
describe what I've been doing in these areas. It embraces all the
components typically involved in non-functional testing (performance
testing, stress testing, and so on). Although the terminology is my
own, the framework aspires to be consistent with that used by the most
popular off-the-shelf testing tools - for example, those from Mercury
and Compuware.
User view
Under this heading I'd include developing and using test software which
emulates the function of the user - be that a human user or client
software. Much of my early work in this area involved adapting
existing functional test suites to apply controlled, variable
performance loads to the application under test. For example, I was
involved in performance testing of an event notification interface and
the remote client services within Reuters Data Warehouse, using
statistical data collected by the test clients.
More recently I have been using Mercury LoadRunner and Compuware's
QALoad - popular test tools which allow load tests, simulating large
numbers of users, to be planned and run. They work by recording the
traffic of a single user session, using proxies to intercept one or
more protocols (for example, HTTP or Citrix ICA). A test script is
generated automatically, often in C or C++, with the option to view it
as a tree via a GUI. The script will almost always need modification
before it can be replayed many times over to simulate a large number
of users. Tools of varying sophistication are provided for analysing
the results of a test run, drawing on statistics collected from web
servers, databases and other applications, as well as from server
operating systems and networks.
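As a rough illustration of the shape these generated scripts take,
here is a minimal LoadRunner-style virtual-user action after a little
hand-editing. The function names (web_url, lr_think_time and friends)
follow LoadRunner's C web API; the URL, transaction name and
{username} parameter are invented for the example, and the script runs
inside the LoadRunner runtime rather than standalone:

    /* Sketch of a LoadRunner-style virtual-user script, roughly as
     * the tool might generate it after recording plus light editing. */
    Action()
    {
        /* Time the login step as a named transaction */
        lr_start_transaction("login");

        /* Replay the recorded HTTP request; {username} is a
         * parameter substituted per virtual user on each iteration */
        web_url("login",
                "URL=http://www.example.com/login?user={username}",
                LAST);

        lr_end_transaction("login", LR_AUTO);

        /* Simulated user "think time" before the next request */
        lr_think_time(5);

        return 0;
    }

Parameterising recorded values such as {username} is typically the
main hand-editing needed before a script can safely be replayed by
many simultaneous virtual users.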
There is also a family of less sophisticated tools which provide more
than enough features for basic performance testing, but with less
analysis and integration with other tools and data sources. One I have
been particularly involved with is forecastweb from Facilita. Lists of
alternative tools are available on the Internet - see, for example,
performance testing tools.
Application view
Diagnostics are often built into applications to correlate
performance results with application configuration. In one case the
application I analysed was the UNIX® System V kernel, instrumented
using the standard prof profiler. At other times I've been involved in
instrumenting applications to collect statistics on system resource
usage, storing them in procinfo structures.
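The procinfo structures I used were application-specific, but the same
kind of data can be sampled portably on UNIX with getrusage(). A
minimal sketch of the technique - the measured code path here is just
a placeholder comment:

    /* Sketch: instrumenting a code path to record CPU consumption,
     * in the spirit of the resource-usage statistics described above. */
    #include <stdio.h>
    #include <sys/resource.h>

    static double cpu_seconds(const struct rusage *ru)
    {
        /* User plus system CPU time, in seconds */
        return ru->ru_utime.tv_sec + ru->ru_utime.tv_usec / 1e6
             + ru->ru_stime.tv_sec + ru->ru_stime.tv_usec / 1e6;
    }

    int main(void)
    {
        struct rusage before, after;

        getrusage(RUSAGE_SELF, &before);
        /* ... the application code being measured would run here ... */
        getrusage(RUSAGE_SELF, &after);

        printf("CPU used: %.3f s\n",
               cpu_seconds(&after) - cpu_seconds(&before));
        return 0;
    }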
Mercury Diagnostics and Monitoring offers ready-made diagnostics for
J2EE and a range of other applications. Compuware DevPartner provides
development, debugging and tuning tools for Java, .NET, web-enabled
and distributed applications.
Operating system view
Monitoring UNIX system activity (for example, using sar, ipcs, netstat
and vmstat) during tests has often provided a way to tune the
operating system to make the best use of available resources and avoid
bottlenecks - and so ensure predictable application performance.
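One simple way to gather such data alongside a test run is to capture
timestamped samples for later correlation with the test results. A
sketch using popen() - the sampling interval, sample count and log
file name are arbitrary choices for the example:

    /* Sketch: capture periodic vmstat samples to a log file so they
     * can be correlated with test results afterwards. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        FILE *vm  = popen("vmstat 5 12", "r");  /* 12 samples, 5 s apart */
        FILE *log = fopen("vmstat.log", "w");
        char line[256];

        if (vm == NULL || log == NULL)
            return 1;

        while (fgets(line, sizeof line, vm) != NULL) {
            /* Prefix each sample with a timestamp for later correlation */
            fprintf(log, "%ld %s", (long)time(NULL), line);
        }

        fclose(log);
        pclose(vm);
        return 0;
    }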
Mercury SiteScope provides a packaged solution here; the Compuware
equivalent is the Vantage suite.
Modelling view
Constructing (and calibrating) a model of a software application can
provide a tool for planning resource requirements ahead of time. Some
application vendors provide a model of the characteristics of their
application for both tuning and capacity planning. For example, Portal
Systems used a model of their Infranet billing system to calculate the
theoretical optimum configuration as part of their standard
performance tuning exercise. In the distant past I spent several years
developing discrete event models - these are the basis of modelling
tools such as Mercury Capacity Planning.
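To illustrate the idea, here is a minimal discrete event model of a
single-server queue (M/M/1): customers arrive at random and are served
in order, and the model reports the mean response time. The arrival
and service rates are arbitrary example values, and a real
capacity-planning model would of course be far richer:

    /* Minimal discrete-event model: a single-server queue (M/M/1).
     * Rates are arbitrary example values, not from any real system. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Exponentially distributed random interval with the given mean */
    static double exp_sample(double mean)
    {
        return -mean * log(1.0 - (double)rand() / ((double)RAND_MAX + 1.0));
    }

    int main(void)
    {
        const double mean_interarrival = 1.0;  /* 1 request per second */
        const double mean_service      = 0.8;  /* 80% utilisation      */
        const int    n_customers       = 100000;

        double arrival = 0.0;      /* arrival time of current customer  */
        double server_free = 0.0;  /* time the server next becomes idle */
        double total_response = 0.0;

        for (int i = 0; i < n_customers; i++) {
            arrival += exp_sample(mean_interarrival);

            /* Service starts when both customer and server are ready */
            double start = arrival > server_free ? arrival : server_free;
            double departure = start + exp_sample(mean_service);

            total_response += departure - arrival;
            server_free = departure;
        }

        printf("mean response time: %.3f\n", total_response / n_customers);
        return 0;
    }

With these rates, queueing theory predicts a mean response time of
service/(1 - utilisation) = 0.8/0.2 = 4.0 seconds - and checking the
model's output against known results or measurements like this is
exactly what calibration means.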
Functional testing
A number of projects have involved developing tools to test that
software meets its specification, in many cases also managing projects
to use these tools in subsequent phases of the software lifecycle.
These include:
- Leading a consultancy project for MIT to evaluate a test suite for
the X Window System.
- Spending five years developing major parts of conformance test
suites for X/Open (now the Open Group), MIT and IBM. These included
the first version of VSX4 (I wrote the tests for fork() and exec(), by
the way - a minimal sketch of such a test appears after this list),
the first version of VSW, and the MIT X Test Suite on which VSW was
based.
- Developing a test suite for an ISAM database package.
- Developing test suites for components of Reuters Data Warehouse,
including the application monitoring system and remote client services
software.
- Managing all stages of the lifecycle required to develop web service
components used in Reuters Commerce and CRM system. Testing involved
various tools, including XMLSpy for web services testing and Mercury
TestDirector for test management.
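The real VSX4 tests ran under a full test harness and checked many
assertions per interface, but a standalone sketch of one basic fork()
assertion - the child sees a return value of 0, and the parent sees
the child's process ID - might look like this:

    /* Standalone sketch of one assertion a fork() conformance test
     * checks; the real VSX4 tests were far more thorough. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();

        if (pid == -1) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {
            /* Child: fork() returned 0; report success via exit status */
            _exit(0);
        }

        /* Parent: waitpid() must return the pid that fork() gave us */
        int status;
        if (waitpid(pid, &status, 0) != pid) {
            printf("FAIL: waitpid returned wrong pid\n");
            return 1;
        }
        if (!WIFEXITED(status) || WEXITSTATUS(status) != 0) {
            printf("FAIL: child did not exit cleanly\n");
            return 1;
        }
        printf("PASS: fork/waitpid behaved as expected\n");
        return 0;
    }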