Deriving Software Test Environments From
Architecture Styles
James Cusick
Software Practices & Technology Division
AT&T Labs, Shannon Laboratory
Bldg. 104, Room 2C37, 180 Park Avenue
Florham Park, NJ 07938
james.cusick@att.com
ABSTRACT
This paper discusses the relationship between Architecture Styles and Software
Test Environments (STEs). This relationship is based on the characteristics of
the software architecture style being tested and the characteristics thereby
required of the STE. It is proposed that the development of an STE can be
facilitated through the mapping of an application’s architecture to a supporting
test architecture. To do this a reference mask is created for both the application
architecture and the test architecture. Using this construct, test environments
closely fitted to the target application architecture can be rapidly defined. Doing
so can increase reusability of test artifacts and environments as well as increase
the clarity with which the STE addresses the verification needs of the application
under test. Finally, a reference STE is proposed and demonstrated in
conjunction with the architectural reference mask to derive the appropriate test
architecture.
1. INTRODUCTION
In constructing a given Software Test
Environment (STE) the ultimate measure of
success is how well it supports the test
requirements of the software under test. To
achieve the highest level of test support for
a given application the test engineer must
understand both the application in question
and the tools, techniques, and resources
available for deployment in the test process.
Typically these two task regions,
understanding application architectures and
developing test environments, are not
always well integrated. In response a model was constructed to link these two regions, and it is discussed in this paper.
First, this paper considers the sources which generated this concept. Software architecture styles and patterns are introduced and then their relationship to STEs is described. Then a mechanism to bridge application architectures with test environments is proposed. This mechanism is
a reference mask that can drive the
definition of a specific test environment
from a broad test architecture class. This
approach can increase reusability of test
artifacts and can speed up test environment
design. This technique also builds on the use of a test pattern language to organize a collection of test environments and approaches by application type.
2. SOFTWARE ARCHITECTURE AND STEs
Software Architecture can be viewed as the
components of a software system, their
interrelationships, and constraints (Garlan, 1994; Perry, 1992). Many authors have
pointed out that the manner in which
software is commonly built results in the
refinement of certain families, styles, or
patterns of architecture (Gamma, 1995;
Garlan, 1993). Based on this observation
one can derive such benefits as the
development of a common design language,
semantic rules and constraints, and
architecture validity analysis by family.
Architecture Styles provide a specific approach to categorization. Bellanger
(1996) defines Architecture Styles as:
A set of operational characteristics
common to a family of a software
architecture and sufficient to identify
that family.
Furthering these ideas has been the
development of Pattern Languages for
software. A Pattern Language codifies
“well-proven design experience and
provides a common vocabulary” for
software design (Coplien, 1995). While
focus has usually been on the testability of
design patterns themselves (Buschmann,
1996), some work has already been done to
extend these concepts to the task of software
testing. McGregor (1996), for example, has
developed a pattern language to support the
testing of components in Object-Oriented
(OO) software through the use of generic
test harness classes.
The relationship between Architecture
Styles and Patterns was demonstrated by
Tepfenhart (1997). Here, this view is
extended by introducing the concept of test
architecture, implemented as a class. Once
an application instance has been created of a
given architecture style and through “design
pattern” guidance, a test architecture must
be built reflecting the application’s style and
construction. Using this concept one can
derive multiple test instances for each
software application under test (AUT) of a
particular architecture family. More
formally, test architecture is defined as:
The relationships and constraints between the platforms, components, and approaches used in the design of a Software Test Environment to conduct the verification and validation of a software application.
By knowing the Architecture Style of a
given AUT a matching Test Architecture
can specify the required STE. Applying the
concept of an architecture style and the
mechanism of inheritance closely links test
environments to their target software
applications. By doing so test environment
design should be faster and reuse of test
components will be facilitated.
2.1 Tying Software Architectures to Test
Architectures
Within AT&T, the Network Computing
Services’ Operations Technology Center
typically builds systems which can be
classified in the telecommunications
domain. Specifically, these are network
provisioning, maintenance, and surveillance
applications. For this organization a focus on application domain and on architecture style is present. Bellanger (1996)
demonstrated that within this particular
development center of AT&T five main
architecture styles dominate:
• OLTP (On-Line Transaction Processing)
• Decision Support
• Data Streaming
• Real Time
• Hybrid
Using these broad classes of architecture styles a mapping to a matching Testing Architecture, as demonstrated in Figure 1, is proposed. Given an application architecture, including requirements, the instantiation of a Test Architecture Class produces a complete Test Environment. Figure 1 illustrates this process. Once the target system’s architectural style is known
the base class defining general software test
architectures is used to derive the unique
test architecture class required to validate
the application under test. In Figure 2 a
more formal depiction is provided. Notice
that by using multiple-inheritance a large
variety of Hybrid applications can be
described.
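To make this class-based view concrete, the following minimal sketch (in Python; the class and capability names are illustrative and not taken from the paper) shows a base Test Architecture supplying globally shared capabilities, style-specific subclasses adding their own, and multiple inheritance assembling a Hybrid STE in the spirit of Figure 2.

class TestArchitecture:
    """Base class: capabilities assumed to be needed by every STE."""
    def capabilities(self):
        return {"test case management", "test data management",
                "results recording", "configuration management"}

class OLTPTestArchitecture(TestArchitecture):
    """Adds test capabilities suited to an OLTP style application."""
    def capabilities(self):
        return super().capabilities() | {"transaction load generation",
                                         "response time measurement"}

class DataStreamingTestArchitecture(TestArchitecture):
    """Adds test capabilities suited to a Data Streaming style application."""
    def capabilities(self):
        return super().capabilities() | {"input feed simulation",
                                         "output data comparators"}

class HybridTestArchitecture(OLTPTestArchitecture, DataStreamingTestArchitecture):
    """A Hybrid STE inherits the capabilities of the styles it combines."""
    pass

# The Hybrid instance reports the union of base, OLTP, and Data Streaming needs.
print(sorted(HybridTestArchitecture().capabilities()))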
Thus, if an OLTP architecture is being
constructed, a test environment suitably
matched to this particular system type must
be developed. The characteristics of the
ideal STE for an OLTP application will be
quite different from those of a Data
Streaming architecture. In essence, the STE
can be viewed as an application platform
into which another software system can be
inserted for the express purpose of verifying
its contours and the validity of its reactions.
The Software Architecture defines the
application’s contours and it must be
mapped into an STE whose outline is the
negative image of the application
architecture.
[Figure 1 diagram: the application architecture styles OLTP, Decision Support, Data Streaming, Real Time, and Hybrid map through a Test Architecture Class to a Software Test Environment.]
Figure 1: Architecture Styles Related to Test Environment Architectures.
Symbolically the architecture styles can be viewed as a collection of distinctive shapes or icons. Beyond this simplification they are in fact composed of specific detailed constructs. Applying the unique characteristics of an architecture style to a generic test class can generate a suitable STE for that architecture. This can reduce STE design effort through design guidance and reuse of established resources, as well as reducing potential dissonance between the application architecture and its test environment.
3. MAPPING ARCHITECTURES TO TEST
BEDS
Much of this can be summed up by saying “build an environment that suits the application under test”. This is often easier said than done. But one advantage of application characterization is the identification of common infrastructure requirements, including software testing architecture attributes. While one generic Test Architecture cannot define requirements for all STEs, compiling a baseline of commonly shared test environment features should speed the design of a specific STE.
3.1 An Architecture Reference Mask
To facilitate the definition of both architectures and test environments an adaptation of the reference model mask (RMM) used by Zelkowitz (1996) in describing Software Engineering Environments (SEEs) and Project Support Environments (PSEs) is presented. Zelkowitz describes a bit vector which enumerates the supported services of an SEE or a PSE. This vector can then be used, along with the vectors representing individual implementations of a given PSE or SEE, for a variety of purposes. For example, one can determine the relative merits of one environment over another or the completeness of an implementation’s coverage of the ideal service model.
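A minimal sketch of this bit-vector comparison follows; the service catalogue and environment contents are illustrative assumptions, not Zelkowitz’s actual service list.

# Enumerated service catalogue over which the reference model mask is defined.
SERVICES = ["test case management", "data comparators", "load generation",
            "results analysis", "configuration management"]

def mask(supported):
    """Encode a set of supported services as a bit vector over SERVICES."""
    return [1 if s in supported else 0 for s in SERVICES]

def coverage(vector, reference):
    """Fraction of the reference services covered by an implementation."""
    return sum(v & r for v, r in zip(vector, reference)) / sum(reference)

ideal = mask(set(SERVICES))
env_a = mask({"test case management", "results analysis"})
env_b = mask({"test case management", "data comparators", "load generation"})

# Relative merit of two environments against the ideal service model.
print(coverage(env_a, ideal), coverage(env_b, ideal))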
This same approach can be used in quickly assessing the STE requirements for a software architecture of a given style¹. The AUT must be defined in a catalogue of architecture styles which would include its own reference model mask. The characterization of each style is obviously a major undertaking. Fortunately, this is well underway within the Patterns community.
To complete the process, a Test
Architecture mask would also be required to
complement each architecture pattern
identified. These masks would describe the
potential test services and tools for each
software architecture. The ideal relationship
would require that the architecture reference
mask would include common features
across all architectures as well as the
specific characteristics of each individual
style. Meanwhile, the test architecture
reference mask would include common
features required of any test environment
(e.g., test case management) as well as the
specific characteristics of the desired test
environment as determined by the
application architecture (e.g., the need for
data comparators).
3.2 A Simple Example
In Table 1 a sample reference vector for the
Data Streaming style is presented. This
example’s architecture characteristics are
not meant to reflect a complete set nor are
they tailored to a system instance. Included
in the sample are simply the defining
characteristics of the input-process-output
model typical of this family of applications.
By creating a bit mask of all the possible
characteristics of this architecture, it is then
possible to assess the capabilities of our
existing test environments to support such
an application or how they would need
expansion in order to do so. Table 2 reflects
the matching test capabilities required to
build a test architecture for this example.
Notice that certain test characteristics are
intrinsic to the job of testing (common to all
test efforts) and thus reside in a “base class”
while some test requirements are unique to
the application’s architecture.
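The gap assessment described above might be carried out along the following lines; the capability names are loose abbreviations of the Table 1 and Table 2 entries and are illustrative only.

# Test needs implied by the Data Streaming architecture (cf. Table 2)
# versus the capabilities of an existing test environment.
required = {"ftp feed simulation", "ASCII test data generation",
            "output capture and comparison", "test case management"}
existing = {"test case management", "output capture and comparison"}

gaps = required - existing      # capabilities the STE would need to add
reusable = required & existing  # capabilities it can reuse as-is
print("expand:", sorted(gaps))
print("reuse:", sorted(reusable))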
¹ Current work in Architecture Description Languages (ADL) has yet to reach maturity (Monroe, 1997). If an OO-ADL were to gain popularity it is likely that it would be able to support this notation.
[Figure 2 diagram: an Architecture Style class hierarchy (Decision Support, OLTP, Data Streaming, Real Time, and Hybrid A/B styles formed from combinations of concrete styles) mirrored by a Test Architecture hierarchy of corresponding STE classes (Decision Support STE, OLTP STE, Data Streaming STE, Real Time STE, Hybrid A STE, Hybrid B STE).]
Figure 2: Architecture Styles and Test Environments as Classes. Using a class diagram, the architecture styles and the corollary test environments for each can be seen more formally than in Figure 1. This view shows more directly how each concrete architecture might share certain global characteristics and how hybrid architectures can be formed. The test environments can be viewed in precisely the same manner, using the Test Architecture class as the root of the tree.
3.3 Architecture Reference Vectors
In the terms given to us by Zelkowitz, we
can view our newly cast Architecture RMM
as the vector A which represents the global
set of architecture characteristics. A given
architecture style, such as the Data
Streaming example, would be described by
Ai. Further, given features within a specific
architecture type would be described by Ai,j
where Ai represents the architecture style
and Ai,j represents the specific characteristic.
This gives us the expression Ai ⊆ A, which describes the instance architecture for a given style as derived from the reference mask A. This operation pulls the appropriate subset of characteristics from the architecture reference class.
Following from these assumptions, the generic Test Architecture given by the vector T would then allow for the description of a custom fitted STE for application architecture Ai by allowing the statement Ti ⊆ T. Here, Ti reflects the inverse characteristics of the specific test environment required for an architecture of style Ai.
Simply stated, we can now determine STE requirements using the intersection of the application’s architecturally described test needs and the test architecture reference class.
Furthermore, it is possible to test for the inclusion or availability of architectural features or test environment capabilities using the expression Ai,j ⊆ Ti,j.
One may visualize these reference masks as multidimensional arrays for each architecture. Thus the “base class” characteristics for any architecture would be modified by the “instance” characteristics for each architecture style as such:
Sample Reference Vector Components:
Data Streaming Architecture Style

• Architecture Class Derived System Characteristics
  Data Reception
    data type (ASCII, binary, relational bulk transfer)
    format protocol (negotiated, standard)
    number of sources (specified)
    periodicity (continuous, hourly, daily, monthly)
    communications protocol (ftp, uucp, rcp)
    communications speed (specified)
  Processing specifics
    algorithmic type (sort, scan, merge, conversion, computation)
    data intermediaries (file, memory, tape)
    time budget
  Data Transmission
    data type (ASCII, binary, relational bulk transfer)
    format protocol (negotiated, standard)
    number of targets (specified)
    periodicity (continuous, hourly, daily, monthly)
    communications protocol (ftp, uucp, rcp)
    communications speed (specified)

TABLE 1: Data Streaming Reference Vector
A[OLTP][feature1, feature2, … feature n]
A[Data Streaming][feature1, feature2, … feature n]
A[Decision Support][feature1, feature2, … feature n]
A[Realtime][feature1, feature2, … feature n]
A[Hybrid]{[Realtime][ … feature n][Decision Support][ … feature n]}
At the same time for the test reference
architecture we have the corresponding
environments described as:
T[OLTP][test requirement 1, 2, …n]
T[Data Streaming] [test requirement 1, 2, …n]
T[Decision Support] [test requirement 1, 2, …n]
T[Realtime] [test requirement 1, 2, …n]
T[Hybrid]{[Realtime][ … n][Decision Support][ … n]}
Some properties of these vectors may not be
readily apparent. First, it is assumed that
there are some shared capabilities between
all architectures. Thus the reference mask A
contains characteristics in addition to those
needed to describe each style instance. This
holds true also for the test architecture
reference mask T. Also, to derive a Hybrid
style, one or more concrete styles must be
combined. In such cases the standard style
mask may need modification as all
characteristics of a style may not be
required in a Hybrid design. Finally, in
deriving a test environment some necessary
traits of the environment cannot be mapped
directly to the application architecture under
test. For instance the Operational Profile of
an application can be mapped to the test
case requirements and their order of
execution. However, the fact that these test
cases must be stored, retrieved, and
managed does not follow from the
application architecture itself. In these cases
the reference architecture T must provide
additional services for the test environment
which are typically the same regardless of
the target application.
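The sketch below illustrates these properties under simple assumptions: the masks A and T are held as mappings from style name to characteristic set (the keys abbreviate the Table 1 and Table 2 entries), common test features are carried separately in T, and a Hybrid is formed by combining two styles.

A = {
    "Data Streaming": {"data reception", "stream processing", "data transmission"},
    "Realtime":       {"event handling", "deadline scheduling"},
}
T = {
    "common":         {"test case management", "test results recording"},
    "Data Streaming": {"feed simulation", "output comparators", "throughput timing"},
    "Realtime":       {"event injection", "latency measurement"},
}

def test_requirements(styles):
    """Derive Ti for an application of one or more styles (Hybrid if several)."""
    needs = set(T["common"])          # features required of any test environment
    for s in styles:
        needs |= T[s]                 # style-specific test characteristics
    return needs

print(sorted(test_requirements(["Data Streaming"])))               # single style
print(sorted(test_requirements(["Data Streaming", "Realtime"])))   # Hybrid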
Sample Reference Test Vector Components:
Data Streaming Architecture Style

• Base Class Derived (Intrinsic) Test Needs
  Process Derived Test Needs
    operational profile (determines run characteristics)
    test methods and procedures (specified)
    test intervals (specified)
    failure and fault intensity objectives (specified)
    resource utilization validation (specified)
    heuristics (local magic)
  The Test Bus
    test database (state maintenance)
    sanity suites (checklists)
    test results (recording, analysis)
    test environment (construction, validation, calibration)
    test-bed hardware/software preconditions (specified)

• Architecture Class Derived (Specific) Test Needs
  Data Reception Test Needs
    test data generation type (ASCII, binary, relational bulk transfer)
    test format protocol (negotiated, standard)
    number of external system simulation or test-bed sources (specified)
    test periodicity (continuous, hourly, daily, monthly)
    test communications protocol (ftp, uucp, rcp)
    test communications speed (specified)
  Processing Test Needs
    algorithmic functional validation criteria (alpha sort, reverse sort, etc.)
    algorithmic comparator (alpha sort, reverse sort, etc.)
    algorithmic performance validation criteria (time budgets)
    data intermediaries validation (file, memory, tape)
  Data Transmission Test Needs
    expected data type (ASCII, binary, relational bulk transfer)
    expected format protocol (negotiated, standard)
    interface verification of number of targets (specified)
    expected periodicity (continuous, hourly, daily, monthly)
    communications protocol verification (ftp, uucp, rcp)
    communications speed verification (specified)
    test output capture and comparison (ASCII, binary)
    pass/fail output enumeration (specified)

TABLE 2: Data Streaming Test Environment Reference Vector
4. TEST ARCHITECTURE FRAMEWORK
In order to further understand the impact of
designing test environments from the
perspective of software architecture styles
an extended STE (below) includes
application specific test needs. This
environment extends STEs described by
Vogel (1993) and Eickelmann (1996). In
Figure 3 a Test Management substrate
supports the typical activities of repository
management, configuration management,
and template storage. Test activities of
design, development, execution, and
measurement call upon these services.
Adding to this environment the application
specific test requirements as shown in the
modules to the right completes the view.
Using the concept of the test vector it is
possible to determine the additional test
resources or the nature of the test suites
required by the inclusion of these
architecture modules.
[Figure 3 diagram: Test Design, Test Development, Test Execution, and Test Measurement activities, together with architecture-style test modules (Arch Style Ti … Arch Style Tn), rest on a Test Management substrate comprising an Object Repository, Configuration Management, Rules and Templates, and Use Cases and Patterns.]
Figure 3: Test Architecture Framework with Architecture Styles. Underlying
any STE are repositories of software, test scripts, and templates. These are used
to develop test suites and to manage and evaluate results. In addition to these
standard features of an STE each application under test brings with it a set of
architecture specific test needs as shown to the right.
4.1 Test Management
Test Management provides the
infrastructure to automate and document the
entire application testing life-cycle. These
methodologies and technologies manage
software testing assets and can be either
manual or mechanized. Project testing assets
include the process and technology
platforms, test targets, test data and the
application tests. Test Process and
Technology Platforms are the procedures
and tools used for test organization, execution and analysis. Test Targets,
sometimes referenced as Test-beds, are the
fully integrated software and hardware
components that represent the production
environment. The Test Target includes the application defined Infrastructure configuration with the actual AUT, or Application Under Test.
Physical Test Suites are applied to the Test Target, populated with Test Data, to measure the intersection of System Objectives with actual Application Presentation and Behavior. Frequency,
capacity, format and content are some of the
parameters that are used to build expected
and erroneous test data sets. This test data is
generated for external interface verification
and internal data source population. The
application tests are the methods or
programs employed to exercise the AUT,
with respect to the system requirements and
objectives. Tests have contexts that map to the objectives. These objectives can include application unit testing, regression testing, and load or performance testing requirements. The pass and fail criteria of application tests reflect the quantitative and qualitative measurement of the AUT implementation coverage of the respective system objectives. The following sections outline the components of a General Test Management Class; a brief illustrative sketch follows these descriptions.
• Test Object Repository
The data store that houses information and
assets, such as project, common, historical,
current state, dependency and testcase data,
is referenced as the Test Object Repository.
• Test Scripts
Test Scripts are the information, methods
and programs required to determine
specified pass and fail criteria.
• Test Data
Test Data is composed of the static and
dynamic information processed from
internal or external sources that determines
the AUT state.
• Test Results
Test Results are the current state and historical information that specify the AUT’s quantitative and qualitative attainment of system requirements at a moment or over a period of time.
• Configuration Management
Configuration Management is the process,
technology and integration required to
manage change to the Test Environment.
Some of the attributes of the Configuration
Management component include tracking
deltas to any test object, maintaining change
requests with associated states, version
capabilities, build facilities and integration
with the analogous processes in the
Application Development Environment.
• Rules & Templates
Rules and triggers provide the mechanism
for configuration of application specific
behavior of the Test Management
component. This customization capability
allows for definition of the test objects’
actions and appearance through a flexible
interface. An example rule might include
automated execution of test suites based on
recognized application state change events.
Templates are the skeletal outlines of test
objects. Included in the templates are Test
attributes and default populated values,
where assumptions are appropriate.
• Use Cases & Patterns
Use Cases are the documented Application
Operability and Usability requirements.
These Use Cases are employed to design
Test Objectives that will map into test pass
and fail criteria.
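As a rough sketch only, the components described above might be gathered into a single class along the following lines; the field names track the component list, and the rule mechanism illustrates the kind of event-triggered behavior mentioned under Rules & Templates.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class TestManagement:
    """Illustrative General Test Management Class (names are assumptions)."""
    object_repository: Dict[str, object] = field(default_factory=dict)  # test objects and assets
    test_scripts: List[str] = field(default_factory=list)               # pass/fail determination methods
    test_data: List[str] = field(default_factory=list)                  # static and dynamic inputs
    test_results: List[dict] = field(default_factory=list)              # current and historical outcomes
    rules: Dict[str, Callable[[], None]] = field(default_factory=dict)  # event name -> triggered action

    def on_event(self, event: str) -> None:
        """Fire a configured rule, e.g. on a recognized application state change."""
        if event in self.rules:
            self.rules[event]()

tm = TestManagement()
tm.rules["application state change"] = lambda: print("executing regression suite")
tm.on_event("application state change")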
4.2 Test Design
The process of planning application testing
is referred to as Test Design. Some of the
activities include determination of test
objectives, performing application risk
analysis, specification of entrance criteria,
specification of exit criteria, calculation of
required resources, identification of
required test cases and test suites,
preparation of test infrastructure and the
formal description of the test methodology
to be employed.
This part of the environment would cover
the following areas: Test Plans &
Specifications, Test Generation, Software
Reliability Engineered Testing (including
Operational & Usage Profiling). Details of
these practices are covered amply in other
sources. For this discussion it is sufficient to
mention that these process steps must be
fulfilled.
4.3 Test Development
Test Development is typically the most
resource intensive and complex phase of the
application test cycle. The creative task of writing, organizing and verifying the correctness of Test Cases can be as complicated and expensive as developing the AUT, or more so. The art of reducing the complexity and required resources depends heavily on the insertion of proven automation technology and methodologies at appropriate intervals and for the most reusable test objectives.
The Test Development process provides
mechanisms and methodologies to create
test cases and test suites. Test Cases are the smallest entities, or test objects, that are applied to the AUT. Test Suites are logical or physical sets of test cases that are usually characterized by some common test objective or application component that the Test Suite was designed to address. For example, Unit, Functional, Regression, Load, Stress and Performance are Test Suites that make up intersecting subsets of the universal set of test cases for an application. Each of these groupings is designed to address a specific test need and has an identifiable list of characteristics.
There are tools and practices that are designed to meet the common automation needs of particular test case types. For example, record and playback technologies are available to provide a test designer with an Integrated Test Development Environment to capture and program test methods and expected and actual application states.
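A small sketch of this suite and case relationship, using hypothetical case identifiers, is given below; it simply shows suites as possibly intersecting sets drawn from the universal set of test cases.

# Universal set of test cases for an application (identifiers are hypothetical).
all_cases = {"TC1", "TC2", "TC3", "TC4", "TC5"}

# Suites are logical sets of cases grouped by a common test objective.
suites = {
    "Regression":  {"TC1", "TC2", "TC3"},
    "Performance": {"TC3", "TC4"},
    "Load":        {"TC4", "TC5"},
}

# A case may belong to several suites, so suites form intersecting subsets.
print(suites["Regression"] & suites["Performance"])  # {'TC3'}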
4.4 Test Execution
The most simplistic explanation of Test Execution is the extraction and launching of test cases and test suites on a single or distributed set of test-beds in the static and run-time environments of the AUT.
4.5 Test Measurement
The collection and analysis of test results is
commonly referred to as Test Measurement.
Test Measurement must cover such aspects as Test Coverage, Defect Tracking, and Failure & Fault Intensity Objectives.
5. CONCLUSIONS
Software architecture styles can be related to Software Test Environments as shown. The views of the mapping between an architecture and its test environment described a construct for deriving the latter from the former. This approach proposes analytical and test design benefits including test artifact reuse and an increased awareness of the test needs of an application architecture. This model essentially provides the high level test plan required for a system of a known architecture type. Test designs can also be shaped to more clearly facilitate reuse by organizing under this model. This framework of a Test Architecture was shown to support several domain based architecture styles.
The next step of codifying these relationships will increase the speed with which test environments can be designed and reduce the dissonance between them and their test target architectures. With such a classification, perhaps supported by a Pattern Language, families of related test architectures would contribute a powerful set of abstractions for conducting verification and validation. Doing so would further integrate the processes of design and test.
6. ACKNOWLEDGEMENTS
The author is indebted to Brendan Conway of The Gartner Group whose description of testing architectures stimulated work on this paper. Carl Olson of Lucent Technologies’ Bell Labs assisted significantly in the development of these ideas while still with AT&T. Tom Grau, recently of Bell Laboratories, and William Tepfenhart of Monmouth University and AT&T, also helped to advance these ideas. Comments provided by John McGregor of Clemson University were also essential in developing this paper.
7. REFERENCES
[1] Bellanger, D., "Architecture Styles: An
Experiment on SOP," AT&T Technical
Journal, Jan./Feb. 1996, pp. 54-63.
[2] Buschmann, F., et al., Pattern-Oriented Software Architecture: A System of Patterns, John Wiley & Sons, NY, 1996.
[3] Coplien, J., & Schmidt, D., eds.,
Pattern Languages of Program
Design, Addison-Wesley, New York,
1995.
[4] Eickelmann, N., & Richardson, D., “An Evaluation of Software Test Environment Architectures”,
Proceedings of ICSE-18, Berlin, IEEE
Press, 1996.
[5] Gamma, E., et al., Design Patterns: Microarchitectures for Reusable Object-Oriented Software, Addison-Wesley, Reading, Mass., 1995.
[6] Garlan, D., and Shaw, M., “An
Introduction to Software Architecture”,
Advances in Software Engineering,
Volume I, eds., Ambriola, V., and
Tortora, G., World Scientific Publishing
Co., New Jersey, 1993.
[7] McGregor, J., & Kare, A., “Testing Object-Oriented Components”, Proceedings of the 17th International Conference on Testing Computer Software, June, 1996.
[8] Perry, D., & Wolf, A., “Foundations for
the Study of Software Architecture”,
Software Engineering Notes, vol. 17,
no. 4, pp. 40-52, October, 1992.
[9] Tepfenhart, W., & Cusick, J., “A
Unified Object Topology”, IEEE
Software, vol. 14, No. 1, pp. 31-35,
January/February 1997.
[10] Vogel, P., “An Integrated General
Purpose Automated Test Environment”,
Proceedings of the 1993 International
Symposium on Software Testing and
Analysis, pp. 61-69, Cambridge, MA,
June 1993.
[11] Zelkowitz, M., “Modeling Software Engineering Environment Capabilities”, Journal of Systems and Software, October 1996, V35 N1, pp. 3-14.