
Selecting Regression Tests for Object-Oriented Software*

Gregg Rothermel and Mary Jean Harrold

Department of Computer Science

Clemson University

Clemson, SC 29634-1906

{grother, harrold}@cs.clemson.edu

Abstract

Regression testing is an important but expensive software maintenance activity aimed at providing confidence in modified software. Selective retest methods reduce the cost of regression testing by selecting tests for a modified program from a previously existing test suite. Many researchers have addressed the selective retest problem for procedural-language software, but few have addressed the problem for object-oriented software. In this paper, we present a new technique for selective retest that handles object-oriented software. Our algorithm constructs dependence graphs for classes and applications programs, and uses these graphs to determine which tests in an existing test suite can cause a modified class or program to produce different output than the original. Unlike previous selective retest techniques, our method applies to modified and derived classes, as well as to applications programs that use modified classes. Our technique is strictly code-based, and makes no assumptions about the methods used to specify or test the software initially.

1 Introduction

Regression testing is applied to modified software to provide confidence that modified code behaves as intended, and does not adversely affect the behavior of unmodified code. Regression testing plays an integral role in software maintenance; without proper regression testing we are reluctant to release modified software. One characteristic distinguishing regression testing from developmental testing is the availability, at regression test time, of existing test suites. If we reuse such test suites to retest a modified program, we can reduce the effort required to perform that testing. Unfortunately, test suites can be large, and we may not have time to rerun all tests in such suites. Thus, we must often restrict our efforts to a subset of the previously existing tests. We call the problem of choosing an appropriate subset of an existing test suite the selective retest problem; we call a method for solving this problem a selective retest method.
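
As a rough illustration, the selective retest problem can be stated as the following C++ interface. This sketch is ours rather than part of the original presentation; Program, Test, and selectTests are hypothetical placeholder names, not artifacts of any particular method.

    #include <vector>

    // Illustrative sketch of the selective retest problem as a signature.
    // Program, Test, and selectTests are hypothetical placeholders.
    struct Program { /* source text, or an analysis artifact such as a graph */ };
    struct Test    { /* inputs, plus any trace data saved from prior runs   */ };

    // Given the original program P, the modified program Pprime, and the
    // previously existing suite T, a selective retest method computes the
    // subset of T to rerun on Pprime. Methods differ in the analysis
    // behind this function and in which tests they are obligated to keep.
    std::vector<Test> selectTests(const Program& P,
                                  const Program& Pprime,
                                  const std::vector<Test>& T);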

Although many researchers have addressed the selective retest problem for procedural-language software[2, 3, 5, 9, 11, 15, 16, 18, 20, 24, 26, 29, 30],

*This work was partially supported by NSF under Grants CCR-9109531 and CCR-9357811 to Clemson University.

we are aware of only one technique that addresses the problem with respect to object-oriented software[7], and that approach applies only to test selection for derived classes. The emphasis on code reuse in the object-oriented paradigm both increases the cost of regression testing and provides greater potential for obtaining savings through selective retest methods. When a class is modified, the modifications impact every applications program that uses the class and every class derived from the class; ideally, we should retest every such program and derived class[25, 28]. The object-oriented paradigm also alters the focus of test selection algorithms, shifting emphasis to different concerns and creating new ones. For example, since most classes consist of small interacting methods, selective retest approaches for object-oriented programs must work at the interprocedural level. Also, since many methods for testing object-oriented software treat classes as testable entities, and design or employ suites of class tests for them[6, 7, 12, 25, 27], selective retest methods must support the use of class tests.
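
For a concrete instance of this ripple effect, consider the small C++ sketch below. It is a hypothetical example of ours (Shape, Circle, and totalArea are illustrative names, not drawn from the paper): a change to a single method can affect both a derived class and an applications program that uses the class.

    // Suppose a maintainer modifies Shape::area.
    class Shape {
    public:
        virtual double area() const { return 0.0; }    // the modified method
    };

    // A derived class: scaledArea calls the modified method explicitly,
    // so class tests for Circle are affected by the change.
    class Circle : public Shape {
    public:
        explicit Circle(double r) : radius(r) {}
        double area() const override { return 3.14159 * radius * radius; }
        double scaledArea(double k) const { return k * Shape::area(); }
    private:
        double radius;
    };

    // Part of an applications program that uses the class: through the
    // virtual call below, the change can alter this program's output, so
    // its regression tests are also candidates for reselection.
    double totalArea(const Shape& s) { return s.area(); }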

In this paper, we present a new selective retest method that addresses the selective retest problem for object-oriented software. Our method constructs dependence graphs for classes and programs that use classes; we use these graphs to select all tests in a test suite that may cause a modified class, derived class, or applications program that uses a class to produce different output than the original program or class.
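
A minimal sketch of this selection step, under assumptions of ours rather than details drawn from the algorithm itself, might look as follows. Suppose that dependence-graph vertices carry stable labels, that comparing the original and modified graphs yields the labels of vertices that differ, and that each test's coverage of those vertices was recorded when the original program or class was tested; TestTrace and selectAffectedTests are hypothetical names.

    #include <set>
    #include <string>
    #include <vector>

    // One test's recorded execution trace over the original graph.
    struct TestTrace {
        std::string testId;
        std::set<std::string> verticesCovered;   // graph vertices the test hit
    };

    // Keep every test whose trace reaches a changed vertex: such a test
    // may produce different output on the modified version.
    std::vector<std::string> selectAffectedTests(
            const std::vector<TestTrace>& traces,
            const std::set<std::string>& changedVertices) {
        std::vector<std::string> selected;
        for (const TestTrace& t : traces) {
            for (const std::string& v : t.verticesCovered) {
                if (changedVertices.count(v) != 0) {
                    selected.push_back(t.testId);
                    break;
                }
            }
        }
        return selected;
    }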

Our approach has several benefits. First, our method is currently the only selective retest method applicable to test selection for applications programs, classes, and derived classes. Second, our method selects tests using information gathered by code analysis, and does not require the specifications on which the code is based. Third, our approach is independent of the method used to generate tests initially for programs and classes. Fourth, unlike most selective retest methods, our method selects every test that may produce different output in the modified program. Fifth, unlike many selective retest algorithms, our approach handles both structural and nonstructural program modifications, and processes multiple modifications with a single application of the algorithm. Sixth, where most selective retest methods function at the unit test level, our approach works interprocedurally, a necessity where test selection for classes is concerned. Finally, our method is automatable.