
To appear in the 1993 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems

A New Approach to I/O Performance Evaluation:
Self-Scaling I/O Benchmarks, Predicted I/O Performance
Peter M. Chen
David A. Patterson
Computer Science Division, Dept. of EECS
University of California, Berkeley
[email protected], [email protected]

Abstract. Current I/O benchmarks suffer from several chronic problems: they quickly become obsolete, they do not stress the I/O system, and they do not help in understanding I/O system performance. We propose a new approach to I/O performance analysis. First, we propose a self-scaling benchmark that dynamically adjusts aspects of its workload according to the performance characteristics of the system being measured. By doing so, it automatically scales across current and future systems. The evaluation aids in understanding system performance by reporting how performance varies according to each of five workload parameters. Second, we propose predicted performance, a technique for using the results from the self-scaling evaluation to quickly estimate the performance of workloads that have not been measured. We show that this technique yields reasonably accurate performance estimates and argue that it gives a far more accurate comparative performance evaluation than traditional single-point benchmarks. We apply our new evaluation technique by measuring a SPARCstation 1+ with one SCSI disk, an HP 730 with one SCSI-II disk, a DECstation 5000/200 running Sprite LFS with a four-disk array, a Convex C240 minisupercomputer with a four-disk array, and a Solbourne 5E/905 fileserver with a four-disk array.
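To make the self-scaling idea concrete, the sketch below shows, in C, how a benchmark might sweep a single workload parameter (request size) and report throughput at each value; the benchmark described in this paper does this for each of five parameters while setting the others according to the measured system. This is a minimal illustration under stated assumptions, not the authors' implementation: the file name, the swept sizes, the total byte count, and the choice of request size as the varied parameter are all hypothetical.

/*
 * Minimal sketch: sweep one workload parameter (request size) and
 * report sequential-read throughput at each value.  Assumes a
 * pre-created file named "testfile"; names and sizes are illustrative.
 * A real self-scaling benchmark would also control the number of
 * unique bytes touched, the read/write mix, sequentiality, and the
 * number of processes.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

#define TOTAL_BYTES (64L * 1024 * 1024)   /* bytes read per data point */

static double now_sec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    /* request sizes to sweep, in bytes (illustrative values) */
    size_t sizes[] = { 4096, 16384, 65536, 262144, 1048576 };
    int nsizes = sizeof(sizes) / sizeof(sizes[0]);
    char *buf = malloc(sizes[nsizes - 1]);

    for (int i = 0; i < nsizes; i++) {
        int fd = open("testfile", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        double start = now_sec();
        long done = 0;
        while (done < TOTAL_BYTES) {
            ssize_t n = read(fd, buf, sizes[i]);
            if (n <= 0)
                lseek(fd, 0, SEEK_SET);   /* wrap to the start at EOF */
            else
                done += n;
        }
        double elapsed = now_sec() - start;

        printf("size %7zu bytes: %6.2f MB/s\n",
               sizes[i], TOTAL_BYTES / elapsed / 1e6);
        close(fd);
    }
    free(buf);
    return 0;
}

Note that rereading a small file mostly measures the buffer cache rather than the disk; controlling how many unique bytes the workload touches is what lets an evaluation distinguish cached from uncached performance.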

1. Introduction

As processor performance continues to improve faster than I/O device performance [Patterson88], I/O will increasingly become the system bottleneck. There is therefore a growing need to understand and compare the performance of I/O systems, and hence a need for I/O-intensive benchmarks. The benefits of good benchmarks are well understood. When benchmarks are representative of users' applications, they channel vendor optimization and research efforts into improvements that benefit users. Good benchmarks also assist users in purchasing machines by allowing fair, relevant comparisons.

Recent efforts to standardize benchmarks, such as SPEC [Scott90] and the Perfect Club [Berry89], have increased our understanding of computing performance and helped create a level playing field on which companies can compete. These standardization efforts have focused on CPU-intensive applications [Scott90], however, and have intentionally avoided I/O-intensive applications [Berry89].

In this paper, we develop criteria for ideal I/O benchmarks and show how current I/O benchmarks fall short of them. We then describe a new approach to I/O benchmarks: a self-scaling benchmark, which dynamically adjusts its workload to the system being measured, and predicted performance, which