
Measurements of a Distributed File System

Mary G. Baker, John H. Hartman, Michael D. Kupfer, Ken W. Shirriff, and John K. Ousterhout

Computer Science Division
Electrical Engineering and Computer Sciences
University of California, Berkeley, CA 94720


We analyzed the user-level file access patterns and caching behavior of the Sprite distributed file system. The first part of our analysis repeated a study done in 1985 of the BSD UNIX file system. We found that file throughput has increased by a factor of 20 to an average of 8 Kbytes per second per active user over 10-minute intervals, and that the use of process migration for load sharing increased burst rates by another factor of six. Also, many more very large (multi-megabyte) files are in use today than in 1985. The second part of our analysis measured the behavior of Sprite's main-memory file caches. Client-level caches average about 7 Mbytes in size (about one-quarter to one-third of main memory) and filter out about 50% of the traffic between clients and servers. 35% of the remaining server traffic is caused by paging, even on workstations with large memories. We found that client cache consistency is needed to prevent stale data errors, but that it is not invoked often enough to degrade overall system performance.

1. Introduction

In 1985 a group of researchers at the University of California at Berkeley performed a trace-driven analysis of the UNIX 4.2 BSD file system [11]. That study, which we call "the BSD study," showed that average file access rates were only a few hundred bytes per second per user for engineering and office applications, and that many files had lifetimes of only a few seconds. It also reinforced commonly-held beliefs that file accesses tend to be sequential, and that most file accesses are to short files but the majority of bytes transferred belong to long files. Lastly, it used simulations to predict that main-memory file caches of a few megabytes could substantially reduce disk I/O (and server traffic in a networked environment). The results of this study have been used to justify several network file system designs over the last six years.

The work described here was supported in part by the National Science Foundation under grant CCR-8900029, the National Aeronautics and Space Administration and the Defense Advanced Research Projects Agency under contract NAG2-591, and an IBM Graduate Fellowship Award.

This paper will appear in the Proceedings of the 13th ACM Symposium on Operating Systems Principles.

In this paper we repeat the analysis of the BSD study and report additional measurements of file caching in a distributed file system. Two factors motivated us to make the new measurements. First, computing environments have changed dramatically over the last six years, from relatively slow time-shared machines (VAX-11/780s in the BSD study) to today's much faster personal workstations. Second, several network-oriented operating systems and file systems have been developed during the last decade, e.g. AFS [4], Amoeba [7], Echo [3], Locus [14], NFS [16], Sprite [9], and V [1]; they provide transparent network file systems and, in some cases, the ability for a single user to harness many workstations to work on a single task. Given these changes in computers and the way they are used, we hoped to learn how file system access patterns have changed, and what the important factors are in designing file systems for the future.

We made our measurements on a collection of about 40 10-MIPS workstations, all running the Sprite operating system [9, 12]. Four of the workstations served as file servers, and the rest were diskless clients. Our results are presented in two groups. The first group of results parallels the analysis of the BSD study. We found that file throughput per user has increased substantially (by at least a factor of 20) and has also become more bursty. Our measurements agree with the BSD study that the vast majority of file accesses are to small files; however, large files have become an order of magnitude larger, so that they account for an increasing fraction of bytes transferred. Many of the changes in our measurements can be explained by these large files. In most other respects our measurements match those of the BSD study: file accesses are largely sequential, files are typically open for only a fraction of a second, and file lifetimes are short.
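The per-user throughput figure above is an average over fixed 10-minute intervals, counting only users active in each interval. As a minimal sketch (not from the paper; the trace record layout and function name are assumptions for illustration), the computation looks like this:

```python
# Hypothetical sketch of the per-active-user throughput metric used in
# BSD/Sprite-style trace analyses. Trace records here are assumed to be
# (timestamp_sec, user_id, bytes_transferred) tuples; the real traces
# contain far more detail.
from collections import defaultdict

INTERVAL = 600  # seconds (10-minute intervals, as in the study)

def throughput_per_active_user(trace):
    """Return {interval_index: average bytes/sec per user active in it}."""
    bytes_per_interval = defaultdict(int)
    users_per_interval = defaultdict(set)
    for ts, user, nbytes in trace:
        i = int(ts // INTERVAL)
        bytes_per_interval[i] += nbytes
        users_per_interval[i].add(user)
    return {i: bytes_per_interval[i] / (INTERVAL * len(users_per_interval[i]))
            for i in bytes_per_interval}

# Example: two users active in the first 10-minute interval.
trace = [(10, "a", 2_400_000), (300, "b", 2_400_000)]
print(throughput_per_active_user(trace))  # {0: 4000.0}
```

Averaging only over active users keeps idle workstations from diluting the figure, which is why the metric is "per active user" rather than per machine.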

Our second set of results analyzes the main-memory file caches in the Sprite system. Sprite's file caches change size dynamically in response to the needs of the file and virtual memory systems; we found substantial cache size variations over time on clients that had an average cache size of about 7 Mbytes out of an average of 24 Mbytes of main memory. About 60% of all data bytes read by

July 25, 1991 - 1 -