Abstract
As the performance gap between disks and microprocessors continues to increase, effective utilization of the file cache becomes increasingly important. Application-controlled file caching and prefetching can apply application-specific knowledge to improve file cache management. However, supporting application-controlled file caching and prefetching is nontrivial, because caching and prefetching must be integrated carefully and the kernel must allocate cache blocks among processes appropriately. This article presents the design, implementation, and performance of a file system that integrates application-controlled caching, prefetching, and disk scheduling. We use a two-level cache management strategy: the kernel uses the LRU-SP (Least-Recently-Used with Swapping and Placeholders) policy to allocate blocks to processes, and each process integrates application-specific caching and prefetching based on the controlled-aggressive policy, an algorithm previously shown to be nearly optimal in a theoretical setting. Each process also reduces its disk access latency by submitting its prefetches in batches so that the requests can be scheduled to optimize disk access performance. Our measurements show that this combination of techniques greatly improves file system performance: running time is reduced by 3% to 49% (average 26%) for single-process workloads and by 5% to 76% (average 32%) for multiprocess workloads.
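To make the integration idea concrete, the following is a minimal, hypothetical C sketch: the application supplies its future block reference sequence, the prefetcher fetches the next missing block in that sequence, and a cached block is evicted for a prefetch only if its next reference lies beyond the point at which the prefetched block is needed (a "do no harm" rule in the spirit described above). The reference sequence, cache size, and all names here are illustrative assumptions; this is not the paper's controlled-aggressive implementation or its kernel interface.

```c
/* Hypothetical illustration of integrated application-controlled caching
 * and prefetching.  The reference sequence, cache size, and "do no harm"
 * eviction rule below are simplifying assumptions for exposition. */
#include <stdio.h>

#define CACHE_BLOCKS 4            /* blocks the kernel has allocated to us */

/* Application-supplied knowledge: the future block reference sequence. */
static const int refs[] = {1, 2, 3, 1, 4, 5, 2, 6, 3, 7};
static const int nrefs  = sizeof refs / sizeof refs[0];

static int cache[CACHE_BLOCKS];
static int cached = 0;

/* Position >= pos of the next reference to block b (nrefs if none). */
static int next_ref(int b, int pos) {
    for (int i = pos; i < nrefs; i++)
        if (refs[i] == b) return i;
    return nrefs;
}

static int in_cache(int b) {
    for (int i = 0; i < cached; i++)
        if (cache[i] == b) return 1;
    return 0;
}

/* Index of the cached block whose next reference is furthest away. */
static int pick_victim(int pos) {
    int v = 0;
    for (int i = 1; i < cached; i++)
        if (next_ref(cache[i], pos) > next_ref(cache[v], pos)) v = i;
    return v;
}

int main(void) {
    for (int pos = 0; pos < nrefs; pos++) {
        /* Prefetch step: fetch the next missing block in the reference
         * stream, evicting only a block whose next reference comes after
         * the prefetched block's ("do no harm"). */
        for (int j = pos; j < nrefs; j++) {
            if (in_cache(refs[j])) continue;
            if (cached < CACHE_BLOCKS) {
                cache[cached++] = refs[j];
                printf("prefetch block %d\n", refs[j]);
            } else {
                int v = pick_victim(pos);
                if (next_ref(cache[v], pos) > j) {
                    printf("prefetch block %d, evicting block %d\n",
                           refs[j], cache[v]);
                    cache[v] = refs[j];
                }
            }
            break;                /* at most one prefetch per access */
        }

        /* Reference step: a miss here would stall on a synchronous read. */
        if (in_cache(refs[pos])) {
            printf("hit  on block %d\n", refs[pos]);
        } else {
            printf("miss on block %d\n", refs[pos]);
            if (cached < CACHE_BLOCKS)
                cache[cached++] = refs[pos];
            else
                cache[pick_victim(pos + 1)] = refs[pos];
        }
    }
    return 0;
}
```

In the system the abstract describes, each prefetch would be an asynchronous disk request, and a process would submit several such requests in one batch so the kernel's disk scheduler can reorder them; the sketch above only prints the decisions.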
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 311-343 |
| Number of pages | 33 |
| Journal | ACM Transactions on Computer Systems |
| Volume | 14 |
| Issue number | 4 |
| DOIs | |
| State | Published - Nov 1996 |
All Science Journal Classification (ASJC) codes
- Computer Science (all)
Keywords
- C.4 [Computer Systems Organization]: Performance of Systems - Design studies
- D.4.2 [Operating Systems]: Storage Management - Secondary storage
- D.4.3 [Operating Systems]: File System Management - Access methods
- Storage hierarchies