DFS: A file system for virtualized flash storage

William K. Josephson, Lars A. Bongo, Kai Li, David Flynn

Research output: Contribution to journal › Article › peer-review

86 Scopus citations

Abstract

We present the design, implementation, and evaluation of Direct File System (DFS) for virtualized flash storage. Instead of relying on the traditional layers of storage abstraction, DFS's abstraction layers are designed for direct access to flash memory devices. DFS has two main novel features. First, it lays out its files directly in a very large virtual storage address space provided by FusionIO's virtual flash storage layer. Second, it leverages the virtual flash storage layer to perform block allocations and atomic updates. As a result, DFS performs better and is much simpler than a traditional Unix file system with similar functionality. Our microbenchmark results show that DFS can deliver 94,000 I/O operations per second (IOPS) for direct reads and 71,000 IOPS for direct writes with the virtualized flash storage layer on FusionIO's ioDrive. For direct access performance, DFS is consistently better than ext3 on the same platform, sometimes by 20%. For buffered access performance, DFS is also consistently better than ext3, sometimes by over 149%. Our application benchmarks show that DFS outperforms ext3 by 7% to 250% while requiring less CPU power.
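The abstract's core design point, laying files out directly in a very large virtual flash address space and delegating block allocation and atomic updates to the virtualized flash layer, can be illustrated with a minimal sketch in C. The extent size, constant names, and helper function below are assumptions chosen for illustration; they are not the authors' actual DFS code.

/*
 * Illustrative sketch (not the published DFS implementation): mapping a
 * (file number, byte offset) pair directly to an address in a large,
 * sparse virtual flash address space. Block allocation and atomic
 * updates are assumed to be handled by the underlying virtualized
 * flash storage layer, which maps virtual addresses to physical pages.
 */
#include <stdint.h>

/* Assumption for illustration: each file owns one fixed-size,
 * contiguous extent of the 64-bit virtual address space, so the file
 * system needs no per-block allocation maps of its own. */
#define DFS_EXTENT_BITS 32ULL                     /* hypothetical: 4 GiB per file */
#define DFS_EXTENT_SIZE (1ULL << DFS_EXTENT_BITS)

/* Translate a file number and byte offset into a virtual flash address. */
static inline uint64_t dfs_virtual_addr(uint64_t file_no, uint64_t offset)
{
    return (file_no << DFS_EXTENT_BITS) | (offset & (DFS_EXTENT_SIZE - 1));
}

With this layout, a read or write at a given file offset becomes a single I/O to a computed virtual address, which is one reason the abstract can claim a much simpler design than a traditional Unix file system.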

Original language: English (US)
Article number: 14
Journal: ACM Transactions on Storage
Volume: 6
Issue number: 3
DOIs
State: Published - Sep 2010

All Science Journal Classification (ASJC) codes

  • Hardware and Architecture

Keywords

  • Filesystem
  • Flash memory
