TY - GEN
T1 - DFS: A file system for virtualized flash storage
T2 - 8th USENIX Conference on File and Storage Technologies, FAST 2010
AU - Josephson, William K.
AU - Bongo, Lars A.
AU - Li, Kai
AU - Flynn, David
PY - 2010
Y1 - 2010
AB - This paper presents the design, implementation and evaluation of Direct File System (DFS) for virtualized flash storage. Instead of using traditional layers of abstraction, our layers of abstraction are designed for directly accessing flash memory devices. DFS has two main novel features. First, it lays out its files directly in a very large virtual storage address space provided by FusionIO's virtual flash storage layer. Second, it leverages the virtual flash storage layer to perform block allocations and atomic updates. As a result, DFS performs better and it is much simpler than a traditional Unix file system with similar functionalities. Our microbenchmark results show that DFS can deliver 94,000 I/O operations per second (IOPS) for direct reads and 71,000 IOPS for direct writes with the virtualized flash storage layer on FusionIO's ioDrive. For direct access performance, DFS is consistently better than ext3 on the same platform, sometimes by 20%. For buffered access performance, DFS is also consistently better than ext3, and sometimes by over 149%. Our application benchmarks show that DFS outperforms ext3 by 7% to 250% while requiring less CPU power.
UR - http://www.scopus.com/inward/record.url?scp=85077045323&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077045323&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85077045323
T3 - Proceedings of FAST 2010: 8th USENIX Conference on File and Storage Technologies
SP - 85
EP - 99
BT - Proceedings of FAST 2010
PB - USENIX Association
Y2 - 23 February 2010 through 26 February 2010
ER -