  • DocumentCode
    1918232
  • Title
    Poster: PLFS/HDFS: HPC Applications on Cloud Storage
  • Author
    Cranor, Chuck; Polte, Milo; Gibson, Garth
  • fYear
    2012
  • fDate
    10-16 Nov. 2012
  • Firstpage
    1410
  • Lastpage
    1410
  • Abstract
    Long-running, large-scale HPC applications protect themselves from failures by periodically checkpointing their state to a single file stored in a distributed network filesystem. These filesystems commonly provide a POSIX-style interface for reading and writing files. HDFS, the filesystem used by Apache Hadoop for cloud computing, is instead optimized for Hadoop jobs that do not require full POSIX I/O semantics: only one process may write to an HDFS file, and all writes are appends. Our work enables multiple HPC processes to checkpoint their state into a single HDFS file using PLFS, a middleware filesystem that converts random I/O into log-structured I/O. We added a new I/O store layer to PLFS that allows it to use non-POSIX filesystems such as HDFS as backing store. HPC applications can now checkpoint to HDFS, allowing HPC and cloud systems to share the same storage and work with each other's data.
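    The I/O store layer described in the abstract is, in essence, a narrow backend interface that PLFS calls in place of direct POSIX system calls. The C++ sketch below is a hypothetical illustration of that idea (the interface and class names are illustrative, not the actual PLFS API): because PLFS rewrites each process's random checkpoint writes as appends to a per-process log, a backend only needs to support open-for-append, write, and close, which maps naturally onto HDFS's single-writer, append-only model.

        // Hypothetical sketch of an "I/O store" backend interface, in the
        // spirit of the layer described in the abstract (names illustrative,
        // not taken from the PLFS source).
        #include <cstddef>
        #include <string>
        #include <sys/types.h>
        #include <fcntl.h>
        #include <unistd.h>

        // PLFS turns random checkpoint writes into appends to per-process
        // logs, so a backend only has to support append-style writes.
        class IOStore {
        public:
            virtual ~IOStore() = default;
            virtual int OpenForAppend(const std::string &path) = 0;  // handle or -1
            virtual ssize_t Append(int handle, const void *buf, size_t len) = 0;
            virtual int Close(int handle) = 0;
        };

        // Backend for an ordinary POSIX filesystem.
        class PosixStore : public IOStore {
        public:
            int OpenForAppend(const std::string &path) override {
                return ::open(path.c_str(), O_WRONLY | O_CREAT | O_APPEND, 0644);
            }
            ssize_t Append(int handle, const void *buf, size_t len) override {
                return ::write(handle, buf, len);
            }
            int Close(int handle) override { return ::close(handle); }
        };

        // An HDFS backend would implement the same three calls on top of
        // libhdfs (hdfsOpenFile, hdfsWrite, hdfsCloseFile); since PLFS only
        // ever appends to its logs, HDFS's append-only restriction is not a
        // limitation.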
  • Keywords
    middleware; parallel I/O and storage systems;
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Title
    2012 SC Companion: High Performance Computing, Networking, Storage and Analysis (SCC)
  • Conference_Location
    Salt Lake City, UT
  • Print_ISBN
    978-1-4673-6218-4
  • Type
    conf
  • DOI
    10.1109/SC.Companion.2012.223
  • Filename
    6496006