TY - GEN
T1 - SLAQ: Quality-driven scheduling for distributed machine learning
T2 - 2017 Symposium on Cloud Computing, SoCC 2017
AU - Zhang, Haoyu
AU - Stafman, Logan
AU - Or, Andrew
AU - Freedman, Michael J.
N1 - Publisher Copyright:
© 2017 Association for Computing Machinery.
PY - 2017/9/24
Y1 - 2017/9/24
N2 - Training machine learning (ML) models with large datasets can incur significant resource contention on shared clusters. This training typically involves many iterations that continually improve the quality of the model. Yet in exploratory settings, better models can be obtained faster by directing resources to jobs with the most potential for improvement. We describe SLAQ, a cluster scheduling system for approximate ML training jobs that aims to maximize the overall job quality. When allocating cluster resources, SLAQ explores the quality-runtime trade-offs across multiple jobs to maximize system-wide quality improvement. To do so, SLAQ leverages the iterative nature of ML training algorithms, by collecting quality and resource usage information from concurrent jobs, and then generating highly tailored quality-improvement predictions for future iterations. Experiments show that SLAQ achieves an average quality improvement of up to 73% and an average delay reduction of up to 44% on a large set of ML training jobs, compared to resource fairness schedulers.
AB - Training machine learning (ML) models with large datasets can incur significant resource contention on shared clusters. This training typically involves many iterations that continually improve the quality of the model. Yet in exploratory settings, better models can be obtained faster by directing resources to jobs with the most potential for improvement. We describe SLAQ, a cluster scheduling system for approximate ML training jobs that aims to maximize the overall job quality. When allocating cluster resources, SLAQ explores the quality-runtime trade-offs across multiple jobs to maximize system-wide quality improvement. To do so, SLAQ leverages the iterative nature of ML training algorithms, by collecting quality and resource usage information from concurrent jobs, and then generating highly tailored quality-improvement predictions for future iterations. Experiments show that SLAQ achieves an average quality improvement of up to 73% and an average delay reduction of up to 44% on a large set of ML training jobs, compared to resource fairness schedulers.
KW - Approximate computing
KW - Machine learning
KW - Quality
KW - Resource management
KW - Scheduling
UR - http://www.scopus.com/inward/record.url?scp=85032447493&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85032447493&partnerID=8YFLogxK
U2 - 10.1145/3127479.3127490
DO - 10.1145/3127479.3127490
M3 - Conference contribution
AN - SCOPUS:85032447493
T3 - SoCC 2017 - Proceedings of the 2017 Symposium on Cloud Computing
SP - 390
EP - 404
BT - SoCC 2017 - Proceedings of the 2017 Symposium on Cloud Computing
PB - Association for Computing Machinery, Inc
Y2 - 24 September 2017 through 27 September 2017
ER -