TY - CONF
T1 - Bamboo: Making Preemptible Instances Resilient for Affordable Training of Large DNNs
T2 - 20th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2023
AU - Thorpe, John
AU - Zhao, Pengzhan
AU - Eyolfson, Jonathan
AU - Qiao, Yifan
AU - Jia, Zhihao
AU - Zhang, Minjia
AU - Netravali, Ravi
AU - Xu, Guoqing Harry
N1 - Publisher Copyright:
© NSDI 2023. All rights reserved.
PY - 2023
Y1 - 2023
AB - DNN models across many domains continue to grow in size, resulting in high resource requirements for effective training, and unpalatable (and often unaffordable) costs for organizations and research labs across scales. This paper aims to significantly reduce training costs with effective use of preemptible instances, i.e., those that can be obtained at a much cheaper price while idle, but may be preempted whenever requested by priority users. Doing so, however, requires new forms of resiliency and efficiency to cope with the possibility of frequent preemptions - a failure model that is drastically different from the occasional failures in normal cluster settings that existing checkpointing techniques target. We present Bamboo, a distributed system that tackles these challenges by introducing redundant computations into the training pipeline, i.e., whereby one node performs computations over not only its own layers but also over some layers in its neighbor. Our key insight is that training large models often requires pipeline parallelism where “pipeline bubbles” naturally exist. Bamboo carefully fills redundant computations into these bubbles, providing resilience at a low cost. Across a variety of widely used DNN models, Bamboo outperforms traditional checkpointing by 3.7× in training throughput, and reduces costs by 2.4× compared to a setting where on-demand instances are used.
UR - http://www.scopus.com/inward/record.url?scp=85159358855&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85159358855&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85159358855
T3 - Proceedings of the 20th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2023
SP - 497
EP - 513
BT - Proceedings of the 20th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2023
PB - USENIX Association
Y2 - 17 April 2023 through 19 April 2023
ER -