TY - GEN
T1 - Gemel: Model Merging for Memory-Efficient, Real-Time Video Analytics at the Edge
T2 - 20th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2023
AU - Padmanabhan, Arthi
AU - Agarwal, Neil
AU - Iyer, Anand
AU - Ananthanarayanan, Ganesh
AU - Shu, Yuanchao
AU - Karianakis, Nikolaos
AU - Xu, Guoqing Harry
AU - Netravali, Ravi
N1 - Publisher Copyright:
© NSDI 2023. All rights reserved.
PY - 2023
Y1 - 2023
AB - Video analytics pipelines have steadily shifted to edge deployments to reduce bandwidth overheads and privacy violations, but in doing so, face an ever-growing resource tension. Most notably, edge-box GPUs lack the memory needed to concurrently house the growing number of (increasingly complex) models for real-time inference. Unfortunately, existing solutions that rely on time/space sharing of GPU resources are insufficient as the required swapping delays result in unacceptable frame drops and accuracy loss. We present model merging, a new memory management technique that exploits architectural similarities between edge vision models by judiciously sharing their layers (including weights) to reduce workload memory costs and swapping delays. Our system, Gemel, efficiently integrates merging into existing pipelines by (1) leveraging several guiding observations about per-model memory usage and inter-layer dependencies to quickly identify fruitful and accuracy-preserving merging configurations, and (2) altering edge inference schedules to maximize merging benefits. Experiments across diverse workloads reveal that Gemel reduces memory usage by up to 60.7%, and improves overall accuracy by 8-39% relative to time or space sharing alone.
UR - http://www.scopus.com/inward/record.url?scp=85147551913&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85147551913&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85147551913
T3 - Proceedings of the 20th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2023
SP - 973
EP - 994
BT - Proceedings of the 20th USENIX Symposium on Networked Systems Design and Implementation, NSDI 2023
PB - USENIX Association
Y2 - 17 April 2023 through 19 April 2023
ER -