TY - GEN
T1 - FAB: Toward flow-aware buffer sharing on programmable switches
T2 - 2019 Workshop on Buffer Sizing, BS 2019
AU - Apostolaki, Maria
AU - Vanbever, Laurent
AU - Ghobadi, Manya
N1 - Publisher Copyright:
© 2019 Association for Computing Machinery.
PY - 2019/12/2
Y1 - 2019/12/2
N2 - Conventional buffer sizing techniques consider an output port with multiple queues in isolation and provide guidelines for the size of the queue. In practice, however, switches consist of several ports that share a buffering chip. Hence, chip manufacturers, such as Broadcom, are left to devise a set of proprietary resource sharing algorithms to allocate buffers across ports. These algorithms dynamically adjust the buffer size of each output queue and directly impact the packet loss and latency of individual queues. We show that the problem of allocating buffers across ports, although less well known, is indeed responsible for fundamental inefficiencies in today's devices. In particular, the per-port buffer allocation is an ad hoc decision that (at best) depends on the remaining buffer cells on the chip rather than on the type of traffic. In this work, we advocate for a flow-aware and device-wide buffer sharing scheme (FAB), which is practical today in programmable devices. We tested FAB on two specific workloads and showed that it can improve the tail flow completion time by an order of magnitude compared to conventional buffer management techniques.
AB - Conventional buffer sizing techniques consider an output port with multiple queues in isolation and provide guidelines for the size of the queue. In practice, however, switches consist of several ports that share a buffering chip. Hence, chip manufacturers, such as Broadcom, are left to devise a set of proprietary resource sharing algorithms to allocate buffers across ports. These algorithms dynamically adjust the buffer size of each output queue and directly impact the packet loss and latency of individual queues. We show that the problem of allocating buffers across ports, although less well known, is indeed responsible for fundamental inefficiencies in today's devices. In particular, the per-port buffer allocation is an ad hoc decision that (at best) depends on the remaining buffer cells on the chip rather than on the type of traffic. In this work, we advocate for a flow-aware and device-wide buffer sharing scheme (FAB), which is practical today in programmable devices. We tested FAB on two specific workloads and showed that it can improve the tail flow completion time by an order of magnitude compared to conventional buffer management techniques.
KW - Buffer management
KW - Data center
KW - Dynamic buffer threshold
KW - Dynamic partitioning
KW - Memory utilization
KW - Programmable data plane
KW - QoS guarantees
KW - Resource allocation
KW - Shared-memory switch
UR - http://www.scopus.com/inward/record.url?scp=85079850196&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85079850196&partnerID=8YFLogxK
U2 - 10.1145/3375235.3375237
DO - 10.1145/3375235.3375237
M3 - Conference contribution
AN - SCOPUS:85079850196
T3 - ACM International Conference Proceeding Series
BT - Proceedings of the 2019 Workshop on Buffer Sizing, BS 2019
PB - Association for Computing Machinery
Y2 - 2 December 2019 through 3 December 2019
ER -