Abstract
Large online service providers (OSPs) often build private backbone networks to interconnect data centers in multiple locations. These data centers house numerous applications that produce multiple classes of traffic with diverse performance objectives. Applications in the same class may also differ in their relative importance to the OSP's core business. By controlling both the hosts and the routers, an OSP can perform both application rate control and network routing. However, centralized management of both rates and routes does not scale, due to excessive message passing between the hosts, routers, and management systems. Similarly, fully-distributed approaches scale poorly and converge slowly. To overcome these issues, we investigate two semi-centralized designs that lie at practical points along the spectrum between fully-distributed and fully-centralized solutions. We achieve scalability by distributing computation across multiple tiers of an optimization machinery. Our first design uses two tiers, representing the backbone and the traffic classes, to compute class-level link bandwidths and application sending rates. Our second design adds a third tier representing individual data centers. Using optimization, we show that both designs provably maximize the aggregate utility over all traffic classes. Simulations on realistic backbones show that the three-tier design is more scalable but converges more slowly than the two-tier design.
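For readers unfamiliar with the framework, multi-tier designs of this kind are typically cast as a network utility maximization (NUM) problem and solved by decomposition, with one tier computing class-level link bandwidth shares and another computing per-application sending rates. The sketch below is a generic weighted NUM of that form; the notation (w, U_k, x, r, c_l) is illustrative and may differ from the paper's exact model.

```latex
% Hypothetical notation (not necessarily the paper's): U_k is a concave
% utility for traffic class k, w_{k,s} the importance weight of application s
% within that class, x_{k,s} its sending rate, r_{k,s} the set of backbone
% links on its route, and c_l the capacity of link l.
\begin{aligned}
\max_{x \,\ge\, 0}\quad
  & \sum_{k \in \mathcal{K}} \sum_{s \in \mathcal{S}_k} w_{k,s}\, U_k(x_{k,s}) \\
\text{s.t.}\quad
  & \sum_{(k,s)\,:\; l \in r_{k,s}} x_{k,s} \;\le\; c_l
    \qquad \forall\, l \in \mathcal{L}
\end{aligned}
```

A plausible decomposition along these lines has the backbone tier split each link's capacity among the classes and each class tier solve its own smaller NUM within that allocation, which is what keeps the per-tier computation small; the abstract's two-tier and three-tier designs follow this general pattern, with the extra tier handling individual data centers.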
| Original language | English (US) |
|---|---|
| Article number | 6678113 |
| Pages (from-to) | 2673-2684 |
| Number of pages | 12 |
| Journal | IEEE Journal on Selected Areas in Communications |
| Volume | 31 |
| Issue number | 12 |
| DOIs | |
| State | Published - Dec 2013 |
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications
- Electrical and Electronic Engineering
Keywords
- Multiple traffic classes
- Traffic management
- Data centers
- Optimization
- Scalability