TY - JOUR
T1 - End-to-End Learning to Warm-Start for Real-Time Quadratic Optimization
AU - Sambharya, Rajiv
AU - Hall, Georgina
AU - Amos, Brandon
AU - Stellato, Bartolomeo
N1 - Funding Information:
The author(s) are pleased to acknowledge that the work reported on in this paper was substantially performed using the Princeton Research Computing resources at Princeton University, which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Office of Information Technology's Research Computing.
Publisher Copyright:
© 2023 R. Sambharya, G. Hall, B. Amos & B. Stellato.
PY - 2023
Y1 - 2023
N2 - First-order methods are widely used to solve convex quadratic programs (QPs) in real-time applications because of their low per-iteration cost. However, they can suffer from slow convergence to accurate solutions. In this paper, we present a framework which learns an effective warm-start for a popular first-order method in real-time applications, Douglas-Rachford (DR) splitting, across a family of parametric QPs. This framework consists of two modules: a feedforward neural network block, which takes as input the parameters of the QP and outputs a warm-start, and a block which performs a fixed number of iterations of DR splitting from this warm-start and outputs a candidate solution. A key feature of our framework is its ability to do end-to-end learning as we differentiate through the DR iterations. To illustrate the effectiveness of our method, we provide generalization bounds (based on Rademacher complexity) that improve with the number of training problems and the number of iterations simultaneously. We further apply our method to three real-time applications and observe that, by learning good warm-starts, we are able to significantly reduce the number of iterations required to obtain high-quality solutions.
KW - generalization bounds
KW - machine learning
KW - quadratic optimization
KW - real-time optimization
KW - warm-start
UR - http://www.scopus.com/inward/record.url?scp=85172924441&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85172924441&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85172924441
SN - 2640-3498
VL - 211
SP - 220
EP - 234
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 5th Annual Conference on Learning for Dynamics and Control, L4DC 2023
Y2 - 15 June 2023 through 16 June 2023
ER -