Abstract
First-order methods are widely used to solve convex quadratic programs (QPs) in real-time applications because of their low per-iteration cost. However, they can suffer from slow convergence to accurate solutions. In this paper, we present a framework that learns an effective warm start for Douglas-Rachford (DR) splitting, a first-order method popular in real-time applications, across a family of parametric QPs. This framework consists of two modules: a feedforward neural network block, which takes as input the parameters of the QP and outputs a warm start, and a block which performs a fixed number of iterations of DR splitting from this warm start and outputs a candidate solution. A key feature of our framework is its ability to perform end-to-end learning, as we differentiate through the DR iterations. To illustrate the effectiveness of our method, we provide generalization bounds (based on Rademacher complexity) that improve simultaneously with the number of training problems and the number of iterations. We further apply our method to three real-time applications and observe that, by learning good warm starts, we are able to significantly reduce the number of iterations required to obtain high-quality solutions.
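As a rough illustration of the second module, the sketch below runs a fixed number of DR-splitting iterations from a given warm start on a toy box-constrained QP with diagonal Hessian. It is a minimal numeric sketch, not the paper's implementation: in the paper the warm start would come from the trained network and the iterations would be differentiated through; here the warm start is hand-picked and the function names (`prox_f`, `dr_iterations`) are illustrative.

```python
# Toy QP: minimize 0.5*x'Px + q'x subject to l <= x <= u,
# with diagonal P, split as f(x) = quadratic, g = box indicator.
# DR iteration: x = prox_f(z); y = prox_g(2x - z); z = z + y - x.

P = [2.0, 1.0]      # diagonal of P
q = [-2.0, -1.0]
l, u = 0.0, 0.6     # box bounds (same for each coordinate)
t = 1.0             # step size

def prox_f(v):
    # Prox of the quadratic: (I + t*P)^{-1} (v - t*q), diagonal case.
    return [(v[i] - t * q[i]) / (1.0 + t * P[i]) for i in range(len(v))]

def prox_g(v):
    # Prox of the box indicator = projection onto [l, u].
    return [min(max(vi, l), u) for vi in v]

def dr_iterations(z, k):
    """Run k fixed DR iterations from warm start z; return candidate x."""
    for _ in range(k):
        x = prox_f(z)
        y = prox_g([2 * x[i] - z[i] for i in range(len(z))])
        z = [z[i] + y[i] - x[i] for i in range(len(z))]
    return prox_f(z)

# This separable QP has optimum clip(-q/P, l, u) = [0.6, 0.6].
cold = dr_iterations([0.0, 0.0], 5)       # generic cold start
warm = dr_iterations([-0.19, 0.21], 5)    # near the DR fixed point [-0.2, 0.2]
# With the same iteration budget, the warm-started run is far closer
# to the optimum, which is the effect the learned warm start exploits.
```

The same budget of five iterations yields a much smaller error from the warm start, which is exactly the trade-off the framework optimizes: a cheap learned prediction buys accuracy that would otherwise require many more iterations.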
| Original language | English (US) |
|---|---|
| Pages (from-to) | 220-234 |
| Number of pages | 15 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 211 |
| State | Published - 2023 |
| Event | 5th Annual Conference on Learning for Dynamics and Control, L4DC 2023 - Philadelphia, United States |
| Duration | Jun 15 2023 → Jun 16 2023 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability
Keywords
- Machine learning
- generalization bounds
- quadratic optimization
- real-time optimization
- warm-start