Abstract
Many modern computational approaches to classical problems in quantitative finance are formulated as empirical loss minimization (ERM), allowing direct applications of classical results from statistical machine learning. These methods, designed to directly construct the optimal feedback representation of hedging or investment decisions, are analyzed in this framework, demonstrating both their effectiveness and their susceptibility to generalization error. Classical techniques show that over-training renders the trained investment decisions anticipative, and prove overlearning for large hypothesis spaces. On the other hand, nonasymptotic estimates based on Rademacher complexity show convergence for sufficiently large training sets. These results emphasize the importance of synthetic data generation and the appropriate calibration of complex models to market data. A numerically studied stylized example illustrates these possibilities, including the importance of the problem dimension for the degree of overlearning, and the effectiveness of this approach.
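To make the ERM formulation concrete, the following is a minimal sketch of dynamic hedging as empirical loss minimization, assuming a PyTorch environment; the Black-Scholes path simulation, call payoff, network architecture, and sample sizes are illustrative assumptions rather than the paper's exact setup. The policy is restricted to feedback form (a function of time and spot only), so any apparent anticipativity after over-training on a small sample comes from fitting that sample, not from the architecture; the gap between in-sample and out-of-sample loss is the overlearning effect discussed in the abstract.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def simulate_paths(n_paths, n_steps=30, s0=1.0, mu=0.0, sigma=0.2, dt=1.0 / 30):
    """Simulate geometric Brownian motion paths (assumed market model)."""
    z = torch.randn(n_paths, n_steps)
    increments = (mu - 0.5 * sigma ** 2) * dt + sigma * dt ** 0.5 * z
    log_paths = torch.cumsum(increments, dim=1)
    return s0 * torch.exp(torch.cat([torch.zeros(n_paths, 1), log_paths], dim=1))

# Feedback policy: the hedge ratio depends only on the current state (time, spot),
# so the network itself is non-anticipative by construction.
policy = nn.Sequential(
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
premium = nn.Parameter(torch.zeros(1))  # learned initial capital (assumption)

def hedging_loss(paths, strike=1.0):
    """Empirical loss: mean squared replication error of a short call."""
    n_paths, n_steps = paths.shape[0], paths.shape[1] - 1
    wealth = torch.zeros(n_paths) + premium
    for t in range(n_steps):
        state = torch.stack(
            [torch.full((n_paths,), t / n_steps), paths[:, t]], dim=1
        )
        delta = policy(state).squeeze(-1)
        wealth = wealth + delta * (paths[:, t + 1] - paths[:, t])
    payoff = torch.clamp(paths[:, -1] - strike, min=0.0)
    return torch.mean((wealth - payoff) ** 2)

# Small training set (overlearning risk) versus a large synthetic test set.
train_paths = simulate_paths(256)
test_paths = simulate_paths(20_000)

opt = torch.optim.Adam(list(policy.parameters()) + [premium], lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = hedging_loss(train_paths)
    loss.backward()
    opt.step()

with torch.no_grad():
    print(f"in-sample loss : {hedging_loss(train_paths).item():.5f}")
    print(f"out-of-sample  : {hedging_loss(test_paths).item():.5f}")  # gap indicates overlearning
```

Regenerating a fresh synthetic test set, as above, is one way to detect the generalization gap; enlarging the training set shrinks it, in line with the Rademacher-complexity bounds mentioned in the abstract.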
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 116-145 |
| Number of pages | 30 |
| Journal | Mathematical Finance |
| Volume | 33 |
| Issue number | 1 |
| DOIs | |
| State | Published - Jan 2023 |
All Science Journal Classification (ASJC) codes
- Accounting
- Finance
- Social Sciences (miscellaneous)
- Economics and Econometrics
- Applied Mathematics
Keywords
- ERM
- bias-variance trade-off
- deep learning
- dynamic hedging
- overlearning