TY - GEN
T1 - Learning fast algorithms for linear transforms using butterfly factorizations
AU - Dao, Tri
AU - Gu, Albert
AU - Eichhorn, Matthew
AU - Rudra, Atri
AU - Ré, Christopher
N1 - Publisher Copyright:
© 36th International Conference on Machine Learning, ICML 2019. All rights reserved.
PY - 2019
Y1 - 2019
N2 - Fast linear transforms are ubiquitous in machine learning, including the discrete Fourier transform, discrete cosine transform, and other structured transformations such as convolutions. All of these transforms can be represented by dense matrix-vector multiplication, yet each has a specialized and highly efficient (subquadratic) algorithm. We ask to what extent hand-crafting these algorithms and implementations is necessary, what structural priors they encode, and how much knowledge is required to automatically learn a fast algorithm for a provided structured transform. Motivated by a characterization of matrices with fast matrix-vector multiplication as factoring into products of sparse matrices, we introduce a parameterization of divide-and-conquer methods that is capable of representing a large class of transforms. This generic formulation can automatically learn an efficient algorithm for many important transforms; for example, it recovers the O(N log N) Cooley-Tukey FFT algorithm to machine precision, for dimensions N up to 1024. Furthermore, our method can be incorporated as a lightweight replacement of generic matrices in machine learning pipelines to learn efficient and compressible transformations. On a standard task of compressing a single hidden-layer network, our method exceeds the classification accuracy of unconstrained matrices on CIFAR-10 by 3.9 points - the first time a structured approach has done so - with 4X faster inference speed and 40X fewer parameters.
UR - http://www.scopus.com/inward/record.url?scp=85077989082&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85077989082&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85077989082
T3 - 36th International Conference on Machine Learning, ICML 2019
SP - 2736
EP - 2754
BT - 36th International Conference on Machine Learning, ICML 2019
PB - International Machine Learning Society (IMLS)
T2 - 36th International Conference on Machine Learning, ICML 2019
Y2 - 9 June 2019 through 15 June 2019
ER -