Pseudospectral shattering, the sign function, and diagonalization in nearly matrix multiplication time

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

We exhibit a randomized algorithm which, given a square matrix $A \in \mathbb{C}^{n \times n}$ with $\|A\| \leq 1$ and $\delta > 0$, computes with high probability an invertible $V$ and diagonal $D$ such that $$\|A - VDV^{-1}\| \leq \delta$$ in $O(T_{\mathrm{MM}}(n)\log^2(n/\delta))$ arithmetic operations on a floating point machine with $O(\log^4(n/\delta)\log n)$ bits of precision. The computed similarity $V$ additionally satisfies $\|V\|\,\|V^{-1}\| \leq O(n^{2.5}/\delta)$. Here $T_{\mathrm{MM}}(n)$ is the number of arithmetic operations required to multiply two $n \times n$ complex matrices numerically stably, known to satisfy $T_{\mathrm{MM}}(n) = O(n^{\omega+\eta})$ for every $\eta > 0$, where $\omega$ is the exponent of matrix multiplication [1]. The algorithm is a variant of the spectral bisection algorithm in numerical linear algebra [2] with a crucial Gaussian perturbation preprocessing step. Our running time is optimal up to polylogarithmic factors, in the sense that verifying that a given similarity diagonalizes a matrix requires at least matrix multiplication time. It significantly improves the previously best known provable running times of $O(n^{10}/\delta^2)$ arithmetic operations for diagonalization of general matrices [3] and (with regard to the dependence on $n$) $O(n^3)$ arithmetic operations for Hermitian matrices [4], and is the first algorithm to achieve nearly matrix multiplication time for diagonalization in any model of computation (real arithmetic, rational arithmetic, or finite arithmetic). The proof rests on two new ingredients. (1) We show that adding a small complex Gaussian perturbation to any matrix splits its pseudospectrum into $n$ small well-separated components. In particular, this implies that the eigenvalues of the perturbed matrix have a large minimum gap, a property of independent interest in random matrix theory.
(2) We give a rigorous analysis of Roberts' [5] Newton iteration method for computing the sign function of a matrix in finite arithmetic, itself an open problem in numerical analysis since at least 1986 [6]. This is achieved by controlling the evolution of the pseudospectra of the iterates using a carefully chosen sequence of shrinking contour integrals in the complex plane.
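The iteration analyzed in ingredient (2) is the classical Newton iteration for the matrix sign function: starting from $X_0 = A$, repeatedly set $X_{k+1} = \tfrac{1}{2}(X_k + X_k^{-1})$, which converges to $\mathrm{sgn}(A)$ when $A$ has no purely imaginary eigenvalues. A minimal sketch in plain double precision is below; it is only an illustration of the iteration itself, not the paper's finite-arithmetic analysis or its pseudospectral stopping criteria.

```python
import numpy as np

def matrix_sign(A, iters=50):
    """Roberts' Newton iteration X <- (X + X^{-1}) / 2.

    Converges to sgn(A), the matrix with the same invariant subspaces
    as A but eigenvalues mapped to +1 (right half-plane) or -1 (left
    half-plane). Requires A to have no purely imaginary eigenvalues.
    """
    X = np.asarray(A, dtype=complex)
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

# Example: upper-triangular A with eigenvalues 3 and -2, so sgn(A)
# has eigenvalues +1 and -1 and squares to the identity.
A = np.array([[3.0, 1.0],
              [0.0, -2.0]])
S = matrix_sign(A)
assert np.allclose(S @ S, np.eye(2))   # sgn(A)^2 = I
assert np.allclose(S @ A, A @ S)       # sgn(A) commutes with A
```

In spectral bisection, $(I + S)/2$ and $(I - S)/2$ are then the spectral projectors onto the eigenvalues in the right and left half-planes, which is how the sign function drives the divide-and-conquer diagonalization.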

Original language: English (US)
Title of host publication: Proceedings - 2020 IEEE 61st Annual Symposium on Foundations of Computer Science, FOCS 2020
Publisher: IEEE Computer Society
Pages: 529-540
Number of pages: 12
ISBN (Electronic): 9781728196213
DOIs
State: Published - Nov 2020
Externally published: Yes
Event: 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020 - Virtual, Durham, United States
Duration: Nov 16 2020 - Nov 19 2020

Publication series

Name: Proceedings - Annual IEEE Symposium on Foundations of Computer Science, FOCS
Volume: 2020-November
ISSN (Print): 0272-5428

Conference

Conference: 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020
Country/Territory: United States
City: Virtual, Durham
Period: 11/16/20 - 11/19/20

All Science Journal Classification (ASJC) codes

  • General Computer Science

Keywords

  • Numerical Analysis
  • Random Matrix Theory

