TY - GEN
T1 - Dense fast random projections and lean Walsh transforms
AU - Liberty, Edo
AU - Ailon, Nir
AU - Singer, Amit
PY - 2008
Y1 - 2008
N2 - Random projection methods give distributions over k × d matrices such that if a matrix Ψ (chosen according to the distribution) is applied to a vector x ∈ ℝd, the norm of the resulting vector Ψx ∈ ℝk is equal, up to distortion ε, to the norm of x with probability at least 1 - δ. The Johnson-Lindenstrauss lemma shows that such distributions exist over dense matrices for k (the target dimension) in O(log(1/δ)/ε2). Ailon and Chazelle, and later Matousek, showed that there exist entry-wise i.i.d. distributions over sparse matrices Ψ which give the same guarantees for vectors whose ℓ∞ norm is bounded away from their ℓ2 norm. This makes it possible to accelerate the mapping x → Ψx. We claim that setting Ψ to be any column-normalized deterministic dense matrix composed with a random ±1 diagonal matrix also exhibits this property for vectors whose ℓp norm (for any p > 2) is bounded away from their ℓ2 norm. We also describe a specific tensor product matrix, which we term lean Walsh. It is applicable to any vector in ℝd in O(d) operations and requires a weaker ℓ∞ bound on x than the best current result, under comparable running times, using sparse matrices due to Matousek.
AB - Random projection methods give distributions over k × d matrices such that if a matrix Ψ (chosen according to the distribution) is applied to a vector x ∈ ℝd, the norm of the resulting vector Ψx ∈ ℝk is equal, up to distortion ε, to the norm of x with probability at least 1 - δ. The Johnson-Lindenstrauss lemma shows that such distributions exist over dense matrices for k (the target dimension) in O(log(1/δ)/ε2). Ailon and Chazelle, and later Matousek, showed that there exist entry-wise i.i.d. distributions over sparse matrices Ψ which give the same guarantees for vectors whose ℓ∞ norm is bounded away from their ℓ2 norm. This makes it possible to accelerate the mapping x → Ψx. We claim that setting Ψ to be any column-normalized deterministic dense matrix composed with a random ±1 diagonal matrix also exhibits this property for vectors whose ℓp norm (for any p > 2) is bounded away from their ℓ2 norm. We also describe a specific tensor product matrix, which we term lean Walsh. It is applicable to any vector in ℝd in O(d) operations and requires a weaker ℓ∞ bound on x than the best current result, under comparable running times, using sparse matrices due to Matousek.
KW - Dimension reduction
KW - Johnson Lindenstrauss
KW - Random projections
KW - lean Walsh transforms
UR - http://www.scopus.com/inward/record.url?scp=51849136504&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=51849136504&partnerID=8YFLogxK
U2 - 10.1007/978-3-540-85363-3_40
DO - 10.1007/978-3-540-85363-3_40
M3 - Conference contribution
AN - SCOPUS:51849136504
SN - 3540853626
SN - 9783540853626
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 512
EP - 522
BT - Approximation, Randomization and Combinatorial Optimization
T2 - 11th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2008 and 12th International Workshop on Randomization and Computation, RANDOM 2008
Y2 - 25 August 2008 through 27 August 2008
ER -
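
Illustrative note (not part of the record): a minimal Python/NumPy sketch of the construction the abstract describes, namely a column-normalized deterministic dense matrix A composed with a random ±1 diagonal matrix D, applied as Ψx = A(Dx). The dimensions k and d and the particular choice of A below are illustrative assumptions, not taken from the paper; the paper's lean Walsh transform is a specific tensor product matrix with an O(d)-time application, which this dense sketch does not attempt to reproduce.

import numpy as np

rng = np.random.default_rng(0)
d, k = 4096, 256  # hypothetical original and target dimensions

# Per the abstract's claim, any column-normalized deterministic dense matrix
# may serve as A; here a fixed matrix is generated once and its columns are
# normalized to unit Euclidean norm (an assumption for this sketch).
A = np.random.default_rng(1).standard_normal((k, d))
A /= np.linalg.norm(A, axis=0)

# Random +/-1 diagonal D, stored as a sign vector: D @ x == signs * x.
signs = rng.choice([-1.0, 1.0], size=d)

x = rng.standard_normal(d)
x /= np.linalg.norm(x)  # unit-norm input, so ||Psi x|| should concentrate near 1

y = A @ (signs * x)  # Psi x = A D x
print("distortion:", abs(np.linalg.norm(y) - 1.0))

Because the columns of A have unit norm and the diagonal signs are independent, E||A D x||2 = ||x||2 exactly; concentration around this mean is what requires the ℓp (p > 2) bound on x discussed in the abstract.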