AudioQuilt: 2D arrangements of audio samples using metric learning and kernelized sorting

Ohad Fried, Zeyu Jin, Adam Finkelstein, Reid Oda

Research output: Contribution to journal › Conference article › peer-review

12 Scopus citations

Abstract

The modern musician enjoys access to a staggering number of audio samples. Composition software can ship with many gigabytes of data, and there are many more to be found online. However, conventional methods for navigating these libraries are still quite rudimentary, and often involve scrolling through alphabetical lists. We present AudioQuilt, a system for sample exploration that allows audio clips to be sorted according to user taste, and arranged in any desired 2D formation such that similar samples are located near each other. Our method relies on two advances in machine learning. First, metric learning allows the user to shape the audio feature space to match their own preferences. Second, kernelized sorting finds an optimal arrangement for the samples in 2D. We demonstrate our system with three new interfaces for exploring audio samples, and evaluate the technology qualitatively and quantitatively via a pair of user studies.
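For orientation, the sketch below illustrates the kind of kernelized-sorting step the abstract describes: assigning N audio samples to an N-cell 2D grid so that samples with similar features land in nearby cells. This is a minimal approximation, not the paper's implementation; the RBF kernels, the bandwidths `gamma_f` and `gamma_g`, the grid layout, and the iterative linear-assignment relaxation are all illustrative assumptions, and the learned metric from the metric-learning step is assumed to be already folded into the feature vectors.

```python
# Hedged sketch of kernelized sorting for a sample "quilt":
# match the pairwise-similarity structure of audio features to the
# pairwise-proximity structure of 2D grid cells.
import numpy as np
from scipy.optimize import linear_sum_assignment

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix for the row vectors of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return np.exp(-gamma * d2)

def center(K):
    """Center a kernel matrix: H K H with H = I - (1/n) * ones."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernelized_sort(features, grid_xy, gamma_f=1.0, gamma_g=1.0, iters=50):
    """Assign each sample (row of `features`) to one grid cell (row of
    `grid_xy`) by approximately maximizing tr(Kf P Kg P^T) over permutation
    matrices P, via repeated linear assignment."""
    Kf = center(rbf_kernel(features, gamma_f))   # similarity of audio samples
    Kg = center(rbf_kernel(grid_xy, gamma_g))    # proximity of grid cells
    n = Kf.shape[0]
    perm = np.arange(n)                          # start from identity mapping
    for _ in range(iters):
        P = np.eye(n)[perm]                      # current permutation matrix
        score = Kf @ P @ Kg                      # linearized objective
        _, new_perm = linear_sum_assignment(-score)  # maximize -> negate cost
        if np.array_equal(new_perm, perm):
            break                                # fixed point reached
        perm = new_perm
    return perm                                  # perm[i] = grid cell of sample i

# Hypothetical usage: place 16 samples with 13-dim features on a 4x4 grid.
rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 13))
grid = np.array([[i, j] for i in range(4) for j in range(4)], dtype=float)
cells = kernelized_sort(feats, grid)
```

The iterative linear-assignment update is a standard relaxation here because optimizing over permutations exactly is intractable for nontrivial N; with positive semidefinite kernels each iteration does not decrease the matching objective, so the loop typically settles after a handful of steps.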

Original language: English (US)
Pages (from-to): 281-286
Number of pages: 6
Journal: Proceedings of the International Conference on New Interfaces for Musical Expression
State: Published - 2014
Event: 14th International Conference on New Interfaces for Musical Expression, NIME 2014 - London, United Kingdom
Duration: Jun 30 2014 - Jul 4 2014

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Signal Processing
  • Instrumentation
  • Music
  • Human-Computer Interaction
  • Hardware and Architecture
  • Computer Science Applications

Keywords

  • Kernelized Sorting
  • Metric Learning
  • Sample Library
  • Sound Exploration
