Online multitask learning

Ofer Dekel, Philip M. Long, Yoram Singer

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

We study the problem of online learning of multiple tasks in parallel. On each online round, the algorithm receives an instance and makes a prediction for each one of the parallel tasks. We consider the case where these tasks all contribute toward a common goal. We capture the relationship between the tasks by using a single global loss function to evaluate the quality of the multiple predictions made on each round. Specifically, each individual prediction is associated with its own individual loss, and then these loss values are combined using a global loss function. We present several families of online algorithms which can use any absolute norm as a global loss function. We prove worst-case relative loss bounds for all of our algorithms.
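The setting described in the abstract — one shared instance per round, one prediction per task, individual losses combined by an absolute norm — can be sketched in a few lines. This is an illustrative sketch only, not the authors' algorithm: the linear predictors, the hinge loss, the p-norm choice, and the simple perceptron-style update are all assumptions made for the example.

```python
import numpy as np

def multitask_round(W, x, y, lr=0.1, p=2):
    """One round of online multitask learning (illustrative sketch).

    W  : (k, d) array, one linear predictor per task (assumed setup)
    x  : (d,) shared instance received on this round
    y  : (k,) labels in {-1, +1}, one per task
    p  : the p-norm used as the global loss (one example of an absolute norm)
    """
    margins = y * (W @ x)                        # per-task margins
    losses = np.maximum(0.0, 1.0 - margins)      # individual hinge losses
    global_loss = np.linalg.norm(losses, ord=p)  # combine via an absolute norm
    # A simple perceptron-style update on every task that suffered loss;
    # a stand-in for the paper's norm-driven update families.
    for j in range(len(y)):
        if losses[j] > 0:
            W[j] += lr * y[j] * x
    return W, global_loss
```

Running this round-by-round on a stream of `(x, y)` pairs mimics the protocol: predictions are made jointly, but only the single scalar `global_loss` measures the quality of the round.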

Original language: English (US)
Title of host publication: Learning Theory - 19th Annual Conference on Learning Theory, COLT 2006, Proceedings
Publisher: Springer Verlag
Pages: 453-467
Number of pages: 15
ISBN (Print): 3540352945, 9783540352945
DOIs
State: Published - 2006
Event: 19th Annual Conference on Learning Theory, COLT 2006 - Pittsburgh, PA, United States
Duration: Jun 22 2006 - Jun 25 2006

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 4005 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Other

Other: 19th Annual Conference on Learning Theory, COLT 2006
Country/Territory: United States
City: Pittsburgh, PA
Period: 6/22/06 - 6/25/06

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • General Computer Science

