TY - CONF
T1 - Self-paced multi-task learning
AU - Li, Changsheng
AU - Yan, Junchi
AU - Wei, Fan
AU - Dong, Weishan
AU - Liu, Qingshan
AU - Zha, Hongyuan
N1 - Funding Information:
The work was supported by the IBM Shared Unison Research Program 2015-2016, the Natural Science Foundation of China (Grant Nos. 61532009, 61602176, and 61672231), a China Postdoctoral Science Foundation funded project (Grant No. 2016M590337), funding from Jiangsu Province (Grant No. 15KJA520001), and the NSF (IIS-1639792, DMS-1620345). We sincerely thank Dr. Xiangfeng Wang for his valuable suggestions for improving this work.
Publisher Copyright:
Copyright © 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2017
Y1 - 2017
N2 - Multi-task learning is a paradigm in which multiple tasks are learned jointly. Previous multi-task learning models usually treat all tasks, and all instances per task, equally during learning. Inspired by the fact that humans often learn from easy concepts to hard ones during cognition, in this paper we propose a novel multi-task learning framework that learns the tasks while simultaneously taking into account the complexities of both tasks and instances per task. The formulation rests on a new task-oriented regularizer that jointly prioritizes tasks and instances, so the model can be interpreted as a self-paced learner for multi-task learning. An efficient block coordinate descent algorithm is developed to solve the proposed objective function, and its convergence is guaranteed. Experimental results on toy and real-world datasets demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods.
AB - Multi-task learning is a paradigm in which multiple tasks are learned jointly. Previous multi-task learning models usually treat all tasks, and all instances per task, equally during learning. Inspired by the fact that humans often learn from easy concepts to hard ones during cognition, in this paper we propose a novel multi-task learning framework that learns the tasks while simultaneously taking into account the complexities of both tasks and instances per task. The formulation rests on a new task-oriented regularizer that jointly prioritizes tasks and instances, so the model can be interpreted as a self-paced learner for multi-task learning. An efficient block coordinate descent algorithm is developed to solve the proposed objective function, and its convergence is guaranteed. Experimental results on toy and real-world datasets demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods.
UR - http://www.scopus.com/inward/record.url?scp=85026778902&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85026778902&partnerID=8YFLogxK
M3 - Paper
AN - SCOPUS:85026778902
SP - 2175
EP - 2181
T2 - 31st AAAI Conference on Artificial Intelligence, AAAI 2017
Y2 - 4 February 2017 through 10 February 2017
ER -