Abstract
We consider extractive summarization within a cluster of related texts (multi-document summarization). Unlike single-document summarization, redundancy is a central concern, because sentences in related documents often convey overlapping information. Sentence extraction in this setting is therefore difficult: one must determine which pieces of information are relevant while avoiding unnecessary repetition. To address this problem, we propose PoBRL (**Po**licy **B**lending with maximal marginal relevance and **R**einforcement **L**earning), a novel reinforcement-learning-based method for multi-document summarization. PoBRL jointly optimizes the objectives necessary for a high-quality summary: importance, relevance, and length. Our strategy decouples this multi-objective optimization into sub-problems that can be solved individually by reinforcement learning. Utilizing PoBRL, we then blend the learned policies to produce a summary that is a concise and complete representation of the original input. Our empirical analysis shows strong performance on several multi-document datasets, and human evaluation confirms that our method produces high-quality output.
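For readers unfamiliar with the maximal marginal relevance (MMR) criterion named in the method's title, the sketch below shows generic greedy MMR sentence selection: each step picks the sentence that balances relevance against redundancy with respect to already-selected sentences. This is only an illustration of the standard MMR idea; the `relevance` scores, the `similarity` function, and the `lambda_` trade-off are placeholder assumptions and do not reproduce the paper's actual PoBRL policies or their blending.

```python
from typing import Callable, List, Sequence


def mmr_select(
    sentences: Sequence[str],
    relevance: Sequence[float],               # per-sentence importance scores (placeholder)
    similarity: Callable[[str, str], float],  # pairwise redundancy measure (placeholder)
    lambda_: float = 0.7,                     # trade-off: relevance vs. redundancy
    max_sentences: int = 3,
) -> List[str]:
    """Greedy maximal marginal relevance selection.

    At each step, choose the unselected sentence maximizing
    lambda_ * relevance[i] - (1 - lambda_) * max similarity to the selected set.
    """
    selected: List[int] = []
    candidates = list(range(len(sentences)))
    while candidates and len(selected) < max_sentences:
        def mmr_score(i: int) -> float:
            redundancy = max(
                (similarity(sentences[i], sentences[j]) for j in selected),
                default=0.0,
            )
            return lambda_ * relevance[i] - (1.0 - lambda_) * redundancy

        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in selected]


# Example usage with a crude token-overlap (Jaccard) similarity, purely illustrative.
def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / max(len(sa | sb), 1)
```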
| Original language | English (US) |
|---|---|
| Pages (from-to) | 1-13 |
| Number of pages | 13 |
| Journal | IEEE Transactions on Artificial Intelligence |
| DOIs | |
| State | Accepted/In press - 2022 |
All Science Journal Classification (ASJC) codes
- Artificial Intelligence
- Computer Science Applications
Keywords
- Artificial intelligence
- Data mining
- Deep learning
- Deep reinforcement learning
- Document summarization
- Electronic mail
- Iterative algorithms
- Machine learning
- Natural language processing
- Optimization
- Redundancy
- Reinforcement learning