CoQA: A Conversational Question Answering Challenge

Siva Reddy, Danqi Chen, Christopher D. Manning

Research output: Contribution to journal › Article › peer-review

659 Scopus citations

Abstract

Humans gather information through conversations involving a series of interconnected questions and answers. For machines to assist in information gathering, it is therefore essential to enable them to answer conversational questions. We introduce CoQA, a novel dataset for building Conversational Question Answering systems. Our dataset contains 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains. The questions are conversational, and the answers are free-form text with their corresponding evidence highlighted in the passage. We analyze CoQA in depth and show that conversational questions have challenging phenomena not present in existing reading comprehension datasets (e.g., coreference and pragmatic reasoning). We evaluate strong dialogue and reading comprehension models on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating that there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github.io/coqa.
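The F1 scores quoted above compare system answers to human answers by word overlap, in the style of SQuAD-like reading comprehension evaluation. As a rough illustration (a minimal sketch, not the official CoQA evaluation script, which also normalizes punctuation and articles and averages over multiple references):

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Word-overlap F1 between a predicted answer and a gold answer.

    Simplified sketch of SQuAD/CoQA-style scoring: lowercase,
    whitespace-tokenize, and compare bags of tokens.
    """
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # If either side is empty, score 1.0 only when both are empty.
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    # Multiset intersection counts each shared token at most
    # as often as it appears on both sides.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Partial overlap: precision 2/3, recall 1 -> F1 = 0.8
print(round(token_f1("in the garden", "the garden"), 2))
```

A corpus-level score like the 65.4% above would then be an average of such per-question scores over the whole dataset.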

Original language: English (US)
Pages (from-to): 249-266
Number of pages: 18
Journal: Transactions of the Association for Computational Linguistics
Volume: 7
State: Published - May 1 2019
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Communication
  • Human-Computer Interaction
  • Linguistics and Language
  • Computer Science Applications
  • Artificial Intelligence
