Question Answering

Theme leads

Bang Liu & Jian-Yun Nie

Researchers involved

Yacine Benahmed, Jackie Cheung, Samira Ebrahimi-Kahou, Philippe Langlais, Siva Reddy, Jian Tang




  • Question answering from texts and knowledge graphs

  • Passage retrieval and machine reading comprehension

  • Community QA

  • Question understanding, intent detection

  • Multimodal and multilingual QA


Search engines (or information retrieval, IR, systems) are part of our everyday life, allowing us to find information quickly in a large repository of texts such as the Web. However, they are limited to locating the documents that may contain the relevant information. In many cases, users want the precise answer to a specific question. Question answering (QA) goes a step further to fulfill this need. Compared to IR, QA requires a deeper understanding of the user's question and of document contents, as well as a more intelligent match between the question and the potential answers. As with human question answering, understanding the question and document contents requires capturing fine-grained semantics and exploiting all the available contextual information, while more intelligent matching requires inference based on the available knowledge. This raises several key questions that have not yet been well answered in the literature: how should questions and documents be represented, and how should inference be performed to determine the answers?
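To make the contrast with IR concrete, a common QA architecture first retrieves candidate documents and then "reads" them to extract an answer. The sketch below is a deliberately minimal illustration of this retrieve-then-read pattern, not any specific system: simple lexical overlap stands in for the learned question and document representations discussed above, and sentence selection stands in for machine reading comprehension.

```python
# Minimal retrieve-then-read QA sketch (illustrative only).
# Lexical word overlap is a crude stand-in for learned semantic matching.

def tokens(text):
    """Lowercase bag of words, with trivial punctuation stripping."""
    return set(text.lower().replace("?", "").replace(".", "").split())

def answer(question, documents):
    q = tokens(question)
    # Retrieval step: rank documents by word overlap with the question.
    best_doc = max(documents, key=lambda d: len(q & tokens(d)))
    # "Reading" step: pick the sentence in that document that best
    # matches the question, as a toy form of answer extraction.
    sentences = [s.strip() for s in best_doc.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & tokens(s)))

docs = [
    "Paris is the capital of France. It lies on the Seine.",
    "Berlin is the capital of Germany. It lies on the Spree.",
]
print(answer("What is the capital of France?", docs))
# → Paris is the capital of France
```

A real system would replace both steps with learned components (dense retrievers and neural readers), but the two-stage structure is the same.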

QA can be implemented over different sources: a knowledge graph, a large collection of texts, or a repository of answered questions (e.g., community QA). While the form of inference differs across these settings, they share a common need for deeper, fine-grained representations and enhanced reasoning capabilities in question-answer matching.