Open-Domain Question Answering (Q/A) systems return a textual expression, identified from a vast document collection, as a response to a question asked in natural language. In the quest to produce accurate answers, the open-domain Q/A problem has been cast as: (1) a pipeline of linguistic processes pertaining to the processing of questions, relevant passages, and candidate answers, interconnected by several types of lexico-semantic feedback; (2) a combination of language processes that transform questions and candidate answers into logical representations, such that reasoning systems can select the correct answer based on their proofs; (3) a noisy-channel model that selects the most likely answer to a question; or (4) a constraint satisfaction problem, in which sets of auxiliary questions provide additional information and better constrain the answers to individual questions. While different in approach, each of these frameworks seeks to approximate the forms of semantic inference that allow it to identify valid textual answers to natural language questions.
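The noisy-channel framing in (3) can be sketched concretely: the system selects the candidate answer a that maximizes P(q | a) * P(a), i.e. the answer most likely to have "generated" the observed question under the channel model. The function name and toy probabilities below are illustrative assumptions, not taken from any particular system discussed here.

```python
import math

def select_answer(question_likelihoods, answer_priors):
    """Return the candidate answer a maximizing P(q|a) * P(a), in log space."""
    return max(
        answer_priors,
        key=lambda a: math.log(question_likelihoods[a]) + math.log(answer_priors[a]),
    )

# Toy example: two candidate answers for a question such as "Who wrote Hamlet?"
likelihoods = {"Shakespeare": 0.8, "Marlowe": 0.3}  # P(q | a): channel model scores
priors = {"Shakespeare": 0.6, "Marlowe": 0.4}       # P(a): answer prior

best = select_answer(likelihoods, priors)  # selects "Shakespeare"
```

In a real system, P(q | a) would come from a trained alignment or translation model over question/answer pairs, and P(a) from a language model or answer-type prior; the max over candidates is unchanged.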
Recently, the task of automatically recognizing one form of semantic inference, textual entailment, has received much attention from groups participating in the PASCAL Recognizing Textual Entailment (RTE) Challenges. In this talk, I discuss three different methods for incorporating textual entailment systems into the traditional Q/A architecture employed by many current systems.