Patent application number | Description | Published |
20120078873 | USING ONTOLOGICAL INFORMATION IN OPEN DOMAIN TYPE COERCION - A computer-implemented system, method and program product generates answers to questions in an input query text string. The method includes determining, by a programmed processor unit, a lexical answer type (LAT) string associated with an input query; automatically obtaining a candidate answer string to the input query from a data corpus; mapping the query LAT string to a first type string in a structured resource; mapping the candidate answer string to a second type string in the structured resource; determining whether the first type string and the second type string are disjoint; and scoring the candidate answer string based on that determination, wherein the structured resource includes a semantic database providing ontological content. | 03-29-2012 |
20120078888 | PROVIDING ANSWERS TO QUESTIONS USING LOGICAL SYNTHESIS OF CANDIDATE ANSWERS - A method, system and computer program product for generating answers to questions. In one embodiment, the method comprises receiving an input query, decomposing the input query into a plurality of different subqueries, and conducting a search in one or more data sources to identify at least one candidate answer to each of the subqueries. A ranking function is applied to each of the candidate answers to determine a ranking for each of these candidate answers; and for each of the subqueries, one of the candidate answers to the subquery is selected based on this ranking. A logical synthesis component is applied to synthesize a candidate answer for the input query from the selected candidate answers to the subqueries. In one embodiment, the procedure applied by the logical synthesis component to synthesize the candidate answer for the input query is determined from the input query. | 03-29-2012 |
20120078889 | PROVIDING ANSWERS TO QUESTIONS USING HYPOTHESIS PRUNING - A method, system and computer program product for generating answers to questions. In one embodiment, the method comprises receiving a query, conducting a search through one or more data sources to identify candidate answers to the query, and providing each of the candidate answers with a preliminary score. The method further comprises filtering out any of the candidate answers with a preliminary score that does not satisfy a defined condition. The candidate answers having preliminary scores that satisfy this condition form a subset of the candidate answers. Each of the candidate answers in this subset is processed to produce further scores. A ranking function is applied to these further scores to determine a ranking for each of the candidate answers in the subset; and after this ranking function is applied, one or more of the candidate answers are selected as one or more final answers to the query. | 03-29-2012 |
20120078891 | PROVIDING ANSWERS TO QUESTIONS USING MULTIPLE MODELS TO SCORE CANDIDATE ANSWERS - A method, system and computer program product for generating answers to questions. In one embodiment, the method comprises receiving an input query; conducting a search to identify candidate answers to the input query, and producing a plurality of scores for each of the candidate answers. For each of the candidate answers, one of a plurality of candidate ranking functions is selected. The selected ranking function is applied to that candidate answer to determine a ranking for it based on its scores. One or more of the candidate answers is selected, based on the rankings for the candidate answers, as one or more answers to the input query. In an embodiment, the ranking function selection is performed using information about the question. In an embodiment, the ranking function selection is performed using information about each answer. | 03-29-2012 |
20120078902 | PROVIDING QUESTION AND ANSWERS WITH DEFERRED TYPE EVALUATION USING TEXT WITH LIMITED STRUCTURE - A system, method and computer program product for conducting questions and answers with deferred type evaluation based on any corpus of data. The method includes processing a query, including waiting until a “Type” (i.e., a descriptor) is determined and a candidate answer is provided. Then, a search is conducted for evidence that the candidate answer has the required Lexical Answer Type (e.g., as determined by a matching function that can leverage a parser, a semantic interpreter and/or a simple pattern matcher). Prior to or during candidate answer evaluation, a process is provided for extracting and storing collections of entity-type pairs from semi-structured text documents. During QA processing and candidate answer scoring, a process is implemented to match the query LAT against the lexical type of each provided candidate answer and generate a score judging the degree of match. | 03-29-2012 |
20120084293 | PROVIDING ANSWERS TO QUESTIONS INCLUDING ASSEMBLING ANSWERS FROM MULTIPLE DOCUMENT SEGMENTS - A method, system and computer program product for generating answers to questions. In one embodiment, the method comprises receiving an input query, identifying a plurality of candidate answers to the query, and for at least one of these candidate answers, identifying at least one proof of the answer. This proof includes a series of premises, and a multitude of documents are identified that include references to the premises. A set of these documents is selected that include references to all of the premises. This set of documents is used to generate one or more scores for that candidate answer. A defined procedure is applied to the candidate answers to determine a ranking for the answers, and this includes using the one or more scores for that candidate answer in the defined procedure to determine its ranking. | 04-05-2012 |
20120131016 | EVIDENCE PROFILING - Evidence profiling, in one aspect, may receive a candidate answer and supporting pieces of evidence. An evidence profile may be generated, the evidence profile communicating a degree to which the evidence supports the candidate answer as being correct. The evidence profile may provide dimensions of evidence, and each dimension may support or refute the candidate answer as being correct. | 05-24-2012 |
20120330921 | USING ONTOLOGICAL INFORMATION IN OPEN DOMAIN TYPE COERCION - A computer-implemented system, method and program product generates answers to questions in an input query text string. The method includes determining, by a programmed processor unit, a lexical answer type (LAT) string associated with an input query; automatically obtaining a candidate answer string to the input query from a data corpus; mapping the query LAT string to a first type string in a structured resource; mapping the candidate answer string to a second type string in the structured resource; determining whether the first type string and the second type string are disjoint; and scoring the candidate answer string based on that determination, wherein the structured resource includes a semantic database providing ontological content. | 12-27-2012 |
20120330934 | PROVIDING QUESTION AND ANSWERS WITH DEFERRED TYPE EVALUATION USING TEXT WITH LIMITED STRUCTURE - A system, method and computer program product for conducting questions and answers with deferred type evaluation based on any corpus of data. The method includes processing a query, including waiting until a “Type” (i.e., a descriptor) is determined and a candidate answer is provided. Then, a search is conducted for evidence that the candidate answer has the required Lexical Answer Type (e.g., as determined by a matching function that can leverage a parser, a semantic interpreter and/or a simple pattern matcher). Prior to or during candidate answer evaluation, a process is provided for extracting and storing collections of entity-type pairs from semi-structured text documents. During QA processing and candidate answer scoring, a process is implemented to match the query LAT against the lexical type of each provided candidate answer and generate a score judging the degree of match. | 12-27-2012 |
20130006641 | PROVIDING ANSWERS TO QUESTIONS USING LOGICAL SYNTHESIS OF CANDIDATE ANSWERS - A method, system and computer program product for generating answers to questions. In one embodiment, the method comprises receiving an input query, decomposing the input query into a plurality of different subqueries, and conducting a search in one or more data sources to identify at least one candidate answer to each of the subqueries. A ranking function is applied to each of the candidate answers to determine a ranking for each of these candidate answers; and for each of the subqueries, one of the candidate answers to the subquery is selected based on this ranking. A logical synthesis component is applied to synthesize a candidate answer for the input query from the selected candidate answers to the subqueries. In one embodiment, the procedure applied by the logical synthesis component to synthesize the candidate answer for the input query is determined from the input query. | 01-03-2013 |
20130007055 | PROVIDING ANSWERS TO QUESTIONS USING MULTIPLE MODELS TO SCORE CANDIDATE ANSWERS - A method, system and computer program product for generating answers to questions. In one embodiment, the method comprises receiving an input query; conducting a search to identify candidate answers to the input query, and producing a plurality of scores for each of the candidate answers. For each of the candidate answers, one of a plurality of candidate ranking functions is selected. The selected ranking function is applied to that candidate answer to determine a ranking for it based on its scores. One or more of the candidate answers is selected, based on the rankings for the candidate answers, as one or more answers to the input query. In an embodiment, the ranking function selection is performed using information about the question. In an embodiment, the ranking function selection is performed using information about each answer. | 01-03-2013 |
20130013547 | EVIDENCE PROFILING - Evidence profiling, in one aspect, may receive a candidate answer and supporting pieces of evidence. An evidence profile may be generated, the evidence profile communicating a degree to which the evidence supports the candidate answer as being correct. The evidence profile may provide dimensions of evidence, and each dimension may support or refute the candidate answer as being correct. | 01-10-2013 |
20130013615 | PROVIDING ANSWERS TO QUESTIONS INCLUDING ASSEMBLING ANSWERS FROM MULTIPLE DOCUMENT SEGMENTS - A method, system and computer program product for generating answers to questions. In one embodiment, the method comprises receiving an input query, identifying a plurality of candidate answers to the query, and for at least one of these candidate answers, identifying at least one proof of the answer. This proof includes a series of premises, and a multitude of documents are identified that include references to the premises. A set of these documents is selected that include references to all of the premises. This set of documents is used to generate one or more scores for that candidate answer. A defined procedure is applied to the candidate answers to determine a ranking for the answers, and this includes using the one or more scores for that candidate answer in the defined procedure to determine its ranking. | 01-10-2013 |
20130017523 | UTILIZING FAILURES IN QUESTION AND ANSWER SYSTEM RESPONSES TO ENHANCE THE ACCURACY OF QUESTION AND ANSWER SYSTEMS - A method of enhancing the accuracy of a question-answer system. Missing information from a corpus of data is identified. The missing information is any information that improves a confidence for a candidate answer to a question. A follow-on inquiry is generated. The follow-on inquiry prompts for the missing information to be provided. The follow-on inquiry is output to an external source. A response to the follow-on inquiry is received from the external source. The response is added to the corpus of data. | 01-17-2013 |
20130017524 | UTILIZING FAILURES IN QUESTION AND ANSWER SYSTEM RESPONSES TO ENHANCE THE ACCURACY OF QUESTION AND ANSWER SYSTEMS - A computerized device for enhancing the accuracy of a question-answer system. The computerized device comprises a question-answer system comprising software for performing a plurality of question answering processes. A receiver receives a question into the question-answer system. A processor that generates a plurality of candidate answers to the question is connected to the question-answer system. The processor determines a confidence score for each of the plurality of candidate answers. The processor evaluates sources of evidence used to generate the plurality of candidate answers. The processor identifies missing information from a corpus of data. The missing information comprises any information that improves a confidence score for a candidate answer. The processor generates at least one follow-on inquiry based on the missing information. A network interface outputs the at least one follow-on inquiry to external sources separate from the question-answer system. | 01-17-2013 |
20130018876 | PROVIDING ANSWERS TO QUESTIONS USING HYPOTHESIS PRUNING - A method, system and computer program product for generating answers to questions. In one embodiment, the method comprises receiving a query, conducting a search through one or more data sources to identify candidate answers to the query, and providing each of the candidate answers with a preliminary score. The method further comprises filtering out any of the candidate answers with a preliminary score that does not satisfy a defined condition. The candidate answers having preliminary scores that satisfy this condition form a subset of the candidate answers. Each of the candidate answers in this subset is processed to produce further scores. A ranking function is applied to these further scores to determine a ranking for each of the candidate answers in the subset; and after this ranking function is applied, one or more of the candidate answers are selected as one or more final answers to the query. | 01-17-2013 |
20130019285 | VALIDATING THAT A USER IS HUMAN - A method of validating that a user is human. A user question is generated using a computerized device. The user question is output to a user. A user response to the user question is received from the user. The user response is validated as having been provided by a human. | 01-17-2013 |
20130019286 | VALIDATING THAT A USER IS HUMAN - A method of validating that a user is human. A user question is generated using a computerized device. The user question is output to a user. A user response to the user question is received from the user. The user response is validated as having been provided by a human. | 01-17-2013 |
20140072947 | GENERATING SECONDARY QUESTIONS IN AN INTROSPECTIVE QUESTION ANSWERING SYSTEM - A method of generating secondary questions in a question-answer system. Missing information is identified from a corpus of data using a computerized device. The missing information comprises any information that improves confidence scores for candidate answers to a question. The computerized device automatically generates a plurality of hypotheses concerning the missing information. The computerized device automatically generates at least one secondary question based on each of the plurality of hypotheses. The hypotheses are ranked based on relative utility to determine an order in which the computerized device outputs the at least one secondary question to external sources to obtain responses. | 03-13-2014 |
20140072948 | GENERATING SECONDARY QUESTIONS IN AN INTROSPECTIVE QUESTION ANSWERING SYSTEM - A method of generating secondary questions in a question-answer system. Missing information is identified from a corpus of data using a computerized device. The missing information comprises any information that improves confidence scores for candidate answers to a question. The computerized device automatically generates a plurality of hypotheses concerning the missing information. The computerized device automatically generates at least one secondary question based on each of the plurality of hypotheses. The hypotheses are ranked based on relative utility to determine an order in which the computerized device outputs the at least one secondary question to external sources to obtain responses. | 03-13-2014 |
20140141399 | MULTI-DIMENSIONAL FEATURE MERGING FOR OPEN DOMAIN QUESTION ANSWERING - Methods/systems receive a question and automatically search sources of data containing passages to produce candidate answers to the question. The searching identifies passages that support each of the candidate answers based on scoring features that indicate whether the candidate answers are correct answers to the question. These methods/systems automatically create a scoring feature-specific matrix for each scoring feature. Each scoring feature-specific matrix has a score field for each different combination of text passage and question term (vector), and each score field holds a score value (vector value) indicating how each different combination of text passage and question term supports a candidate answer as being a correct answer to the question. Next, such methods/systems automatically combine multiple such vectors to produce a combined vector score for each of the candidate answers, and then rank the candidate answers based on the combined scores. | 05-22-2014 |
20140141401 | MULTI-DIMENSIONAL FEATURE MERGING FOR OPEN DOMAIN QUESTION ANSWERING - Methods/systems receive a question and automatically search sources of data containing passages to produce candidate answers to the question. The searching identifies passages that support each of the candidate answers based on scoring features that indicate whether the candidate answers are correct answers to the question. These methods/systems automatically create a scoring feature-specific matrix for each scoring feature. Each scoring feature-specific matrix has a score field for each different combination of text passage and question term (vector), and each score field holds a score value (vector value) indicating how each different combination of text passage and question term supports a candidate answer as being a correct answer to the question. Next, such methods/systems automatically combine multiple such vectors to produce a combined vector score for each of the candidate answers, and then rank the candidate answers based on the combined scores. | 05-22-2014 |
20140272904 | COMBINING DIFFERENT TYPE COERCION COMPONENTS FOR DEFERRED TYPE EVALUATION - In a method of answering questions, a question is received, a question LAT is determined, and a candidate answer to the question is identified. Preliminary types for the candidate answer are determined using first components, each of which produces a preliminary type using a different method. A first type-score representing a degree of match between the preliminary type and the question LAT is produced. Each preliminary type and each first type-score is evaluated using second components. Each of the second components produces a second score based on a combination of the first type-score and a measure of the degree to which the preliminary type matches the question LAT. The second components use different methods to produce the second score. A final score representing a degree of confidence that the candidate answer matches the question LAT is calculated based on the second score. | 09-18-2014 |
20140337329 | PROVIDING ANSWERS TO QUESTIONS USING MULTIPLE MODELS TO SCORE CANDIDATE ANSWERS - A method, system and computer program product for generating answers to questions. In one embodiment, the method comprises receiving an input query; conducting a search to identify candidate answers to the input query, and producing a plurality of scores for each of the candidate answers. For each of the candidate answers, one of a plurality of candidate ranking functions is selected. The selected ranking function is applied to that candidate answer to determine a ranking for it based on its scores. One or more of the candidate answers is selected, based on the rankings for the candidate answers, as one or more answers to the input query. In an embodiment, the ranking function selection is performed using information about the question. In an embodiment, the ranking function selection is performed using information about each answer. | 11-13-2014 |
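Several of the abstracts above (e.g., 20120078889 on hypothesis pruning) describe the same candidate-answer pipeline: produce a preliminary score for every candidate, filter out candidates that fail a defined condition, compute further scores only for the surviving subset, then apply a ranking function to pick final answers. The sketch below is a minimal, hypothetical illustration of that flow; all names (`Candidate`, `prune_and_rank`, the threshold and scorer functions) are illustrative and do not come from the patents themselves.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Candidate:
    text: str
    preliminary: float = 0.0          # cheap first-pass score
    further: List[float] = field(default_factory=list)  # deeper scores, filled after pruning

def prune_and_rank(
    candidates: List[Candidate],
    threshold: float,
    deep_scorers: List[Callable[[Candidate], float]],
    rank: Callable[[Candidate], float],
    top_k: int = 1,
) -> List[Candidate]:
    # Filter out candidates whose preliminary score fails the defined condition.
    subset = [c for c in candidates if c.preliminary >= threshold]
    # Produce further scores only for the surviving subset -- the cost saving
    # that motivates pruning in the first place.
    for c in subset:
        c.further = [score(c) for score in deep_scorers]
    # Apply the ranking function to the further scores and select final answers.
    subset.sort(key=rank, reverse=True)
    return subset[:top_k]

if __name__ == "__main__":
    cands = [
        Candidate("Toronto", preliminary=0.2),   # pruned: fails the 0.5 threshold
        Candidate("Chicago", preliminary=0.9),
        Candidate("Boston", preliminary=0.7),
    ]
    best = prune_and_rank(
        cands,
        threshold=0.5,
        deep_scorers=[lambda c: len(c.text) / 10.0, lambda c: c.preliminary],
        rank=lambda c: sum(c.further),
    )
    print(best[0].text)  # prints "Chicago" (1.6 vs Boston's 1.3)
```

The multiple-model variants (20120078891, 20130007055, 20140337329) differ mainly in that `rank` would be chosen per candidate from a pool of ranking functions, using information about the question or the answer, rather than being fixed for the whole subset.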