Patent application number | Description | Published |
20100070276 | METHOD AND APPARATUS FOR INTERACTION OR DISCOURSE ANALYTICS - A method and apparatus for analyzing and segmenting a vocal interaction captured in a test audio source, the test audio source captured within an environment. The method and apparatus first use text and acoustic features extracted from the interaction, together with tagging information, to construct a model. Then, at production time, text and acoustic features are extracted from the interactions and, by applying the model, tagging information is retrieved for each interaction, enabling analysis, flow visualization or further processing of the interaction. | 03-18-2010 |
20100088323 | METHOD AND APPARATUS FOR VISUALIZATION OF INTERACTION CATEGORIZATION - A method and apparatus for visualization of call categorization, comprising steps and components for: defining or receiving a definition for one or more categories and criteria for each category; receiving or capturing interactions; categorizing the interactions into the categories; determining relations between the categories; determining layout for the categories and relations; and visualizing the layout. The method and apparatus can further comprise steps and components for extracting key-phrases, determining connections between key-phrases, connections between categories based on key-phrases, and connections between categories and key-phrases, and visualizing the categories, key-phrases and connections. The method and apparatus can further comprise steps and components for training models upon which the relations between categories and relations between key-phrases are determined. | 04-08-2010 |
20100106499 | METHODS AND APPARATUS FOR LANGUAGE IDENTIFICATION - In a multi-lingual environment, a method and apparatus for determining a language spoken in a speech utterance. The method and apparatus test acoustic feature vectors extracted from the utterances against acoustic models associated with one or more of the languages. Speech to text is then performed for the language indicated by the acoustic testing, followed by textual verification of the resulting text. During verification, the resulting text is processed by language specific NLP and verified against textual models associated with the language. The system is self-learning, i.e., once a language is verified or rejected, the relevant feature vectors are used for enhancing one or more acoustic models associated with one or more languages, so that acoustic determination may improve. | 04-29-2010 |
20100161604 | APPARATUS AND METHOD FOR MULTIMEDIA CONTENT BASED MANIPULATION - An apparatus and methods for generating an ontology for a domain based on analysis performed on interactions captured in the domain. The analysis provides groups of concepts, appearing in or retrieved from the interactions, which are used as topics or concepts in the ontology. Concepts belonging to one group are indicated as connected within the ontology. The ontology can then be used in analyzing further interactions and providing meaning, content and relationships between concepts. | 06-24-2010 |
20100246799 | METHODS AND APPARATUS FOR DEEP INTERACTION ANALYSIS - A method and apparatus for automatically sectioning an interaction into sections, in order to get more insight into interactions. The method and apparatus include training, in which a model is generated from training interactions and available tagging information, and run-time, in which the model is used for sectioning further interactions. The method and apparatus operate on context units within the interaction, wherein each context unit is characterized by a feature vector related to textual, acoustic or other characteristics of the context unit. | 09-30-2010 |
20110208522 | METHOD AND APPARATUS FOR DETECTION OF SENTIMENT IN AUTOMATED TRANSCRIPTIONS - A method and apparatus for automatically detecting sentiment in interactions. The method and apparatus include training, in which a model is generated from features extracted from training interactions and tagging information, and run-time, in which the model is used for detecting sentiment in further interactions. | 08-25-2011 |
20120209605 | METHOD AND APPARATUS FOR DATA EXPLORATION OF INTERACTIONS - Retrieving data from audio interactions associated with an organization. Retrieving the data comprises: receiving a corpus containing interactions; performing natural language processing on a text document representing an interaction from the corpus; extracting at least one keyphrase from the text document; assigning a rank to the at least one keyphrase; modeling relations between at least two keyphrases using the rank; and identifying topics relevant for the organization from the relations. | 08-16-2012 |
20120209606 | METHOD AND APPARATUS FOR INFORMATION EXTRACTION FROM INTERACTIONS - Obtaining information from audio interactions associated with an organization. The information may comprise entities, relations or events. The method comprises: receiving a corpus comprising audio interactions; performing audio analysis on audio interactions of the corpus to obtain text documents; performing linguistic analysis of the text documents; matching the text documents with one or more rules to obtain one or more matches; and unifying or filtering the matches. | 08-16-2012 |
20130060769 | SYSTEM AND METHOD FOR IDENTIFYING SOCIAL MEDIA INTERACTIONS - A system and method for searching data, such as, text data, using a processing component. A query including one or more terms may be received. At least one term may be automatically added to the query to generate an expanded query set. Entries from one or more information sources, such as, Internet posts, may be retrieved. The retrieved entries may include terms that match terms in the expanded query set. The relevancy of each retrieved entry to the query may be automatically determined. A search result may be provided including a subset of the retrieved entries that are determined to have sufficient relevancy to the query. An output device may display the search result to a client or user. | 03-07-2013 |
20130246064 | SYSTEM AND METHOD FOR REAL-TIME SPEAKER SEGMENTATION OF AUDIO INTERACTIONS - A system and method for real-time processing of a signal of a voice interaction. In an embodiment, a digital representation of a portion of an interaction may be analyzed in real-time and a segment may be selected. The segment may be associated with a source based on a model of the source. The model may be updated based on the segment. The updated model is used to associate subsequent segments with the source. Other embodiments are described and claimed. | 09-19-2013 |
20130262106 | METHOD AND SYSTEM FOR AUTOMATIC DOMAIN ADAPTATION IN SPEECH RECOGNITION APPLICATIONS - A system and method for adapting a language model to a specific environment by receiving interactions captured in the specific environment, generating a collection of documents from documents retrieved from external resources, detecting in the collection of documents terms related to the environment that are not included in an initial language model, and adapting the initial language model to include the detected terms. | 10-03-2013 |
20140025376 | METHOD AND APPARATUS FOR REAL TIME SALES OPTIMIZATION BASED ON AUDIO INTERACTIONS ANALYSIS - The subject matter discloses a computerized method for sales optimization comprising: receiving at a computer server a digital representation of a portion of an interaction between a customer and an organization representative, the portion of an interaction comprises a speech signal of the customer and a speech signal of the organization representative; analyzing the speech signal of the organization representative; analyzing the speech signal of the customer; determining a distance vector between the speech signal of the organization representative and the speech signal of the customer; and predicting a sale success probability score for the captured speech signal portion. | 01-23-2014 |
20140067373 | METHOD AND APPARATUS FOR ENHANCED PHONETIC INDEXING AND SEARCH - The subject matter discloses a method for two-phase phonetic indexing and search comprising: receiving a digital representation of an audio signal; producing a phonetic index of the audio signal; producing a phonetic N-gram sequence from the phonetic index by segmenting the phonetic index into a plurality of phonetic N-grams; and producing an inverted index of the plurality of phonetic N-grams. | 03-06-2014 |
20140172859 | METHOD AND APPARATUS FOR TRADE INTERACTION CHAIN RECONSTRUCTION - The subject matter discloses a method for trade interaction chain reconstruction comprising: identifying a swap deal, the swap deal includes two or more of the received interactions and involves two or more participants; selecting a first interaction of the received interactions, said first interaction involves at least two participants of the two or more participants, said first interaction is stored on a computerized device; obtaining a first plurality of interactions of the received interactions that involve the at least two participants of the two or more participants; determining a first plurality of relevance scores between the first plurality of interactions and the first interaction; and associating interactions of the first plurality of interactions to be relevant to the swap deal according to the determined first plurality of relevance scores. | 06-19-2014 |
20140257820 | METHOD AND APPARATUS FOR REAL TIME EMOTION DETECTION IN AUDIO INTERACTIONS - The subject matter discloses a computerized method for real time emotion detection in audio interactions comprising: receiving at a computer server a portion of an audio interaction between a customer and an organization representative, the portion of the audio interaction comprises a speech signal; extracting feature vectors from the speech signal; obtaining a statistical model; producing adapted statistical data by adapting the statistical model according to the speech signal using the feature vectors extracted from the speech signal; obtaining an emotion classification model; and producing an emotion score based on the adapted statistical data and the emotion classification model, said emotion score represents the probability that the speaker that produced the speech signal is in an emotional state. | 09-11-2014 |
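The two-phase phonetic indexing scheme in application 20140067373 (segment a phonetic index into N-grams, then build an inverted index over them) can be sketched roughly as follows. This is a minimal illustration, not the claimed implementation; the function names, the N-gram size, and the toy phoneme sequences are all hypothetical.

```python
from collections import defaultdict

def phonetic_ngrams(phonemes, n=3):
    """Segment a phonetic index (a phoneme sequence) into overlapping N-grams."""
    return [tuple(phonemes[i:i + n]) for i in range(len(phonemes) - n + 1)]

def build_inverted_index(utterances, n=3):
    """Map each phonetic N-gram to the (utterance id, offset) pairs where it occurs."""
    index = defaultdict(list)
    for utt_id, phonemes in utterances.items():
        for offset, gram in enumerate(phonetic_ngrams(phonemes, n)):
            index[gram].append((utt_id, offset))
    return index

# Toy phonetic indexes, as a phoneme recognizer might produce (hypothetical data).
utterances = {
    "call-1": ["HH", "AH", "L", "OW", "W", "ER", "L", "D"],
    "call-2": ["W", "ER", "L", "D", "W", "AY", "D"],
}
index = build_inverted_index(utterances, n=3)
print(index[("W", "ER", "L")])  # → [('call-1', 4), ('call-2', 0)]
```

A search phase would then segment the query's phonetic transcription the same way and look up its N-grams in the inverted index, which avoids scanning every phonetic index at query time.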
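The search flow in application 20130060769 (expand a query with added terms, retrieve matching posts, then keep only entries with sufficient relevancy) can be sketched as below. The abstract does not specify the expansion source or the relevancy measure, so the synonym table and the overlap-based score here are stand-in assumptions.

```python
def expand_query(terms, synonyms):
    """Add related terms to the query to form an expanded query set."""
    expanded = set(terms)
    for t in terms:
        expanded.update(synonyms.get(t, []))
    return expanded

def search(entries, terms, synonyms, threshold=0.5):
    """Retrieve entries matching the expanded query, filtered by a crude
    relevancy proxy: the fraction of expanded-query terms an entry contains."""
    expanded = expand_query(terms, synonyms)
    results = []
    for entry in entries:
        words = set(entry.lower().split())
        score = len(words & expanded) / len(expanded)
        if score >= threshold:
            results.append(entry)
    return results

# Hypothetical Internet posts and a tiny synonym table.
posts = ["cheap phone plans", "my cell bill is high", "sunny weather today"]
print(search(posts, ["phone"], {"phone": ["cell"]}))
# → ['cheap phone plans', 'my cell bill is high']
```

Note that the expansion lets the second post match even though it never contains the literal query term, which is the point of generating the expanded query set before retrieval.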
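Application 20140025376 predicts a sale-success probability from a distance vector between the representative's and the customer's speech signals. One plausible reading, sketched below under stated assumptions: average frame-level acoustic features per speaker, take per-dimension distances, and score them with a logistic model. The feature choice, the distance (absolute difference), and the weights are all hypothetical, not taken from the patent.

```python
import math

def mean_features(frames):
    """Average frame-level feature vectors (e.g. pitch, energy) over a signal."""
    n = len(frames)
    return [sum(f[i] for f in frames) / n for i in range(len(frames[0]))]

def distance_vector(agent_frames, customer_frames):
    """Per-dimension distance between the two speakers' averaged features."""
    a, c = mean_features(agent_frames), mean_features(customer_frames)
    return [abs(x - y) for x, y in zip(a, c)]

def sale_success_probability(dist, weights, bias=0.0):
    """Logistic score over the distance vector (weights would come from training)."""
    z = bias + sum(w * d for w, d in zip(weights, dist))
    return 1.0 / (1.0 + math.exp(-z))

# Toy two-dimensional feature frames for each speaker (hypothetical data).
dist = distance_vector([[1.0, 2.0], [3.0, 4.0]], [[2.0, 2.0], [2.0, 2.0]])
print(dist, round(sale_success_probability(dist, [-1.0, -1.0], bias=1.0), 2))
# → [0.0, 1.0] 0.5
```

Negative weights encode the intuition that a large acoustic distance between the speakers (for example, mismatched pace or energy) lowers the predicted chance of a sale; the real predictor could of course be any trained classifier.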