Patent application number | Description | Published |
20090210229 | Processing Received Voice Messages - A voice message processing system shortens received voice messages to reduce the time a user must spend reviewing them. In some embodiments, a data file associated with a caller is created and updated with words and associated audio files that may be used to replace longer words or phrases in future voice messages from the caller. A user may manually configure preferences to shorten messages more aggressively in some embodiments. A speech synthesizer may be employed to replace text in messages when the stored audio files are insufficient to fully process a message. An audible indicator may be played with a revised message to allow a user to play back at least a portion of the original, received message without the substituted portions. Such systems give a user the opportunity to review messages in less time. | 08-20-2009 |
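The per-caller substitution idea in the entry above can be sketched as a lookup table mapping long phrases to shorter stored equivalents. The phrase table and function names below are illustrative assumptions; the filing operates on audio segments, which this text-only sketch stands in for.

```python
# Hypothetical per-caller substitution table (long phrase -> short form),
# standing in for the data file of words and associated audio files.
caller_substitutions = {
    "at your earliest convenience": "soon",
    "please give me a call back": "call me",
}


def shorten(transcript: str, substitutions: dict[str, str]) -> str:
    """Replace known long phrases with their shorter stored equivalents."""
    for long_form, short_form in substitutions.items():
        transcript = transcript.replace(long_form, short_form)
    return transcript


shortened = shorten(
    "please give me a call back at your earliest convenience", caller_substitutions
)
# shortened == "call me soon"
```

A real system would also track which phrases were substituted so the audible indicator described above can offer playback of the original segments.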
20090276802 | AVATARS IN SOCIAL INTERACTIVE TELEVISION - Virtual environments are presented on displays along with multimedia programs to permit viewers to participate in a social interactive television environment. The virtual environments include avatars that are created and maintained in part using continually updated animation data that may be captured from cameras that monitor viewing areas in a plurality of sites. User input from the viewers may be processed in determining which viewers are presented in instances of the virtual environment. Continually updating the animation data results in avatars accurately depicting a viewer's facial expressions and other characteristics. Presence data may be collected and used to determine when to capture background images from a viewing area that may later be subtracted during the capture of animation data. Speech recognition technology may be employed to provide callouts within a virtual environment. | 11-05-2009 |
20090276820 | DYNAMIC SYNCHRONIZATION OF MULTIPLE MEDIA STREAMS - A disclosed method for synchronizing different streams of a multimedia content program includes providing the multimedia content program to a first viewer via a first multimedia stream in response to receiving a first request to view the multimedia content program from the first viewer and providing the multimedia content program to a second viewer via a second multimedia stream in response to a second request from the second viewer. The method includes determining a synchronization difference that indicates the temporal relationship between the first and second streams. A timing of at least one of the streams is altered to reduce the synchronization difference. When the synchronization difference drops below a specified threshold, the multimedia content program may be provided to the first and second viewers via a multimedia stream that is common to the first and second viewers. | 11-05-2009 |
20090276821 | DYNAMIC SYNCHRONIZATION OF MEDIA STREAMS WITHIN A SOCIAL NETWORK - A method of synchronizing first and second streams of a multimedia content program is operable for determining a temporal difference indicative of a relative timing between first and second streams of the program, the first stream being provided to a first multimedia processing resource (MPR) and the second stream being provided to a second MPR. The method includes manipulating at least one of the streams to reduce the temporal difference until the temporal difference is less than a predetermined threshold and enabling a viewer of the first stream to interact with a viewer of the second stream regarding the program. Interactions are visually detectable on a first display screen corresponding to the first MPR. | 11-05-2009 |
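The synchronization scheme described in the two entries above follows a simple loop: measure the temporal difference between two playback positions, nudge the lagging stream, and merge both viewers onto a common stream once the difference falls below a threshold. The sketch below is a minimal illustration of that loop; the threshold value and all names are assumptions, not taken from the filings.

```python
MERGE_THRESHOLD_S = 0.5  # seconds; assumed value for illustration


def synchronize(pos_a: float, pos_b: float, rate_step: float = 0.25):
    """Yield adjusted playback positions until the temporal difference is small.

    pos_a, pos_b: current playback positions (seconds) of the two streams.
    rate_step: how many seconds of drift are corrected per iteration.
    """
    while abs(pos_a - pos_b) >= MERGE_THRESHOLD_S:
        # Advance the lagging stream toward the leading one.
        if pos_a < pos_b:
            pos_a += min(rate_step, pos_b - pos_a)
        else:
            pos_b += min(rate_step, pos_a - pos_b)
        yield pos_a, pos_b
    # Below the threshold: both viewers can share one common stream.
    yield pos_a, pos_b
```

In practice the adjustment would be a playback-rate change or frame skip rather than a direct position jump, but the convergence logic is the same.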
20090319884 | ANNOTATION BASED NAVIGATION OF MULTIMEDIA CONTENT - A disclosed service for enabling enhanced navigation of a program of multimedia content includes enabling a user to access annotation data associated with the program. The annotation data is indicative of a plurality of chronologically ordered annotations generated by one or more viewers of the program. The chronological positioning of an annotation within the program is indicative of the portion of the program being watched when the annotation was created. In other words, an annotation appears at the location in the program at which it was created. If a first user creates an annotation at the seven-minute mark of a program, a second user who watches the program while accessing the stored annotation data will see that annotation at the seven-minute mark. | 12-24-2009 |
20090319885 | COLLABORATIVE ANNOTATION OF MULTIMEDIA CONTENT - A method for collaborative annotating of a program of multimedia content includes enabling a first user to create a program annotation, enabling the first user to store annotation data, and enabling a second user to access the annotation data. The second user may navigate the program using the annotation and/or view the annotation while viewing the program. The first user may create the annotation while viewing the program, for example, by asserting an annotation button on a remote control device. The annotation may include the frame that was displayed when the user created the annotation, text, audio, an image, or video selected by the viewer. The annotations include chronological information indicative of a chronological location of the annotation within the program. The annotations may include “rating annotations” indicating the author's subjective rating of a portion of the program that is in chronological proximity to the annotation's chronological location. | 12-24-2009 |
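The chronological annotation data described in the two entries above can be modeled as records keyed by program offset, kept sorted so a player can fetch the annotations for the segment currently on screen. The class and field names below are hypothetical, not from the filings.

```python
import bisect
from dataclasses import dataclass, field


@dataclass(order=True)
class Annotation:
    offset_s: float                    # chronological location within the program
    author: str = field(compare=False)
    body: str = field(compare=False)   # text here; could reference audio/image/video


class AnnotationTrack:
    """Stores annotations sorted by program offset for navigation and playback."""

    def __init__(self):
        self._items: list[Annotation] = []

    def add(self, ann: Annotation) -> None:
        bisect.insort(self._items, ann)  # keep chronological order

    def at(self, start_s: float, end_s: float) -> list[Annotation]:
        """Annotations whose offsets fall in [start_s, end_s), i.e. those to
        display while the corresponding segment is playing."""
        lo = bisect.bisect_left(self._items, Annotation(start_s, "", ""))
        hi = bisect.bisect_left(self._items, Annotation(end_s, "", ""))
        return self._items[lo:hi]
```

For example, an annotation added at the seven-minute mark (offset 420 s) would be returned by `at(420.0, 430.0)` for any later viewer of the same program.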
20100070878 | PROVIDING SKETCH ANNOTATIONS WITH MULTIMEDIA PROGRAMS - A method for collaborative sketch annotating of a program of multimedia content includes enabling a first user to create a sketch annotation, enabling the first user to store sketch annotation data related to the sketch annotation, and enabling a second user to access the sketch annotation. The second user may navigate the program using the sketch annotation and/or an indication of the sketch annotation. The first user may create the sketch annotation while viewing the program, for example, and the program may be paused for adding the sketch annotation to one or more paused frames. The sketch annotations may include chronological information indicative of a chronological location of the sketch annotation within the program. | 03-18-2010 |
20100070987 | MINING VIEWER RESPONSES TO MULTIMEDIA CONTENT - Viewers of a multimedia program are monitored to detect responses. Time data is stored with the responses and compared to responses from other viewers at the same time in the multimedia program. A viewer type is determined based on the responses. Further multimedia programs may be offered to the viewer based on the viewer type. Transducers and sensors placed within a viewing area may include, without limitation, audio sensors, video sensors, motion sensors, subdermal sensors, and biometric sensors. | 03-18-2010 |
20100071000 | GRAPHICAL ELECTRONIC PROGRAMMING GUIDE - Disclosed systems and methods present a graphics based electronic programming guide (EPG) that organizes available content in radial fashion on a display. Which content appears on a screen shot of the EPG may be determined using rating data, user preferences or collaborative filtering. Through collaborative filtering, disclosed embodiments may predict which programs a user may like according to group member ratings. Some disclosed EPGs include a mosaic with graphical indications of an overall rating and graphical indications of which of a plurality of characteristics (e.g., genres) apply to multimedia programs. | 03-18-2010 |
20100094628 | System and Method for Latency Reduction for Automatic Speech Recognition Using Partial Multi-Pass Results - A system and method are provided for reducing latency in automatic speech recognition. In one embodiment, intermediate results produced by multiple search passes are used to update a display of transcribed text. | 04-15-2010 |
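The multi-pass idea in the entry above reduces perceived latency by showing a fast first-pass hypothesis immediately and letting slower, more accurate passes overwrite the same span of the transcript later. The sketch below simulates that display update; the pass outputs are invented examples, and no real ASR engine is involved.

```python
def update_display(display: list[str], start: int, hypothesis: list[str]) -> None:
    """Overwrite the transcript from word index `start` with a newer hypothesis."""
    del display[start:]
    display.extend(hypothesis)


display: list[str] = []
# Fast first pass produces an approximate transcript as audio arrives.
update_display(display, 0, ["how", "to", "wreck", "a", "nice", "beach"])
# A slower, more accurate pass later revises the transcript from word 2 on.
update_display(display, 2, ["recognize", "speech"])
# display is now ["how", "to", "recognize", "speech"]
```

The viewer sees text almost immediately, and only the corrected span changes when a later pass finishes.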
20110225603 | Avatars in Social Interactive Television - Virtual environments are presented on displays along with multimedia programs to permit viewers to participate in a social interactive television environment. The virtual environments include avatars that are created and maintained in part using continually updated animation data that may be captured from cameras that monitor viewing areas in a plurality of sites. User input from the viewers may be processed in determining which viewers are presented in instances of the virtual environment. Continually updating the animation data results in avatars accurately depicting a viewer's facial expressions and other characteristics. Presence data may be collected and used to determine when to capture background images from a viewing area that may later be subtracted during the capture of animation data. Speech recognition technology may be employed to provide callouts within a virtual environment. | 09-15-2011 |
20110313764 | System and Method for Latency Reduction for Automatic Speech Recognition Using Partial Multi-Pass Results - A system and method are provided for reducing latency in automatic speech recognition. In one embodiment, intermediate results produced by multiple search passes are used to update a display of transcribed text. | 12-22-2011 |
20120075312 | Avatars in Social Interactive Television - Virtual environments are presented on displays along with multimedia programs to permit viewers to participate in a social interactive television environment. The virtual environments include avatars that are created and maintained in part using continually updated animation data that may be captured from cameras that monitor viewing areas in a plurality of sites. User input from the viewers may be processed in determining which viewers are presented in instances of the virtual environment. Continually updating the animation data results in avatars accurately depicting a viewer's facial expressions and other characteristics. Presence data may be collected and used to determine when to capture background images from a viewing area that may later be subtracted during the capture of animation data. Speech recognition technology may be employed to provide callouts within a virtual environment. | 03-29-2012 |
20130057556 | Avatars in Social Interactive Television - Virtual environments are presented on displays along with multimedia programs to permit viewers to participate in a social interactive television environment. The virtual environments include avatars that are created and maintained in part using continually updated animation data that may be captured from cameras that monitor viewing areas in a plurality of sites. User input from the viewers may be processed in determining which viewers are presented in instances of the virtual environment. Continually updating the animation data results in avatars accurately depicting a viewer's facial expressions and other characteristics. Presence data may be collected and used to determine when to capture background images from a viewing area that may later be subtracted during the capture of animation data. Speech recognition technology may be employed to provide callouts within a virtual environment. | 03-07-2013 |
20140033254 | DYNAMIC SYNCHRONIZATION OF MEDIA STREAMS WITHIN A SOCIAL NETWORK - A method of synchronizing first and second streams of a multimedia content program is operable for determining a temporal difference indicative of a relative timing between first and second streams of the program, the first stream being provided to a first multimedia processing resource (MPR) and the second stream being provided to a second MPR. The method includes manipulating at least one of the streams to reduce the temporal difference until the temporal difference is less than a predetermined threshold and enabling a viewer of the first stream to interact with a viewer of the second stream regarding the program. Interactions are visually detectable on a first display screen corresponding to the first MPR. | 01-30-2014 |
20150033278 | DYNAMIC SYNCHRONIZATION OF MEDIA STREAMS WITHIN A SOCIAL NETWORK - A method of synchronizing first and second streams of a multimedia content program is operable for determining a temporal difference indicative of a relative timing between first and second streams of the program, the first stream being provided to a first multimedia processing resource (MPR) and the second stream being provided to a second MPR. The method includes manipulating at least one of the streams to reduce the temporal difference until the temporal difference is less than a predetermined threshold and enabling a viewer of the first stream to interact with a viewer of the second stream regarding the program. Interactions are visually detectable on a first display screen corresponding to the first MPR. | 01-29-2015 |