Patent application number | Description | Published |
20130212501 | PERCEPTUAL COMPUTING WITH CONVERSATIONAL AGENT - Perceptual computing with a conversational agent is described. In one example, a method includes receiving a statement from a user, observing user behavior, determining a user context based on the behavior, processing the user statement and user context to generate a reply to the user, and presenting the reply to the user on a user interface. | 08-15-2013 |
20130243270 | SYSTEM AND METHOD FOR DYNAMIC ADAPTION OF MEDIA BASED ON IMPLICIT USER INPUT AND BEHAVIOR - A system and method for dynamically adapting media having multiple scenarios presented on a media device to a user based on characteristics of the user captured from at least one sensor. During presentation of the media, the at least one sensor captures user characteristics, including, but not limited to, physical characteristics indicative of user interest and/or attentiveness to subject matter of the media being presented. The system determines the interest level of the user based on the captured user characteristics and manages presentation of the media to the user based on determined user interest levels, selecting scenarios to present to the user based on user interest levels. | 09-19-2013 |
20140003722 | ANALYZING STRUCTURED LIGHT PATTERNS | 01-02-2014 |
20140086476 | SYSTEMS, METHODS, AND COMPUTER PROGRAM PRODUCTS FOR HIGH DEPTH OF FIELD IMAGING - Methods, systems, and computer program products allow for the capturing of a high depth of field (DOF) image. A comprehensive depth map of the scene may be automatically determined. The scene may then be segmented, where each segment of the scene corresponds to a respective depth of the depth map. A sequence of images may then be recorded, where each image in the sequence is focused at a respective depth of the depth map. The images of this sequence may then be interleaved to create a single composite image that includes the respective in-focus segments from these images. | 03-27-2014 |
20140096152 | TIMING ADVERTISEMENT BREAKS BASED ON VIEWER ATTENTION LEVEL - A device and method for timing advertisement breaks in video-on-demand applications based on viewer attention level include a video device configured to display video content and receive biometric data indicative of the attention level of a viewer. The video device may notify a video-on-demand server that the attention level of the viewer has exceeded a threshold. In response to the notification, the video-on-demand server may determine a time to display advertisement content on the video device. The advertisement break time may be determined in relation to the video content. The advertisement content may be selected based on the video content. The video device may determine the viewer attention level during playback of the advertisement content and pause playback if the viewer attention level falls below the threshold. | 04-03-2014 |
20140195328 | ADAPTIVE EMBEDDED ADVERTISEMENT VIA CONTEXTUAL ANALYSIS AND PERCEPTUAL COMPUTING - Technologies for adaptively embedding an advertisement into media content via contextual analysis and perceptual computing include a computing device for detecting a location to embed advertising content within media content and retrieving user profile data corresponding to a user of a computing device. Such technologies may also include determining advertising content personalized for the user based on the retrieved user profile and embedding the advertising content personalized for the user into the media content at the detected location within the media content to generate augmented media content for subsequent display to the user. | 07-10-2014 |
20140292639 | MULTI-DISTANCE, MULTI-MODAL NATURAL USER INTERACTION WITH COMPUTING DEVICES - Systems and methods may provide for receiving a short range signal from a sensor that is collocated with a short range display and using the short range signal to detect a user interaction. Additionally, a display response may be controlled with respect to a long range display based on the user interaction. In one example, the user interaction includes one or more of an eye gaze, a hand gesture, a face gesture, a head position or a voice command that indicates one or more of a switch between the short range display and the long range display, a drag and drop operation, a highlight operation, a click operation or a typing operation. | 10-02-2014 |
20150058764 | MEDIA CONTENT INCLUDING A PERCEPTUAL PROPERTY AND/OR A CONTEXTUAL PROPERTY - Apparatuses, systems, media and/or methods may involve creating content. A property component may be added to a media object to impart one or more of a perceptual property or a contextual property to the media object. The property component may be added responsive to an operation by a user that is independent of a direct access by the user to computer source code. An event corresponding to the property component may be mapped with an action for the media object. The event may be mapped with the action responsive to an operation by a user that is independent of a direct access by the user to computer source code. A graphical user interface may be rendered to create the content. In addition, the media object may be modified based on the action in response to the event when content created including the media object is utilized. | 02-26-2015 |
20150070386 | TECHNIQUES FOR PROVIDING AN AUGMENTED REALITY VIEW - Various embodiments are generally directed to techniques for providing an augmented reality view in which eye movements are employed to identify items of possible interest for which indicators are visually presented in the augmented reality view. An apparatus to present an augmented reality view includes a processor component; a presentation component for execution by the processor component to visually present images captured by a camera on a display, and to visually present an indicator identifying an item of possible interest in the captured images on the display overlying the visual presentation of the captured images; and a correlation component for execution by the processor component to track eye movement to determine a portion of the display gazed at by an eye, and to correlate the portion of the display to the item of possible interest. Other embodiments are described and claimed. | 03-12-2015 |
20150077325 | MOTION DATA BASED FOCUS STRENGTH METRIC TO FACILITATE IMAGE PROCESSING - Apparatuses, systems, media and/or methods may involve facilitating an image processing operation. User motion data may be identified when a user observes an image. A focus strength metric may be determined based on the user motion data. The focus strength metric may correspond to a focus area in the image. Also, a property of the focus strength metric may be adjusted. A peripheral area may be accounted for to determine the focus strength metric. A variation in a scan pattern may be accounted for to determine the focus strength metric. Moreover, a color may be imparted to the focus area and/or the peripheral area. In addition, a map may be formed based on the focus strength metric. The map may include a scan pattern map and a heat map. The focus strength metric may be utilized to prioritize the focus area and/or the peripheral area in an image processing operation. | 03-19-2015 |
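The focal-stack compositing described in application 20140086476 (segment the scene by depth, capture one image focused at each depth, then interleave the in-focus segments into a single composite) can be sketched roughly as follows. This is an illustrative sketch only, not the patented implementation; the function name, array shapes, and nearest-depth selection rule are assumptions made for the example:

```python
import numpy as np

def composite_focal_stack(images, depth_map, depths):
    """Interleave in-focus segments from a focal stack.

    images:    list of HxWxC arrays; images[i] is focused at depths[i]
    depth_map: HxW array of per-pixel scene depth
    depths:    sequence of the focus depths used for the stack
    """
    depths = np.asarray(depths, dtype=float)
    # For each pixel, pick the stack image whose focus depth is nearest
    # to that pixel's scene depth, i.e. the image where it is sharpest.
    nearest = np.abs(depth_map[..., None] - depths).argmin(axis=-1)  # HxW
    stack = np.stack(images, axis=0)                                 # NxHxWxC
    h, w = depth_map.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return stack[nearest, rows, cols]                                # HxWxC

# Tiny synthetic example: two 2x2 single-channel "images" and a depth map
# whose top row is near (depth 1.0) and bottom row is far (depth 5.0).
near = np.full((2, 2, 1), 10.0)   # stack image focused at depth 1.0
far = np.full((2, 2, 1), 99.0)    # stack image focused at depth 5.0
depth_map = np.array([[1.0, 1.0],
                      [5.0, 5.0]])
out = composite_focal_stack([near, far], depth_map, [1.0, 5.0])
# Top-row pixels come from the near-focused image, bottom-row from the far one.
```

The per-pixel nearest-depth lookup plays the role of the abstract's scene segmentation: every pixel is assigned to the depth layer, and hence to the stack image, in which it is presumed in focus.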