Patent application number | Description | Published |
--- | --- | --- |
20150099987 | HEART RATE VARIABILITY EVALUATION FOR MENTAL STATE ANALYSIS - A system and method for evaluating heart rate variability for mental state analysis is disclosed. Video of an individual is captured while the individual consumes and interacts with media. The video is analyzed to determine heart rate information, from which heart rate variability (HRV) is calculated and understood to be a response to stimuli from the media. The analysis of HRV is based on sympathovagal balance, derived from the ratio of low-frequency to high-frequency heart rate values. HRV is analyzed to determine changes in the individual's mental state related to the stimuli, and the resulting mental state analysis is used to evaluate the media. | 04-09-2015 |
20150142553 | MENTAL STATE ANALYSIS FOR NORM GENERATION - Mental state data is gathered from a plurality of people and analyzed to determine mental state information. Metrics are generated from the mental state information gathered as the people view media presentations. Norms, defined as quantitative measures of the mental states of a plurality of people as they view a media presentation, are determined from these metrics. The norms can be determined based on various viewer criteria, including country of residence, demographic group, or the type of device on which the media presentation is viewed. Responses to new media presentations are then compared against the norms to determine their effectiveness. | 05-21-2015 |
20150186912 | ANALYSIS IN RESPONSE TO MENTAL STATE EXPRESSION REQUESTS - Expression analysis is performed in response to a request for an expression. The expression is related to one or more mental states, including happiness, joy, satisfaction, and pleasure, among others. Images from one or more cameras capturing the user's attempt to provide the requested expression are received and analyzed. The analyzed images gauge the person's response to the request. Based on that response, the person can be rewarded for the effectiveness of his or her mental state expression. The intensity of the expression can also be a factor in determining the reward. The reward can include, but is not limited to, a coupon, digital coupon, currency, or virtual currency. | 07-02-2015 |
20150206000 | BACKGROUND ANALYSIS OF MENTAL STATE EXPRESSIONS - Expression analysis is performed by a background process and provided to foreground applications that register for emotion services. The foreground applications are notified when a particular, previously determined emotional state is detected. The emotional state can be identified using facial feature analysis and/or gesture analysis. Upon receiving notification of the state from the background process, the foreground application performs an emotion response action. The emotion response action can include sending a reply message indicating that a desired emotional response has been detected, providing a reward, and/or generating an automatic like on a social media system. | 07-23-2015 |
20150313530 | MENTAL STATE EVENT DEFINITION GENERATION - Analysis of mental states is provided based on videos of a plurality of people experiencing various situations, such as media presentations. Videos of the plurality of people are captured and analyzed using classifiers. Facial expressions of the people in the captured video are clustered based on specified criteria. A unique signature for the situation to which the people are being exposed is then determined based on the expression clustering. In certain scenarios, the clustering is augmented by self-report data from the people. In embodiments, the expression clustering is based on a combination of multiple facial expressions. | 11-05-2015 |
20150350730 | VIDEO RECOMMENDATION USING AFFECT - Analysis of mental states is provided to enable data analysis pertaining to video recommendation based on affect. Analysis and recommendation can be for socially shared livestream video. Video response may be evaluated based on viewing and sampling various videos. Data is captured for viewers of a video where the data includes facial information and/or physiological data. Facial and physiological information may be gathered for a group of viewers. In some embodiments, demographics information is collected and used as a criterion for visualization of affect responses to videos. In some embodiments, data captured from an individual viewer or group of viewers is used to rank videos. | 12-03-2015 |
20160004904 | FACIAL TRACKING WITH CLASSIFIERS - Concepts for facial tracking with classifiers are disclosed. One or more faces are detected and tracked in a series of video frames that include at least one face. Video is captured and partitioned into the series of frames. A first video frame is analyzed using classifiers trained to detect the presence of at least one face in the frame. The classifiers are used to initialize locations for a first set of facial landmarks for the first face. The locations of the facial landmarks are refined using localized information around the landmarks, and a rough bounding box that contains the facial landmarks is estimated. The future locations for the facial landmarks detected in the first video frame are estimated for a future video frame. The detection of the facial landmarks and the estimation of their future locations are insensitive to rotation, orientation, scaling, or mirroring of the face. | 01-07-2016 |
20160078279 | IMAGE ANALYSIS USING A SEMICONDUCTOR PROCESSOR FOR FACIAL EVALUATION - Image analysis for facial evaluation is performed using logic encoded in a semiconductor processor. The semiconductor chip analyzes video images that are captured using one or more cameras and evaluates the videos to identify one or more persons in the videos. When a person is identified, the semiconductor chip locates the face of the evaluated person in the video. Facial regions of interest are extracted, and differences in the regions of interest in the face are identified. The semiconductor chip uses classifiers to map facial regions for emotional response content and evaluate that content to produce an emotion score. The classifiers provide gender, age, or ethnicity with an associated probability. Localization logic within the chip is used to localize a second face when one appears in the video. The one or more faces are tracked, and identifiers for the faces are provided. | 03-17-2016 |
20160081607 | SPORADIC COLLECTION WITH MOBILE AFFECT DATA - An individual can exhibit one or more mental states when reacting to a stimulus. A camera or other monitoring device can be used to collect, on an intermittent basis, mental state data including facial data. The mental state data can be interpolated between the intermittent collecting. The facial data can be obtained from a series of images of the individual where the images are captured intermittently. A second face can be identified, and the first face and the second face can be tracked. | 03-24-2016 |
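The sympathovagal balance described in application 20150099987 is a standard HRV quantity: the ratio of spectral power in the low-frequency band to power in the high-frequency band of the heart rate signal. The sketch below is illustrative only and not code from the filing; the function name, the 4 Hz resampling rate, and the use of a plain FFT periodogram are assumptions, while the 0.04-0.15 Hz (LF) and 0.15-0.40 Hz (HF) band edges are the conventional HRV definitions.

```python
import numpy as np

def sympathovagal_balance(rr_intervals_s, fs=4.0):
    """Estimate the LF/HF ratio (sympathovagal balance) from a series of
    RR intervals in seconds, using conventional band edges:
    LF = 0.04-0.15 Hz, HF = 0.15-0.40 Hz."""
    rr = np.asarray(rr_intervals_s, dtype=float)

    # Beat times are irregular (cumulative RR intervals), so resample the
    # RR series onto a uniform time grid before taking a spectrum.
    t = np.cumsum(rr)
    t_uniform = np.arange(t[0], t[-1], 1.0 / fs)
    rr_uniform = np.interp(t_uniform, t, rr)
    rr_uniform -= rr_uniform.mean()  # remove the DC component

    # Simple FFT periodogram of the resampled signal.
    spectrum = np.abs(np.fft.rfft(rr_uniform)) ** 2
    freqs = np.fft.rfftfreq(len(rr_uniform), d=1.0 / fs)

    lf = spectrum[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = spectrum[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf
```

An RR series modulated at ~0.1 Hz (inside the LF band) yields a ratio above 1, while modulation at ~0.25 Hz (inside the HF band) yields a ratio below 1, matching the interpretation of LF dominance as sympathetic and HF dominance as parasympathetic activity.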
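Application 20160004904 describes estimating future facial landmark locations from a detected frame and fitting a rough bounding box around the landmarks. A minimal sketch of both steps is given below, under the assumption of a constant-velocity motion model (extrapolating each landmark along its frame-to-frame displacement); the function names, the motion model, and the 10% box padding are hypothetical, not taken from the patent.

```python
import numpy as np

def predict_landmarks(prev, curr):
    """Constant-velocity prediction of landmark positions for the next
    frame: extrapolate each (x, y) landmark along its frame-to-frame
    displacement. `prev` and `curr` are (N, 2) arrays of landmarks."""
    prev = np.asarray(prev, dtype=float)
    curr = np.asarray(curr, dtype=float)
    return curr + (curr - prev)

def bounding_box(landmarks, margin=0.1):
    """Rough axis-aligned box containing the landmarks, padded by
    `margin` (a fraction of the box's width/height). Returns the
    (x, y) corners (lower-left, upper-right)."""
    pts = np.asarray(landmarks, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    pad = (hi - lo) * margin
    return lo - pad, hi + pad
```

A real tracker would refine the predicted positions against image evidence around each landmark, as the abstract describes; the prediction merely provides the search's starting point in the future frame.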
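Application 20160081607 describes interpolating mental state data between intermittent collection points. The simplest version of that idea, sketched below, is linear interpolation of sparse timestamped scores onto a uniform timeline; the function name and the choice of linear interpolation are assumptions for illustration, not details from the filing.

```python
import numpy as np

def interpolate_mental_state(sample_times_s, scores, grid_step_s=1.0):
    """Fill in mental state scores (e.g. a smile intensity in [0, 1])
    between intermittent collection times by linear interpolation,
    returning a uniformly sampled timeline and the interpolated values."""
    times = np.asarray(sample_times_s, dtype=float)
    grid = np.arange(times[0], times[-1] + grid_step_s, grid_step_s)
    return grid, np.interp(grid, times, scores)
```

For example, scores of 0.0, 1.0, and 0.0 collected at 0 s, 10 s, and 20 s interpolate to 0.5 at 5 s and 15 s, giving a continuous estimate between the sparse observations.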