Patent application number | Description | Published |
20120114172 | TECHNIQUES FOR FACE DETECTION AND TRACKING - Techniques are disclosed that involve face detection. For instance, face detection tasks may be decomposed into sets of one or more sub-tasks. In turn, the sub-tasks of the sets may be allocated across multiple image frames. This allocation may be based on a resource budget. In addition, face tracking tasks may be performed. | 05-10-2012 |
20130009943 | MULTI-CORE PROCESSOR SUPPORTING REAL-TIME 3D IMAGE RENDERING ON AN AUTOSTEREOSCOPIC DISPLAY - A multi-core processor system may support 3D image rendering on an autostereoscopic display. The 3D image rendering includes pre-processing of a depth map and 3D image warping tasks. The pre-processing of the depth map may include a foreground-prior depth image smoothing technique, which may perform a depth gradient detection task and a smoothing task. The depth gradient detection task may detect areas with large depth gradients, and the smoothing task may transform the large depth gradient into a linearly changing shape using low-strength, low-pass filtering techniques. The 3D image warping may include vectorizing the code for 3D image warping of row pixels using an efficient single instruction multiple data (SIMD) technique. After vectorizing, an API such as OpenMP may be used to parallelize the 3D image warping procedure. The 3D image warping using OpenMP may be performed on rows of the 3D image and on images of the multiple view images. | 01-10-2013 |
20130201187 | IMAGE-BASED MULTI-VIEW 3D FACE GENERATION - Systems, devices and methods are described including recovering camera parameters and sparse key points for multiple 2D facial images and applying a multi-view stereo process to generate a dense avatar mesh using the camera parameters and sparse key points. The dense avatar mesh may then be used to generate a 3D face model and multi-view texture synthesis may be applied to generate a texture image for the 3D face model. | 08-08-2013 |
20130271451 | PARAMETERIZED 3D FACE GENERATION - Systems, devices and methods are described including receiving a semantic description and associated measurement criteria for a facial control parameter, obtaining principal component analysis (PCA) coefficients, generating 3D faces in response to the PCA coefficients, determining a measurement value for each of the 3D faces based on the measurement criteria, and determining regression parameters for the facial control parameter based on the measurement values. | 10-17-2013 |
20130276007 | Facilitating Television Based Interaction with Social Networking Tools - Video analysis may be used to determine who is watching television and their level of interest in the current programming. Lists of favorite programs may be derived for each of a plurality of viewers of programming on the same television receiver. | 10-17-2013 |
20130276029 | Using Gestures to Capture Multimedia Clips - In response to a gestural command, a video currently being watched can be identified by extracting at least one decoded frame from a television transmission. The frame can be transmitted to a separate mobile device for requesting an image search and for receiving the search results. The search results can be used to obtain more information. The user's social networking friends can also be contacted to obtain more information about the clip. | 10-17-2013 |
20130293547 | GRAPHICS RENDERING TECHNIQUE FOR AUTOSTEREOSCOPIC THREE DIMENSIONAL DISPLAY - Various embodiments are presented herein that may render an image frame on an autostereoscopic 3D display. A computer platform including a processor circuit executing a rendering application may determine a current orientation of a virtual camera array within a three-dimensional (3D) scene and at least one additional 3D imaging parameter for the 3D scene. The rendering application, with the aid of a ray tracing engine, may also determine a depth range for the 3D scene. The ray tracing engine may then facilitate rendering of the image frame representative of the 3D scene using a ray tracing process. | 11-07-2013 |
20130332834 | ANNOTATION AND/OR RECOMMENDATION OF VIDEO CONTENT METHOD AND APPARATUS - Methods, apparatuses and storage medium associated with cooperative annotation and/or recommendation by shared and personal devices are disclosed herein. In various embodiments, at least one non-transitory computer-readable storage medium may include a number of instructions configured to enable a personal device (PD) of a user, in response to execution of the instructions by the personal device, to receive a user input selecting performance of a user function in association with a video stream being rendered on a shared video device (SVD) configured for use by multiple users, render an image frame of the video stream rendered on the shared video device at a time proximate to a time of the user input, and facilitate performance of the user function, which may include annotation of video objects. Other embodiments, including recommendation of video content, may be disclosed or claimed. | 12-12-2013 |
20130340018 | PERSONALIZED VIDEO CONTENT CONSUMPTION USING SHARED VIDEO DEVICE AND PERSONAL DEVICE - Methods, apparatuses and storage medium associated with personal video content consumption using shared video device and personal device are disclosed herein. In various embodiments, a personal device (PD) method may include registering, by a personal device of a user, with a shared video device (SVD) configured for use by multiple users, or associating the SVD, by the PD, with the PD. The PD method may further include, after the registration or association, cooperating with the SVD, by the PD, to facilitate personalized video consumption by the user. In various embodiments, a SVD method may include similar registering or associating, and cooperating operations, performed by the SVD. In various embodiments, registration, association or cooperation may include facial and/or gesture recognition. Other embodiments may be disclosed or claimed. | 12-19-2013 |
20130342640 | OBJECT OF INTEREST BASED IMAGE PROCESSING - An apparatus, a method and a system are provided, wherein the system includes an encoding engine to encode and/or compress one or more objects of interest within individual image frames with higher bit densities than the bit density employed to encode and/or compress their background. The system may further include a context engine to identify a region of interest including at least a part of the one or more objects of interest, and scale the region of interest within individual image frames to emphasize the objects of interest. | 12-26-2013 |
20140003663 | METHOD OF DETECTING FACIAL ATTRIBUTES | 01-02-2014 |
20140026157 | FACE RECOGNITION CONTROL AND SOCIAL NETWORKING - Methods, apparatuses, and articles associated with face recognition login, social network and video chat are disclosed herein. In various embodiments, an apparatus may include a networking interface, and a face recognition based controller configured to determine whether a user is watching a television, based on image frames of a video signal generated by a camera. The controller may be further configured to transmit a login request, via the networking interface, to a server associated with a social network, on determination that the user is watching the television, to log the user into the social network and to enable video chat. Other embodiments may be disclosed and/or claimed. | 01-23-2014 |
20140033237 | TECHNIQUES FOR MEDIA QUALITY CONTROL - Techniques for media quality control may include receiving media information and determining the quality of the media information. The media information may be presented when the quality of the media information meets a quality control threshold. A warning may be generated when the quality of the media information does not meet the quality control threshold. Other embodiments are described and claimed. | 01-30-2014 |
20140033239 | NEXT GENERATION TELEVISION WITH CONTENT SHIFTING AND INTERACTIVE SELECTABILITY - Systems and methods for providing next generation television with content shifting and interactive selectability are described. In some examples, image content may be transferred from a television to a smaller mobile computing device, and an example-based visual search may be conducted on a selected portion of the content. Search results may then be provided to the mobile computing device. In addition, avatar simulation may be undertaken. | 01-30-2014 |
20140035934 | Avatar Facial Expression Techniques - A method and apparatus for capturing and representing 3D wire-frame, color and shading of facial expressions are provided, wherein the method includes the following steps: storing a plurality of feature data sequences, each of the feature data sequences corresponding to one of a plurality of facial expressions; retrieving one of the feature data sequences based on user facial feature data; and mapping the retrieved feature data sequence to an avatar face. The method may advantageously provide improvements in execution speed and communications bandwidth. | 02-06-2014 |
20140050358 | METHOD OF FACIAL LANDMARK DETECTION - Detecting facial landmarks in a face detected in an image may be performed by first cropping a face rectangle region of the detected face in the image and generating an integral image based at least in part on the face rectangle region. Next, a cascade classifier may be executed for each facial landmark of the face rectangle region to produce one response image for each facial landmark based at least in part on the integral image. A plurality of Active Shape Model (ASM) initializations may be set up. ASM searching may be performed for each of the ASM initializations based at least in part on the response images, each ASM search resulting in a search result having a cost. Finally, a search result of the ASM searches having a lowest cost function may be selected, the selected search result indicating locations of the facial landmarks in the image. | 02-20-2014 |
20140055554 | SYSTEM AND METHOD FOR COMMUNICATION USING INTERACTIVE AVATAR - A video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, determining facial characteristics from the face, including eye movement and eyelid movement of a user indicative of direction of user gaze and blinking, respectively, converting the facial characteristics to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters. | 02-27-2014 |
20140072172 | TECHNIQUES FOR FACE DETECTION AND TRACKING - Techniques are disclosed that involve face detection. For instance, face detection tasks may be decomposed into sets of one or more sub-tasks. In turn, the sub-tasks of the sets may be allocated across multiple image frames. This allocation may be based on a multiple layer, quad-tree approach. In addition, face tracking tasks may be performed. | 03-13-2014 |
20140152758 | COMMUNICATION USING INTERACTIVE AVATARS - Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar; initiating communication; detecting a user input; identifying the user input; identifying an animation command based on the user input; generating avatar parameters; and transmitting at least one of the animation command and the avatar parameters. | 06-05-2014 |
20140156398 | PERSONALIZED ADVERTISEMENT SELECTION SYSTEM AND METHOD - A system and method for selecting an advertisement to present to a consumer includes detecting facial regions in an image, identifying one or more consumer characteristics (mood, gender, age, etc.) of said consumer in the image, identifying one or more advertisements to present to the consumer based on a comparison of the consumer characteristics with an advertisement database including a plurality of advertisement profiles, and presenting a selected one of the identified advertisements to the consumer on a media device. | 06-05-2014 |
20140195983 | 3D GRAPHICAL USER INTERFACE - Systems, apparatus, articles, and methods are described including operations for a 3D graphical user interface. | 07-10-2014 |
20140196083 | CONTENT-BASED CONTROL SYSTEM - Generally this disclosure describes a method for controlling the operation of a system based on a determination of content that is airing on a channel. A method may include transmitting at least one message including instructions to sample content from a channel, receiving a message indicating that certain content on the channel is complete, and activating a notification indicating that the certain content on the channel is complete. Another method may include receiving a message including instructions to sample content from a channel, sampling content from the channel, transmitting a message including the content sample, receiving a message comprising information related to the content sample, and determining whether certain content is complete on the channel based on the received information. | 07-10-2014 |
20140198121 | SYSTEM AND METHOD FOR AVATAR GENERATION, RENDERING AND ANIMATION - A video communication system that replaces actual live images of the participating users with animated avatars. The system allows generation, rendering and animation of a two-dimensional (2-D) avatar of a user's face. The 2-D avatar represents a user's basic face shape and key facial characteristics, including, but not limited to, position and shape of the eyes, nose, mouth, and face contour. The system further allows adaptive rendering, which enables different scales of the 2-D avatar to be displayed on the different sized displays of associated user devices. | 07-17-2014 |
20140218371 | FACIAL MOVEMENT BASED AVATAR ANIMATION - Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently. | 08-07-2014 |
20140218459 | COMMUNICATION USING AVATAR - Generally this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters. | 08-07-2014 |
20140223474 | INTERACTIVE MEDIA SYSTEMS - Generally this disclosure describes interactive media methods and systems. A method may include capturing an image, detecting at least one face in the image, determining an identity and expression corresponding to the at least one face, generating an icon for the at least one face based on the corresponding expression, and displaying the icon on a video monitor. | 08-07-2014 |
20140241574 | TRACKING AND RECOGNITION OF FACES USING SELECTED REGION CLASSIFICATION - Methods, apparatuses, and articles associated with facial tracking and recognition are disclosed. In embodiments, facial images may be detected in video or still images and tracked. After normalization of the facial images, feature data may be extracted from selected regions of the faces to compare to associated feature data in known faces. The selected regions may be determined using a boosting machine learning process over a set of known images. After extraction, individual two-class comparisons may be performed between corresponding feature data from regions on the tested facial images and from the known facial image. The individual two-class classifications may then be combined to determine a similarity score for the tested face and the known face. If the similarity score exceeds a threshold, an identification of the known face may be output or otherwise used. Additionally, tracking with voting may be performed on faces detected in video. After a threshold of votes is reached, a given tracked face may be associated with a known face. | 08-28-2014 |
20140267413 | ADAPTIVE FACIAL EXPRESSION CALIBRATION - Technologies for generating an avatar with a facial expression corresponding to a facial expression of a user include capturing a reference user image of the user on a computing device when the user is expressing a reference facial expression for registration. The computing device generates reference facial measurement data based on the captured reference user image and compares the reference facial measurement data with facial measurement data of a corresponding reference expression of the avatar to generate facial comparison data. After a user has been registered, the computing device captures a real-time facial expression of the user and generates real-time facial measurement data based on the captured real-time image. The computing device applies the facial comparison data to the real-time facial measurement data to generate modified expression data, which is used to generate an avatar with a facial expression corresponding with the facial expression of the user. | 09-18-2014 |
20140267544 | SCALABLE AVATAR MESSAGING - Technologies for distributed generation of an avatar with a facial expression corresponding to a facial expression of a user include capturing real-time video of a user of a local computing device. The computing device extracts facial parameters of the user's facial expression using the captured video and transmits the extracted facial parameters to a server. The server generates an avatar video of an avatar having a facial expression corresponding to the user's facial expression as a function of the extracted facial parameters and transmits the avatar video to a remote computing device. | 09-18-2014 |
20140348434 | ACCELERATED OBJECT DETECTION FILTER USING A VIDEO MOTION ESTIMATION MODULE - Systems, apparatus and methods are described related to an accelerated object detection filter using a video motion estimation module. | 11-27-2014 |
20150049079 | TECHNIQUES FOR THREE-DIMENSIONAL IMAGE EDITING - Techniques for three-dimensional (3D) image editing are described. In one embodiment, for example, an apparatus may comprise a processor circuit and a 3D graphics management module, and the 3D graphics management module may be operable by the processor circuit to determine modification information for a first sub-image in a 3D image comprising the first sub-image and a second sub-image, modify the first sub-image based on the modification information for the first sub-image, determine modification information for the second sub-image based on the modification information for the first sub-image, and modify the second sub-image based on the modification information for the second sub-image. Other embodiments are described and claimed. | 02-19-2015 |
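Several filings above (e.g. 20120114172 and 20140072172) describe decomposing face detection into sub-tasks that are allocated across multiple image frames under a resource budget. A minimal sketch of that scheduling idea follows; the sub-task names, the abstract cost units, and the greedy placement strategy are all illustrative assumptions, not details taken from the patents.

```python
from collections import deque

def allocate_subtasks(subtasks, frame_count, frame_budget):
    """Spread detection sub-tasks across image frames under a per-frame budget.

    subtasks: list of (name, cost) pairs, where cost is an abstract work unit.
    frame_budget: maximum total cost allowed per frame.
    Returns a list of per-frame sub-task name lists.
    """
    frames = [[] for _ in range(frame_count)]
    remaining = [frame_budget] * frame_count
    # Place the most expensive sub-tasks first (greedy bin packing).
    queue = deque(sorted(subtasks, key=lambda t: -t[1]))
    while queue:
        name, cost = queue.popleft()
        # Assign to the frame with the most budget left.
        idx = max(range(frame_count), key=lambda i: remaining[i])
        if remaining[idx] < cost:
            raise ValueError(f"sub-task {name!r} exceeds remaining budget")
        frames[idx].append(name)
        remaining[idx] -= cost
    return frames
```

For example, four sub-tasks of costs 3, 2, 2 and 1 fit into two frames with a budget of 5 each, so a full-frame detection pass is amortized over two frames rather than blocking one.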
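Filing 20140050358 builds its facial-landmark pipeline on an integral image, which lets the cascade classifier read any rectangular pixel sum in constant time. A minimal pure-Python sketch of that data structure (function names are assumptions for illustration):

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            # Add the column total accumulated from the rows above.
            ii[y][x] = row_sum + (ii[y - 1][x] if y else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of the image over the inclusive rectangle (x0,y0)-(x1,y1),
    using four table lookups regardless of rectangle size."""
    total = ii[y1][x1]
    if x0:
        total -= ii[y1][x0 - 1]
    if y0:
        total -= ii[y0 - 1][x1]
    if x0 and y0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

The constant-time `rect_sum` is what makes evaluating many box-shaped classifier features over a cropped face rectangle cheap.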