Class / Patent application number | Description | Number of patent applications / Date published |
348586000 | Foreground/background insertion | 22 |
20080252788 | Equalization of noise characteristics of the components of a composite image without degrading subject image quality - A visible mismatch in noise characteristics between a portion of a background scene inserted into a composite image by a matte generated from a blue screen and a second portion of the same background inserted by a garbage matte is significantly reduced by adding noise characteristics extracted from the foreground image to the portion of the background scene inserted by the garbage matte. The selective addition of foreground noise characteristics to portions of the background scene significantly enhances the realistic look of the composite image. | 10-16-2008 |
20080303949 | MANIPULATING VIDEO STREAMS - Methods, systems and apparatus, including computer program products, for manipulating video streams in a videoconferencing session. A reference background image is identified from a first video frame in a video stream of a videoconferencing environment. A subsequent video frame from the video stream is received. Areas of the subsequent video frame corresponding to a foreground area are identified. The foreground area includes pixels of the subsequent video frame that differ from the corresponding pixels in the first video frame. The foreground area is transformed based on a selected image transformation. The transformed foreground area is composited onto the reference background image to form a composite video frame. | 12-11-2008 |
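A minimal sketch of the pipeline 20080303949 describes, assuming numpy arrays for the frames and a simple per-pixel difference threshold; the threshold value and the example transform are illustrative assumptions, not details taken from the application:

```python
import numpy as np

def composite_transformed_foreground(reference_bg, frame, transform, threshold=30):
    """Find foreground as pixels that differ from the reference background,
    transform the foreground, and composite it back onto the reference background."""
    diff = np.abs(frame.astype(np.int16) - reference_bg.astype(np.int16)).sum(axis=2)
    foreground = diff > threshold                  # pixels that changed vs. the reference frame

    transformed = transform(frame)                 # e.g. blur, tint, pixelate the whole frame
    out = reference_bg.copy()
    out[foreground] = transformed[foreground]      # keep only the transformed foreground
    return out

# Illustrative transform: darken the foreground
darken = lambda img: (img * 0.5).astype(img.dtype)
# composite = composite_transformed_foreground(reference_bg, frame, darken)
```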
20090066841 | Image processing method and computer readable medium - An image processing method includes the following steps. Firstly, foreground image data and background image data are received, wherein the foreground and the background image data correspond to the same pixel in a display panel. Then, a foreground ratio and a background ratio are determined in response to an operational event. Next, a first value is obtained by multiplying the foreground image data by the foreground ratio, and a second value is obtained by multiplying the background image data by the background ratio. Then, output image data is obtained by adding the first value and the second value. After that, the output image data is displayed at the pixel in the display panel. | 03-12-2009 |
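At its core 20090066841 describes a per-pixel weighted blend. A minimal numpy sketch, with the ratio values chosen purely for illustration:

```python
import numpy as np

def blend_pixel(foreground, background, fg_ratio, bg_ratio):
    """output = foreground * fg_ratio + background * bg_ratio, computed per pixel."""
    out = foreground.astype(np.float32) * fg_ratio + background.astype(np.float32) * bg_ratio
    return np.clip(out, 0, 255).astype(np.uint8)

# e.g. a 60/40 mix determined by some operational event (ratios are illustrative)
# mixed = blend_pixel(fg_frame, bg_frame, 0.6, 0.4)
```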
20090122196 | SYSTEMS AND METHODS FOR ASSOCIATING METADATA WITH SCENES IN A VIDEO - Systems and methods for associating metadata with scene changes are described. At least one embodiment includes a system for associating metadata with a video. The system comprises an input module for reading a first video, wherein the input module is configured to receive special effects specified by a user and an insertion point in which to insert the special effects. The system further comprises a key frame module for identifying at least one key frame preceding the specified insertion point, the key frame comprising at least one of a scene change and a particular scene identified by the user. The system also includes a metadata module for calculating time differences between the insertion point and the one or more key frames, the metadata module further configured to store the special effects input by the user, the insertion point, the time differences, and key frames as metadata. | 05-14-2009 |
20100066910 | VIDEO COMPOSITING METHOD AND VIDEO COMPOSITING SYSTEM - Using a background-side image, which is a shot of an arbitrary background, and a material-side image, which is a shot of a composition material against the same background, the two images are compared to extract an image material to be composited. Images shot against any scenery can thus be used to extract a composition material, being part of the images, without requiring a specific background such as a blue screen. | 03-18-2010 |
20100134688 | IMAGE PROCESSING SYSTEM - A computer graphics generation system combines video images of a scene captured by a camera with one or more rendered computer generated objects. The system comprises a camera arranged to generate an image signal representative of a scene including a reference object of a predetermined shape, and an image processor. In operation, the image processor identifies the reference object from the image signal and detects a luminance distribution across a surface of the reference object by estimating a luminance magnitude at a plurality of surface points. It then estimates the direction of light incident on the reference object from the detected luminance distribution by calculating the average of a plurality of luminance vectors, each luminance vector corresponding to one of the surface points and comprising the luminance magnitude at that point and a luminance direction perpendicular to the surface at that point. The luminance distribution across the surface of the reference object is detected only for luminance above a threshold clipping level. Accordingly, the “wobble” of the computer generated objects in the scene can be reduced and a more stable image provided by the light direction estimation. | 06-03-2010 |
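The light-direction estimate in 20100134688 is, in effect, an average of surface directions weighted by the luminance observed at each surface point, restricted to points whose luminance exceeds the clipping level. A rough sketch under those assumptions; the array shapes and the threshold handling are illustrative:

```python
import numpy as np

def estimate_light_direction(normals, luminances, clip_level):
    """normals: (N, 3) unit vectors perpendicular to the reference object's surface
    at N sampled points; luminances: (N,) luminance magnitude at each point.
    Returns a unit vector estimating the direction of incident light."""
    keep = luminances > clip_level                     # only luminance above the clipping level
    vectors = normals[keep] * luminances[keep, None]   # luminance vector per surface point
    mean = vectors.mean(axis=0)                        # average of the luminance vectors
    return mean / np.linalg.norm(mean)
```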
20100225817 | Real Time Motion Picture Segmentation and Superposition - Various embodiments of separating a picture part of interest from an arbitrary background are described. The background may be a moving or still frame. The picture part of interest and background frames may be in or out of focus. One separation approach employs the difference between luminance and chrominance values of the input and background frames where changes in luminance from frame to frame are compensated for. In another approach, picture part of interest separation is based on spatial resolution differences between the background and the picture part of interest frames. Parameter matching can also be performed for the picture part of interest and the basic picture into which the picture part of interest is embedded. Further, a separated picture part of interest can be embedded into a basic picture containing text. | 09-09-2010 |
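One reading of the luminance/chrominance approach in 20100225817 is a difference matte computed in a luma/chroma space, with a global gain applied to the input luma so that frame-to-frame lighting changes are compensated for. A rough sketch under that reading; the BT.601 conversion weights are standard, while the thresholds and the global-gain compensation are illustrative assumptions:

```python
import numpy as np

def to_ycbcr(rgb):
    """BT.601 RGB -> YCbCr components (float)."""
    rgb = rgb.astype(np.float32)
    y  = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = 128 - 0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr = 128 + 0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return y, cb, cr

def difference_matte(frame, background, y_thresh=20.0, c_thresh=10.0):
    """Mask of the picture part of interest: pixels whose luma or chroma differs
    from the background frame, after compensating a global luminance change."""
    fy, fcb, fcr = to_ycbcr(frame)
    by, bcb, bcr = to_ycbcr(background)
    gain = by.mean() / max(fy.mean(), 1e-6)            # frame-to-frame luminance compensation
    fy = fy * gain
    return (np.abs(fy - by) > y_thresh) | (np.abs(fcb - bcb) > c_thresh) | (np.abs(fcr - bcr) > c_thresh)
```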
20110164185 | APPARATUS AND METHOD FOR PROCESSING IMAGE DATA - Provided are an image processing apparatus and method for extracting foreground data from image data. The image processing apparatus generates background data and compares the background data with received data to extract a foreground. The foreground may be extracted using information regarding distances from an image acquiring unit to objects included in the received data. | 07-07-2011 |
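Where 20110164185 relies on per-pixel distance information, the foreground test can be as simple as comparing each pixel's distance against the background model. A toy sketch, assuming a depth map aligned with the image and an illustrative margin:

```python
import numpy as np

def extract_foreground_by_depth(depth, background_depth, margin=0.1):
    """Foreground = pixels measurably closer to the image acquiring unit than the background model."""
    return depth < (background_depth - margin)   # boolean mask of foreground pixels
```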
20110170007 | Image processing device, image control method, and computer program - There is provided an image processing device that includes an information superimposition portion, a display format acquisition portion, and a superimposition control portion. The information superimposition portion superimposes specified information on an input image and outputs the image with the superimposed information. The display format acquisition portion acquires information about a display format of an image that is currently being displayed. The superimposition control portion, based on the information that the display format acquisition portion has acquired about the display format of the image that is currently being displayed, performs control that relates to the superimposing of the superimposed information on the input image by the information superimposition portion. This configuration makes it possible to correctly superimpose information on an image irrespective of the image format. | 07-14-2011 |
20120120320 | SYSTEM AND METHOD FOR ON-THE-FLY KEY COLOR GENERATION - The video output system in a computer system reads pixel information from a frame buffer to generate a video output signal. A full-motion video window may also be displayed. Reading from both the frame buffer and the full-motion video buffer when displaying the full-motion video window wastes valuable memory bandwidth. The disclosed system therefore provides methods for identifying where the video output system must read from the frame buffer and where it must read from the full-motion video buffer, while minimizing the area that it reads from both buffers. | 05-17-2012 |
20130162911 | DOWNSTREAM VIDEO COMPOSITION - A video source, a display and a method of processing multilayered video are disclosed. The video source decodes a multilayered video bit stream to transmit synchronized streams of decompressed video images and corresponding overlay images to an interconnected display. The display receives separate streams of video and overlay images. Transmission and reception of corresponding video and overlay images is synchronized in time. A video image received in the display can be selectively processed separately from its corresponding overlay image. The video image as processed at the display is later composited with its corresponding overlay image to form an output image for display. | 06-27-2013 |
20130182184 | VIDEO BACKGROUND INPAINTING - Several implementations provide inpainting solutions, and particular solutions provide spatial and temporal continuity. One particular implementation accesses first and second pictures that each include a representation of a background. A background value is determined for a pixel in an occluded area of the background in the first picture based on a source region in the first picture. A source region in the second picture is accessed that is related to the source region in the first picture. A background value is determined for a pixel in an occluded area of the background in the second picture using an algorithm that is based on the source region in the second picture. Another particular implementation displays a picture showing an occluded background region. Input is received that selects a fill portion and a source portion. An algorithm fills the fill portion based on the source portion, and the resulting picture is displayed. | 07-18-2013 |
20130265495 | VIDEO COMMUNICATION DEVICE AND METHOD THEREOF - The present invention discloses a video communication device and a method thereof. The method comprises: obtaining a current battery energy level of a device; loading information about the battery energy level into a ready-to-send video; and encoding the ready-to-send video and sending it. The present invention can display the battery energy level of one party's device on the video communication image of the other party's device, so that the other party is aware of the current battery energy level of the first party's device, the pace of the video communication can be controlled well, and the quality of the video communication can be improved. | 10-10-2013 |
20160037087 | IMAGE SEGMENTATION FOR A LIVE CAMERA FEED - Techniques are disclosed for segmenting an image frame of a live camera feed. A biasing scheme can be used to initially localize pixels within the image that are likely to contain the object being segmented. An optimization algorithm for an energy optimization function, such as a graph cut algorithm, can be used with a non-localized neighborhood graph structure and the initial location bias for localizing pixels in the image frame representing the object. Subsequently, a matting algorithm can be used to define a pixel mask surrounding at least a portion of the object boundary. The bias and the pixel mask can be continuously updated and refined as the image frame changes with the live camera feed. | 02-04-2016 |
20160112648 | PARALLEL VIDEO EFFECTS, MIX TREES, AND RELATED METHODS - Parallel video effects, mix trees, and related methods are disclosed. Video data inputs are mixed in parallel according to a mix parameter signal associated with one of the video data inputs. A resultant parallel mixed video data output is further mixed with a further video data input according to a composite mix parameter signal, which corresponds to a product of mix parameter signals that are based on mix parameter signals respectively associated with multiple video data inputs. The mix parameter signals could be alpha signals, in which case the composite mix parameter signal could correspond to a product of complementary alpha signals that are complements of the alpha signals. Composite mix parameter signals and mix results could be truncated based on a number of levels in a multi-level mix tree and an error or error tolerance. Rounding could be applied to a final mix output or an intermediate mix result. | 04-21-2016 |
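The key identity behind the composite mix parameter in 20160112648 is that, in a serial mix tree, the bottom layer's surviving contribution is scaled by the product of the complements of the alphas applied above it. A small numerical sketch of that equivalence; the layer values and alphas are illustrative:

```python
import numpy as np

a, b, c = np.float32(0.20), np.float32(0.70), np.float32(0.90)   # pixel values of three layers
alpha_b, alpha_c = 0.6, 0.3                                      # mix parameters for layers b and c

# Serial mix tree: mix a with b, then mix the result with c
serial = (a * (1 - alpha_b) + b * alpha_b) * (1 - alpha_c) + c * alpha_c

# Parallel form: b and c contributions are computed independently, while a is
# scaled by the composite mix parameter, the product of complementary alphas
composite = (1 - alpha_b) * (1 - alpha_c)
parallel = a * composite + b * alpha_b * (1 - alpha_c) + c * alpha_c

assert np.isclose(serial, parallel)
```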
348587000 | Including hue detection (e.g., chroma key) | 7 |
20080211966 | Image display device - A chroma key color determining section is operable to determine a chroma key color in an image of a plurality of input images. An image synthesizing section is operable to overlap the images with each other by switching the images with respect to each pixel based on an output of the chroma key color determining section. The plurality of images are switched by switching all of the pixels of the image input to the chroma key color determining section into the chroma key color images. | 09-04-2008 |
20080211967 | IMAGING UNIT, PORTABLE TERMINAL DEVICE, AND PORTABLE TERMINAL SYSTEM - The present invention provides an imaging unit, a portable terminal device, and a portable terminal system capable of performing a satisfactory key synthesizing process. An imaging unit mainly includes an imaging section, a conversion section, and a key signal generating section. The conversion section converts the format of the imaged image data output from the imaging section from YUV format to RGB format. The key signal generating section generates a key signal based on each pixel of the imaged image data and on the reference data for the imaged image data input from the imaging section. The key signal generating section also outputs foreground image data having the generated key signal and the corresponding RGB-format pixel data as its minimum configuration unit. An image synthesizing section of a main unit generates synthesized image data by overlapping the foreground image data from the imaging unit and the background image data stored in a RAM, based on the key signal contained in the foreground image data. | 09-04-2008 |
20080231752 | Method for generating a clear frame from an image frame containing a subject disposed before a backing of nonuniform illumination - A method for extracting a clear frame from an image frame containing one or more subjects disposed before a backing whose color and luminance vary smoothly from one backing area to another. The method replaces a subject's pixel signal levels with backing pixel signal levels. The method selects a known backing pixel, compares the RGB signal levels of an adjacent pixel with the RGB signal levels of the known backing pixel, and identifies the adjacent pixel as a backing pixel when its RGB signal levels are each within an assigned tolerance of the RGB signal levels of the known backing pixel. This comparison is performed on all known backing pixels with unidentified adjacent pixels until all pixels in the image frame have been examined and all backing pixels identified. The method then replaces the RGB signal levels of the subject's pixels with the RGB signal levels of backing pixels located at opposite edges of the subject, by interpolation and/or extrapolation across the subject, thereby removing the subject to leave a clear frame. | 09-25-2008 |
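The backing-identification step of 20080231752 amounts to a tolerance-based region grow from known backing pixels, followed by interpolation of backing levels across the subject. A compact sketch of both steps, assuming 4-connectivity, per-scan-line interpolation, and an illustrative tolerance:

```python
import numpy as np
from collections import deque

def grow_backing(frame, seed, tol=12):
    """Flood-fill from a known backing pixel: a neighbour joins the backing when each
    of its RGB signal levels is within `tol` of the already-identified backing pixel."""
    h, w, _ = frame.shape
    backing = np.zeros((h, w), dtype=bool)
    backing[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not backing[ny, nx]:
                if np.all(np.abs(frame[ny, nx].astype(int) - frame[y, x].astype(int)) <= tol):
                    backing[ny, nx] = True
                    queue.append((ny, nx))
    return backing

def clear_frame(frame, backing):
    """Replace subject pixels by interpolating backing levels across each scan line."""
    out = frame.astype(np.float32).copy()
    for y in range(frame.shape[0]):
        xs = np.flatnonzero(backing[y])
        if len(xs) >= 2:
            holes = np.flatnonzero(~backing[y])
            for ch in range(3):
                out[y, holes, ch] = np.interp(holes, xs, frame[y, xs, ch])
    return out.astype(frame.dtype)
```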
20100253850 | VIDEO PRESENTATION SYSTEM - A video presentation system including a single physical computing device, a video capture device in communication with the physical computing device, and a computer readable medium readable by the physical computing device. The medium includes a video capture code segment for reading a video signal comprising a first plurality of pixels from the video capture device and storing the first plurality of pixels in a buffer, a desktop capture code segment for capturing the content of a desktop of the physical computing device and storing the content of the desktop in a buffer, and a chroma key code segment for setting a color value for a pixel in one buffer based on the color information of another pixel in another buffer. | 10-07-2010 |
20140125868 | APPARATUS AND METHOD FOR MIXING GRAPHICS WITH VIDEO IMAGES - In one embodiment, graphics frames are received, where each graphics frame includes one or more regions where pixels depict graphics that represent an on screen display (OSD) used to interact with a programmable multimedia controller, and a background region where pixels are set to one or more predetermined colors. Further, video images are received, where at least some of the video images correspond to the plurality of graphics frames. Mixed images are created by mixing the graphics frames and the corresponding video images: where pixels of the graphics frame are not set to the one or more predetermined colors, the mixing blends a color of at least some of the pixels of the graphics frame with a color of pixels of a corresponding video image, and where pixels of the graphics frame are set to the one or more predetermined colors, the mixing passes the pixels of the corresponding video image. | 05-08-2014 |
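A minimal per-pixel sketch of the mixing rule in 20140125868, assuming a single predetermined key color and a fixed blend factor; both are illustrative choices, not values from the application:

```python
import numpy as np

def mix_osd_with_video(graphics, video, key_color=(255, 0, 255), blend=0.75):
    """Pass video pixels wherever the graphics frame holds the predetermined color;
    elsewhere blend the graphics (OSD) color with the video color."""
    key = np.all(graphics == np.array(key_color, dtype=graphics.dtype), axis=-1)
    blended = graphics.astype(np.float32) * blend + video.astype(np.float32) * (1 - blend)
    out = blended.astype(video.dtype)
    out[key] = video[key]                 # background region: pass the video through
    return out
```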
20160044294 | SYSTEM AND METHOD FOR PERSONAL FLOATING VIDEO - Systems and methods for providing a user with a floating video of a subject are provided. The systems and methods generate a floating video by removing background pixels of a primary video using a background image/video, where the user records the primary video and the background image/video. The floating video may be generated on a recording device of the user and/or a remote server. Further, the user and/or the remote server may host the floating video. | 02-11-2016 |
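A rough sketch of the background-removal step in 20160044294, assuming the primary video frame and the user-recorded background are pixel-aligned and that removed background pixels become transparent; the difference threshold is illustrative:

```python
import numpy as np

def floating_frame(primary, background, threshold=25):
    """Return an RGBA frame in which pixels matching the recorded background are transparent."""
    diff = np.abs(primary.astype(np.int16) - background.astype(np.int16)).sum(axis=2)
    alpha = np.where(diff > threshold, 255, 0).astype(np.uint8)   # keep only the subject
    return np.dstack([primary, alpha])
```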
20160156893 | METHOD OF AND APPARATUS FOR PROCESSING FRAMES IN A DATA PROCESSING SYSTEM | 06-02-2016 |