Entries |
Document | Title | Date |
20080204598 | Real-time film effects processing for digital video - A method, apparatus, and computer software for applying imperfections in real time to streaming video so that the resulting digital video resembles cinema film. | 08-28-2008 |
20090015718 | Video processing method - A video processing apparatus obtains information describing the amount of displacement of images in multiple predetermined units (such as video durations) imaged by an imaging apparatus from an image at steady state, which is a reference, and displays the images in the multiple predetermined units in decreasing order of amounts of displacement based on the obtained information describing the amounts of displacement. | 01-15-2009 |
20090040385 | METHODS AND SYSTEMS FOR CONTROLLING VIDEO COMPOSITING IN AN INTERACTIVE ENTERTAINMENT SYSTEM - An interactive video compositing device includes a chroma-key mixer, video switcher and control circuitry. The chroma-key mixer generates a composite image by combining a real-time image, such as one captured by a video recorder, with a prerecorded video image, such as a movie. The composite image includes the modified real-time image superimposed, or overlaid, onto the prerecorded image. The video switcher automatically selects either the composite image or the prerecorded image to be output to a display. The control circuitry controls the video switcher and other outputted signals based on data file information that corresponds to content of the prerecorded image or media. For example, the data files may contain information relating to the presence (or absence) of a particular character in a movie scene, thus allowing for the output and display, at appropriate times, of the real-time composite image instead of the prerecorded image. | 02-12-2009 |
20090059076 | GENERATING DEVICE FOR SEQUENTIAL INTERLACED SCENES - The present invention relates to a generating device for sequential interlaced scenes comprising: an interlaced scene generating unit; a video signal input unit, which electrically connects with the interlaced scene generating unit, for receiving a first scene video signal and a second scene video signal, and transferring the first scene video signal and the second scene video signal to the interlaced scene generating unit; wherein the interlaced scene generating unit is able to generate an interlaced scene video signal, the interlaced scene video signal combining the first scene video signal and the second scene video signal by interlacing them over time. | 03-05-2009 |
20090079871 | ADVERTISEMENT INSERTION POINTS DETECTION FOR ONLINE VIDEO ADVERTISING - Systems and methods for determining insertion points in a first video stream are described. The insertion points are configured for inserting at least one second video into the first video. In accordance with one embodiment, a method for determining the insertion points includes parsing the first video into a plurality of shots. The plurality of shots includes one or more shot boundaries. The method then determines one or more insertion points by balancing a discontinuity metric and an attractiveness metric of each shot boundary. | 03-26-2009 |
20090115904 | SYSTEM AND METHOD OF DISPLAYING A VIDEO STREAM - The present disclosure is generally directed to a video stream processing system and to a method of displaying a video stream. In a particular embodiment, the method includes, during a first time period, displaying a first version of a received video stream while recovering a second version of the received video stream, the first version of the received video stream having a lower video display quality than the second version of the received video stream. The first time period begins no more than approximately 100 milliseconds after a detected channel change. The method also includes switching from display of the first version of the received video stream to display of the second version of the received video stream during a second time period. | 05-07-2009 |
20090122195 | System and Method for Combining Image Sequences - A system and method combines videos for display in real-time. A set of narrow-angle videos and a wide-angle video are acquired of the scene, in which a field of view in the wide-angle video substantially overlaps the fields of view in the narrow-angle videos. Homographies are determined among the narrow-angle videos using the wide-angle video. Temporally corresponding selected images of the narrow-angle videos are transformed and combined into a transformed image. Geometry of an output video is determined according to the transformed image and geometry of a display screen of an output device. The homographies and the geometry of the display screen are stored in a graphic processor unit, and subsequent images in the set of narrow-angle videos are transformed and combined by the graphic processor unit to produce an output video in real-time. | 05-14-2009 |
20090167949 | Method And Apparatus For Performing Edge Blending Using Production Switchers - A video production switcher comprises a number of mix effects units, each mix effects unit providing a video output signal for use in displaying images on a display; a memory for storing an image; and a controller for (a) mapping the stored image to a global space, the global space associated with the display, and (b) for determining a number of viewports in the global space, each viewport associated with one of the number of mix effects units, a portion of the stored image and a screen of the display; and wherein those viewports associated with adjacent screens of the display overlap. | 07-02-2009 |
20090237564 | INTERACTIVE IMMERSIVE VIRTUAL REALITY AND SIMULATION - An immersive audio-visual system (and a method) for creating an enhanced interactive and immersive audio-visual environment is disclosed. The immersive audio-visual environment enables participants to enjoy true interactive, immersive audio-visual reality experience in a variety of applications. The immersive audio-visual system comprises an immersive video system, an immersive audio system and an immersive audio-visual production system. The video system creates immersive stereoscopic videos that mix live videos, computer generated graphic images and human interactions with the system. The immersive audio system creates immersive sounds with each sound resource positioned correct with respect to the position of an associated participant in a video scene. The immersive audio-video production system produces an enhanced immersive audio and videos based on the generated immersive stereoscopic videos and immersive sounds. A variety of applications are enabled by the immersive audio-visual production including casino-type interactive gaming system and training system. | 09-24-2009 |
20090237565 | VIDEO COMPOSITING SYSTEMS FOR PROVIDING INTERACTIVE ENTERTAINMENT - An interactive video compositing device includes a chroma-key mixer, video switcher and control circuitry. The chroma-key mixer generates a composite image by combining a real-time image, such as one captured by a video recorder, with a prerecorded video image, such as a movie. The composite image includes the modified real-time image superimposed, or overlaid, onto the prerecorded image. The video switcher automatically selects either the composite image or the prerecorded image to be output to a display. The control circuitry controls the video switcher and other outputted signals based on data file information that corresponds to content of the prerecorded image or media. For example, the data files may contain information relating to the presence (or absence) of a particular character in a movie scene, thus allowing for the output and display, at appropriate times, of the real-time composite image instead of the prerecorded image. | 09-24-2009 |
20090237566 | METHODS FOR INTERACTIVE VIDEO COMPOSITING - An interactive video compositing device includes a chroma-key mixer, video switcher and control circuitry. The chroma-key mixer generates a composite image by combining a real-time image, such as one captured by a video recorder, with a prerecorded video image, such as a movie. The composite image includes the modified real-time image superimposed, or overlaid, onto the prerecorded image. The video switcher automatically selects either the composite image or the prerecorded image to be output to a display. The control circuitry controls the video switcher and other outputted signals based on data file information that corresponds to content of the prerecorded image or media. For example, the data files may contain information relating to the presence (or absence) of a particular character in a movie scene, thus allowing for the output and display, at appropriate times, of the real-time composite image instead of the prerecorded image. | 09-24-2009 |
20090244385 | INFORMATION DISPLAY APPARATUS AND INFORMATION DISPLAY METHOD - An information display apparatus includes an information acquisition unit configured to acquire a plurality of information items through a network in accordance with an acquisition script in a scenario, the scenario including a conversion script and a motion addition script; an information conversion unit configured to extract one or more parts to be displayed from each information item acquired by the information acquisition unit in accordance with the conversion script included in the scenario; and a motion addition unit configured to process all or some of the parts extracted by the information conversion unit so that they are displayed with automatically changing content and/or with an audio output, in accordance with the motion addition script included in the scenario. | 10-01-2009 |
20090310023 | ONE PASS VIDEO PROCESSING AND COMPOSITION FOR HIGH-DEFINITION VIDEO - A video composition model that provides a set of application programming interfaces (APIs) to set device contexts, and determine capabilities of graphics hardware from a device driver. After the model determines a configuration, the model determines input video stream states applicable to frame rates, color-spaces and alpha indexing of input video streams, interactive graphics, and background images. The model prepares the input video frames and reference frames, as well as a frame format and input/output frame index information. The input video streams, interactive graphics and background images are processed individually and mixed to generate an output video stream. | 12-17-2009 |
20100060793 | METHOD AND SYSTEM FOR FUSING VIDEO STREAMS - A method is provided for converting video streams capturing a scene into video streams fitting a viewpoint configuration. The video streams are provided by video cameras, and the method includes receiving parameters associated with the viewpoint configuration and converting video streams of the scene as captured by the video cameras into video streams fitting the viewpoint configuration. The viewpoint configuration may be a dynamic viewpoint configuration determined by a joystick module. The converting may be done using a three-dimensional method and includes separating objects of the scene from other portions of the scene. The method may include integrating the scene into the video stream of a certain scene associated with the viewpoint configuration. Sometimes the viewpoint configuration includes a varying zoom, and the converting is done using one of several methods: transition between adjacent video cameras having different zoom values, real zooming of a video camera having a motorized zoom, and digital zooming in of video streams. The converting may be done on video streams which have captured the scene in advance, before commencing the converting. | 03-11-2010 |
20100149419 | MULTI-VIDEO SYNTHESIS - Embodiments that provide multi-video synthesis are disclosed. In accordance with one embodiment, multi-video synthesis includes breaking a main video into a plurality of main frames and breaking a supplementary video into a plurality of supplementary frames. The multi-video synthesis also includes assigning one or more supplementary frames to each of a plurality of states of a Hidden Markov Model (HMM), where each of the plurality of states corresponds to one or more main frames. The multi-video synthesis further includes determining optimal frames in the plurality of main frames for insertion of the plurality of supplementary frames based on the plurality of states and visual properties. The optimal frames include optimal insertion positions. The multi-video synthesis additionally includes inserting the plurality of supplementary frames into the optimal insertion positions to form a synthesized video. | 06-17-2010 |
20100201881 | RECEIVING APPARATUS AND METHOD FOR DISPLAY OF SEPARATELY CONTROLLABLE COMMAND OBJECTS, TO CREATE SUPERIMPOSED FINAL SCENES - A receiving apparatus and method for display of final superimposed scenes from a receiver adapted to receive shared object control information used for forming final superimposed scenes and display final superimposed scenes. The final superimposed scenes are formed by superimposing two or more shared scenes each comprising one or more shared objects. The shared object comprises user-selectable command objects that are separately controllable independent of the shared scenes. | 08-12-2010 |
20100253847 | TWO-STAGE DIGITAL PROGRAM INSERTION SYSTEM - Apparatus and methods are provided for inserting advertisements and/or to perform grooming functions after a video, audio and/or data stream has been transrated and/or encrypted. In this manner, ad insertion and grooming can be performed close to the edge of a video distribution network. Transrating and encryption of a program into which content is to be later inserted can be accomplished before the program is transmitted. Thus, a single encrypted version of a program can be transmitted from a central point in the network to multiple recipients, while providing the benefits of subsequent targeted ad insertion or grooming downstream of the central point. | 10-07-2010 |
20100253848 | DISPLAYING IMAGE FRAMES IN COMBINATION WITH A SUBPICTURE FRAME - An aspect of the present invention reduces memory accesses while forming combined images of a subpicture frame with each of a sequence of image frames. In an embodiment, line information indicating the specific lines (e.g., rows) of the subpicture frame having display information is first generated. The line information is then used to retrieve only those lines of the subpicture frame that have display information while forming combined frames. According to another aspect, the portions of the combined images having sharp edges due to the inclusion of the subpicture frame are identified based on the line information, and such portions are then filtered. The sequence of combined frames may be displayed on a video output device. | 10-07-2010 |
20100253849 | VIDEO PROCESSING DEVICE, VIDEO DISPLAY DEVICE, AND VIDEO PROCESSING METHOD - A video processing device that divides and processes video data representing a video for one screen includes: an input unit that receives input of the video data; plural image processing units that are provided to correspond to respective plural areas obtained by dividing the video data, receive image data corresponding to the areas, and apply predetermined image processing to image data; an image-data extending unit that acquires image data, which is required by the image processing unit that processes an area adjacent to each of the areas, prior to the image processing by each of the image processing units from the image data corresponding to the area received by each of the image processing units, inputs the image data to the adjacent image processing unit, extends the image data corresponding to the area received by each of the image processing units, and sets the image data as a target of the predetermined image processing by each of the image processing units; and an image combining unit that receives the image data processed by the plural image processing units and reconfigures the screen. | 10-07-2010 |
20100295995 | METHOD AND SYSTEM FOR TOASTED VIDEO DISTRIBUTION - The systems and methods disclosed transmit a composite channel to a receiver. The composite channel may be a static channel that contains different original channels of content in different locations on a displayed page, or may be a dynamic channel that is processed by the receiver to display multiple different video streams on a single display device. | 11-25-2010 |
20100309376 | Multimedia Presenting System, Multimedia Processing Apparatus Thereof, and Method for Presenting Video and Audio Signals - A multimedia presenting system includes a display apparatus, a sound apparatus, and a multimedia processing unit. The multimedia processing unit includes a processor, a factor generator, and a mixer. The display apparatus displays video signals. The sound apparatus broadcasts audio signals. The processor processes a first video signal related to a first audio signal and a second video signal related to a second audio signal, and obtains presenting information of the first and second video signals on the display apparatus. The factor generator generates a factor according to the presenting information. The mixer adjusts the first and second audio signals according to the factor and mixes the adjusted first and second audio signals. | 12-09-2010 |
20110001879 | METHOD AND SYSTEM FOR TOASTED VIDEO DISTRIBUTION - The systems and methods disclosed transmit a composite channel to a receiver. The composite channel may be a static channel that contains different original channels of content in different locations on a displayed page, or may be a dynamic channel that is processed by the receiver to display multiple different video streams on a single display device. | 01-06-2011 |
20110025918 | METHODS AND SYSTEMS FOR CONTROLLING VIDEO COMPOSITING IN AN INTERACTIVE ENTERTAINMENT SYSTEM - An interactive video compositing device includes a chroma-key mixer, video switcher and control circuitry. The chroma-key mixer generates a composite image by combining a real-time image, such as one captured by a video recorder, with a prerecorded video image, such as a movie. The composite image includes the modified real-time image superimposed, or overlaid, onto the prerecorded image. The video switcher automatically selects either the composite image or the prerecorded image to be output to a display. The control circuitry controls the video switcher and other outputted signals based on data file information that corresponds to content of the prerecorded image or media. For example, the data files may contain information relating to the presence (or absence) of a particular character in a movie scene, thus allowing for the output and display, at appropriate times, of the real-time composite image instead of the prerecorded image. | 02-03-2011 |
20110043702 | INPUT CUEING EMMERSION SYSTEM AND METHOD - The present invention provides an input cueing system and method that allows the user to manually draw an image, input text, interface, and gesture on an input surface, which is then brought into a computer such that the visual output from the computer is combined in an overlapping manner with the visual imagery of the user's hands, and then shown on a display. Located above the drawing surface is an image capturing device that captures live video images of the user's hands or other objects placed on the drawing surface. One or more reflectors and/or image repeating devices are disposed between the input surface and the image capturing device to effectively reduce the height and/or focal length so that the visual image is properly aligned and oriented to provide a real ‘live' view of the user's hands and/or actions on the display. In one embodiment, the system is used with a desktop computer and a display. In further embodiments, the system is incorporated into a laptop computer, a slate, a PDA, or a cellular telephone with a built-in display. In various embodiments, a combiner module is used to combine the visual action occurring on and/or about the input surface, captured by an image capturing device, with the visual output from a computer or computing device, so that the resulting combined visual imagery may be simultaneously transmitted and shown on a display, with the user's hands, fingers, and/or tools shown in a semi-transparent and/or opaque manner. | 02-24-2011 |
20110075035 | Method and System for Motion Compensated Temporal Filtering Using Both FIR and IIR Filtering - Certain aspects of a method and system for motion compensated temporal filtering using both finite impulse response (FIR) and infinite impulse response (IIR) filtering may include blending at least one finite impulse response (FIR) filtered output picture of video data and at least one infinite impulse response (IIR) filtered output picture of video data to generate at least one blended non-motion compensated output picture of video data. A motion compensated picture of video data may be generated utilizing at least one previously generated output picture of video data and at least one current input picture of video data. A motion compensated picture of video data may be blended with at least one current input picture of video data to generate a motion compensated output picture of video data. The generated motion compensated output picture of video data and the generated non-motion compensated output picture of video data may be blended to generate at least one current output picture of video data. | 03-31-2011 |
20110102678 | Key Generation Through Spatial Detection of Dynamic Objects - A method, apparatus, and computer program product are described that utilize spatial modeling to represent foreground objects of an event, allowing virtual graphics to be integrated into a background of the event in the presence of dynamic objects. The present invention detects a presence of dynamic objects within a region of interest from a video depicting the event. The present invention produces a suppression key corresponding to the dynamic object when present in the video, or a suppression key with a default value when and where no dynamic object is present in the video. | 05-05-2011 |
20110102679 | Selectively Applying Spotlight and Other Effects Using Video Layering - A video layer effects processing system that receives normal video and special effects information on separate layers is presented. The system selectively mixes various video layers to transmit a composite video signal for a video display such as a television or a virtual reality system. Special effects include spotlights, zooming, etc. Additional special effects such as shaping of objects and ghost effects are created by masking and superimposing selected video layers. The selective mixing, for example to enable or disable, strengthen or weaken, or otherwise arrange special effects, can be directed from a remote source or locally by a user through real-time control or prior setup. The video layer effects processing system can also be incorporated into a set-top box or a local consumer box. | 05-05-2011 |
20110109802 | VIDEO COMBINING SYSTEM AND METHOD - A video combining system includes a number of cameras, a startup module, and a combining module. The startup module starts the number of cameras in turn. The number of cameras captures a scene at different moments to obtain a number of images. The combining module chronologically combines the number of images to obtain a section of video. The section of video has a high frame rate. | 05-12-2011 |
20110115978 | COORDINATED VIDEO FOR TELEVISION DISPLAY - Apparatus and methods for generating coordinated video content for display. The apparatus takes video signals from independent sources and allows a user to select portions of the video signals, corresponding to desired portions of video content to be displayed, and combines those video signal portions into a single composite video signal. The composite video signal may be displayed, for example, on a television screen showing the desired portions of video content. | 05-19-2011 |
20110249186 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING SYSTEM - There is provided an image processing apparatus including: a plurality of signal processing circuits that compose video signals; and a communication path that connects the plurality of signal processing circuits, wherein any one of the plurality of signal processing circuits composes a video signal obtained by composition by that signal processing circuit with a video signal obtained by composition by another signal processing circuit, supplied from the other signal processing circuit via the communication path. | 10-13-2011 |
20110273619 | IMAGE PROCESSING APPARATUS, METHOD, AND STORAGE MEDIUM - An image processing apparatus includes an acquisition unit configured to acquire a video, a superimposition unit configured to superimpose an image onto the video acquired by the acquisition unit, and a detection unit configured to detect the emergence of an object in a detection area set on the video acquired by the acquisition unit, wherein the superimposition unit superimposes, onto the video in the detection area, an image corresponding to the size of the object to be detected by the detection unit when it emerges, and outputs the resultant video to the detection unit. | 11-10-2011 |
20110273620 | REMOVAL OF SHADOWS FROM IMAGES IN A VIDEO SIGNAL - Substantially removing shadows in video images obtained from a camera viewing a scene, at variable pointing angles and magnification, by digital processing of a sequence of frames in the video signal, the processing including: (a) creating and maintaining a model image of the scene by accumulating image data from a plurality of video frames; (b) in the model image, detecting and defining model shadow zones; (c) calculating correction factors for the model shadow zones; (d) for each frame of the video signal, defining, in the image carried by the signal, shadow zones that correspond to respective model shadow zones; and (e) correcting video signal values in shadow zones accordingly. | 11-10-2011 |
20110298983 | Content Processing Apparatus and Content Processing Method - According to one embodiment, a content processing apparatus is provided. The content processing apparatus includes: an output module which outputs a content in a viewable format; a real-time term explanation receiving processor which receives an explanation of a term included in the content being output; and a video and term explanation combining module which combines a video of the content with the term explanation. The term explanation for the video is displayed in real-time on the output module. | 12-08-2011 |
20110310302 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND PROGRAM - An image processing apparatus includes a superimposition processing unit that performs a blending process for a plurality of continuously captured images, wherein the superimposition processing unit is configured to selectively input luminance signal information of a RAW image or a full color image as a processing target image and perform a superimposition process, and performs a process for sequentially updating data that is stored in a memory for storing two image frames so as to enable superimposition of any desired number of images. | 12-22-2011 |
20120019724 | METHOD FOR CREATING INTERACTIVE APPLICATIONS FOR TELEVISION - Methods, apparatuses, and systems for creating an overlay application for use within a broadcast communications system are disclosed. A method in accordance with one or more embodiments of the present invention comprises collecting image data from a computer network, generating at least one selectable area within the image data, associating a function with the at least one selectable area, and selectively displaying the image data on a monitor simultaneously with a broadcast data stream, wherein selection of the at least one selectable area executes the associated function. | 01-26-2012 |
20120019725 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD AND PROGRAM - An image processing apparatus includes a plurality of addition means and an image processing means. The addition means performs addition processing that adds pixels of a differential image at a second resolution, representing a difference between an inputted image at a first resolution and an image at the second resolution (higher than the first resolution), as pixels of an inputted image at the second resolution. The image processing means is configured to perform second and subsequent addition processing and to generate an image of the second resolution as a processing result by performing the addition processing a predetermined number of times. The addition processing is performed with inputs of an image at the first resolution and an image at the second resolution obtained by the immediately preceding addition processing, which are different from each other. | 01-26-2012 |
20120044421 | PROJECTOR APPARATUS AND METHOD FOR DYNAMICALLY MASKING OBJECTS - A projector for masking out one or more objects between the projector and a screen. The projector includes an image capture sensor configured to produce a captured screen image. The projector further includes a processing device configured to receive a video-in signal, create a masking image that corresponds to at least a portion of the captured screen image, and produce a video-out signal, wherein the video-out signal comprises a combination of the video-in signal and the masking image. A panel controller in the projector receives the video-out signal and causes at least one or more pixels of at least one panel to be closed based upon the masking image such that any objects positioned between the projector and the screen are masked from any light projected by the projector. | 02-23-2012 |
20120075531 | Apparatus and method for client-side compositing of video streams - The present invention relates to an apparatus and method for client-side compositing of video streams. The method includes receiving, by the video display device, a multiplexed data stream from a remote video server. The multiplexed data stream includes substreams, where the substreams include a descriptor substream, at least one media substream, and a compositing-instruction substream. The method further includes demultiplexing, by the video display device, the multiplexed data stream into the substreams according to the descriptor substream, and displaying, by the video display device, the at least one media substream on a display screen according to the compositing-instruction substream, where the compositing-instruction substream includes instructions on a composition of the at least one media substream. | 03-29-2012 |
20120075532 | METHOD AND APPARATUS FOR CONSTRUCTING COMPOSITE VIDEO IMAGES - A method renders video apparatus testing more efficient by consolidating important information from many video images into one or more composite video images. According to an exemplary embodiment, the method includes steps of receiving location information for portions of first and second video images, and combining the portions of the first and second video images into a third video image for display. | 03-29-2012 |
20120081611 | ENHANCING VIDEO PRESENTATION SYSTEMS - A video system comprises a shared media accessor configured to access shared media. The shared media is configured to be displayed on a first display screen and a second display screen. A video accessor is configured to access images from a first camera. A field of view of the first camera is oriented such that the first camera can capture the images comprising non-verbal communication of a user associated with the shared media. A video compositor is configured to composite the images captured by the first camera and the shared media. The composited images are configured to be displayed on the second display screen. | 04-05-2012 |
20120105727 | OBJECT-BASED AUDIO-VISUAL TERMINAL AND BITSTREAM STRUCTURE - As information to be processed at an object-based video or audio-visual (AV) terminal, an object-oriented bitstream includes objects, composition information, and scene demarcation information. Such bitstream structure allows on-line editing, e.g. cut and paste, insertion/deletion, grouping, and special effects. In the interest of ease of editing, AV objects and their composition information are transmitted or accessed on separate logical channels (LCs). Objects which have a lifetime in the decoder beyond their initial presentation time are cached for reuse until a selected expiration time. | 05-03-2012 |
20120127369 | VIDEO PROCESSOR DEVICE AND VIDEO PROCESSING METHOD - A video processor and method for swiftly detecting swaps occurring between links during transmission of images by the dual-link system. A first image combiner unit combines an image D | 05-24-2012 |
20120262630 | Minimal Decoding Method for Spatially Multiplexing Digital Video Pictures - A combined video image is created from a plurality of video images. Each video image has a plurality of video image components, and each video image component has an image component header. The image header is removed from each video image to be included in the combined video image, and a new image header is generated for the combined video image. The image component header of each video image component to be included in the combined video image is altered to set an image position for the video image component within the combined video image. The combined video image is generated by concatenating the new image header with the plurality of video images having no image headers and the video image components having the altered image component headers. | 10-18-2012 |
20120268655 | Graphics Display System with Anti-Flutter Filtering and Vertical Scaling Feature - A graphics integrated circuit chip is used in a set-top box for controlling a television display. The graphics chip processes analog video input, digital video input, and graphics input. The chip includes a single polyphase filter that preferably provides both anti-flutter filtering and scaling of graphics. Anti-flutter filtering may help reduce display flicker due to the interlaced nature of television displays. The scaling of graphics may be used to convert the normally square pixel aspect ratio of graphics to the normally rectangular pixel aspect ratio of video. | 10-25-2012 |
20130002957 | APPARATUS AND METHOD FOR LOCAL VIDEO DETECTOR FOR MIXED CADENCE SEQUENCE - Aspects of the invention are directed towards an apparatus and method for detecting local video pixels in mixed cadence video. The local video detector comprises a comb detector that is adaptive to the contour of moving objects and local contrast, a motion detector that is robust to false motion due to vertical details, and a fader value estimator that provides a video confidence value to a fader that combines film mode and video mode processing results. The coupling of the local video detector to a film mode detector increases the robustness, accuracy, and efficiency of local film/video mode processing as compared to the prior art. | 01-03-2013 |
20130002958 | Splitter/Combiner for CATV Networks - A splitter circuit means for use with a CATV network comprising: a first signal input for receiving a CATV signal; a first splitter for splitting the CATV signal into a first split signal and a second split signal; a second signal input for receiving a MoCA signal; a second splitter for splitting the MoCA signal into a third split signal and a fourth split signal; a first diplex filter arranged to lowpass filter the first split signal and highpass filter the third split signal and to combine said filtered signals into a first combined signal to be supplied in a first output; and a second diplex filter arranged to lowpass filter the second split signal and highpass filter the fourth split signal and to combine said filtered signals into a second combined signal to be supplied in a second output. | 01-03-2013 |
20130093955 | METHOD AND APPARATUS FOR DISPLAYING VIDEO IMAGE - A method for displaying a video image includes acquiring foreground information about a video image to be output, where the foreground information includes information that defines a size of a foreground picture. The method further includes determining an adjustment coefficient for the foreground picture according to the size of the foreground picture, a size and a resolution of a display device, and a preset adjustment rule. The preset adjustment rule indicates that the product of the adjustment coefficient for the foreground picture and a zooming multiple for display on the display device is equal to a fixed constant. The method also includes adjusting the video image to be output according to the adjustment coefficient for the foreground picture, and outputting, to the display device for display, the video image after adjustment. | 04-18-2013 |
20130182183 | Hardware-Based, Client-Side, Video Compositing System - A system for video compositing is comprised of a storage device for storing a composite timeline file. A timeline manager reads rendering instructions and compositing instructions from the stored file. A plurality of filter graphs, each receiving one of a plurality of video streams, renders frames therefrom in response to the rendering instructions. A uniform resource locator (URL) incorporator generates URL based content. Hardware is responsive to the rendered frames, URL based content, and compositing instructions for creating a composite image. A frame scheduler is responsive to the plurality of filter graphs for controlling a frequency at which the hardware creates a new composite image. An output is provided for displaying the composite image. Methods of generating a composite work and methods of generating the timeline file are also disclosed. Because of the rules governing abstracts, this Abstract should not be used to construe the claims. | 07-18-2013 |
20130188094 | Combining multiple video streams - Methods, computer-readable media, and systems are provided for combining multiple video streams. One method for combining the multiple video streams includes extracting a sequence of media frames ( | 07-25-2013 |
20130229581 | JUXTAPOSING STILL AND DYNAMIC IMAGERY FOR CLIPLET CREATION - Various technologies described herein pertain to juxtaposing still and dynamic imagery to create a cliplet. A first subset of a spatiotemporal volume of pixels in an input video can be set as a static input segment, and the static input segment can be mapped to a background of the cliplet. Further, a second subset of the spatiotemporal volume of pixels in the input video can be set as a dynamic input segment based on a selection of a spatial region, a start time, and an end time within the input video. Moreover, the dynamic input segment can be refined spatially and/or temporally and mapped to an output segment of the cliplet within at least a portion of output frames of the cliplet based on a predefined temporal mapping function, and the output segment can be composited over the background for the output frames of the cliplet. | 09-05-2013 |
20130314602 | Object-Based Audio-Visual Terminal And Bitstream Structure - As information to be processed at an object-based video or audio-visual (AV) terminal, an object-oriented bitstream includes objects, composition information, and scene demarcation information. Such bitstream structure allows on-line editing, e.g. cut and paste, insertion/deletion, grouping, and special effects. In the interest of ease of editing, AV objects and their composition information are transmitted or accessed on separate logical channels (LCs). Objects which have a lifetime in the decoder beyond their initial presentation time are cached for reuse until a selected expiration time. The system includes a de-multiplexer, a controller which controls the operation of the AV terminal, input buffers, AV objects decoders, buffers for decoded data, a composer, a display, and an object cache. | 11-28-2013 |
20130321704 | OPTIMIZED ALGORITHM FOR CONSTRUCTION OF COMPOSITE VIDEO FROM A SET OF DISCRETE VIDEO SOURCES - A method includes reading a composite video descriptor data structure and a plurality of window descriptor data structures. The composite video descriptor data structure defines a width and height of a composite video frame and each window descriptor data structure defines the starting X and Y coordinate, width and height of each constituent video window to be rendered in the composite video frame. The method further includes determining top and bottom Y coordinates for each constituent video window, as well as determining left and right X coordinates for each constituent video window. The method also includes dividing each constituent video window using the top and bottom Y coordinates to obtain Y-divided sub-windows, dividing each Y-divided sub-window using left and right X coordinates to obtain X and Y divided sub-windows, and storing X, Y coordinates of opposing corners of each X and Y divided sub-window in the storage device. | 12-05-2013 |
20140071348 | METHOD FOR CONVERTING INPUT IMAGE DATA INTO OUTPUT IMAGE DATA, IMAGE CONVERSION UNIT FOR CONVERTING INPUT IMAGE DATA INTO OUTPUT IMAGE DATA, IMAGE PROCESSING APPARATUS, DISPLAY DEVICE - In a method, unit and display device, the input image signal is split into a regional contrast signal and a detail signal, followed by stretching separately the dynamic ranges for at least one of the signals. The dynamic range for the regional contrast signal is stretched with a higher stretch ratio than the dynamic range for the detail signal. The stretch ratio for the detail signal may be near 1 or 1. Further, highlights are identified, and for the highlights the dynamic range is stretched to an even higher degree than for the regional contrast signal. | 03-13-2014 |
20140078401 | DISTRIBUTION AND USE OF VIDEO STATISTICS FOR CLOUD-BASED VIDEO ENCODING - A method for processing a video stream includes receiving first and second copies of the video stream by first and second video processing devices, respectively, and generating first and second statistical data for the video stream by the first and the second video processing devices, respectively. The method further includes transmitting in first and second transmissions the first and the second copies of the video stream with the first and the second statistical data respectively from the first and the second video processing device to a third video processing device, and reading the first and the second statistical data from the first and the second transmissions by the third video processing device. The method further includes combining the first and the second statistical data with one copy of the video stream by the third video processing device, and transmitting the one copy of the video stream with the first and the second statistical data. | 03-20-2014 |
20140085542 | Method for embedding and displaying objects and information into selectable region of digital and electronic and broadcast media - The invention is a method of embedding objects and information in a transparent layer or medium situated seamlessly on top of media such as a movie or a still image. The media contains a targeted visual element viewed or edited by users. The selectable region for embedding objects and information in the transparent layer or medium is defined by the location of the visual element in the movie or still image and, in the case of video media content, by the movie's elapsed time. The embedded objects and information proper to the targeted visual element can be recalled and re-displayed on electronic and digital devices upon user actions such as, but not limited to, a click, tap, or mouse-over on the transparent layer in a specific area that overlaps or surrounds the targeted visual element contained in the media content. | 03-27-2014 |
20140085543 | SYSTEM AND METHOD FOR COMPILING AND PLAYING A MULTI-CHANNEL VIDEO - A system and method for compiling video segments including defining an event; providing a multi-user video aggregation interface; receiving a plurality of video segments through the aggregation interface; determining event-synchronized alignment of the plurality of videos; and assembling a multi-channel video of the event, the multi-channel video file being configured with at least two video segments that have at least partially overlapping event-synchronized alignment. | 03-27-2014 |
20140253804 | IMAGE PROCESSING DEVICE, IMAGE RECOGNITION DEVICE, IMAGE RECOGNITION METHOD, AND PROGRAM - There is provided an image processing device including an image insertion unit that inserts into video content an image for recognition identified by image recognition. The image insertion unit inserts the image for recognition so that a display duration of the image for recognition is less than a value near a threshold of visual perception. | 09-11-2014 |
20140362297 | METHOD AND APPARATUS FOR DYNAMIC PRESENTATION OF COMPOSITE MEDIA - The system provides a method and apparatus for constructing, and for dynamically rearranging the order of content in a composite video. The re-ordering of clips in the composite video can be based on one or more weighting factors associated with each clip. These factors can include freshness or newness of the clip, popularity based on the number of “likes” of a clip by others, the content of the clip (e.g. celebrity creator or presence), paid boosting (e.g. for commercial concerns); and other factors. Each clip has associated metadata that can be used to assign a weight value to the clip for purposes of reordering the composite video. | 12-11-2014 |
20140368739 | OBJECT-BASED AUDIO-VISUAL TERMINAL AND BITSTREAM STRUCTURE - As information to be processed at an object-based video or audio-visual (AV) terminal, an object-oriented bitstream includes objects, composition information, and scene demarcation information. Such bitstream structure allows on-line editing, e.g. cut and paste, insertion/deletion, grouping, and special effects. In the interest of ease of editing, AV objects and their composition information are transmitted or accessed on separate logical channels (LCs). Objects which have a lifetime in the decoder beyond their initial presentation time are cached for reuse until a selected expiration time. | 12-18-2014 |
20150092109 | Video Stitching System and Method - A method and computing system for receiving a first video file containing a first plurality of video frames. A second video file containing a second plurality of video frames is received. The video files are processed to identify at least one non-graphical temporal alignment object included in each of the video files. The video files are temporally aligned using the at least one non-graphical temporal alignment object to produce temporally-aligned video files. | 04-02-2015 |
20180027175 | DISPLAY DEVICE, METHOD OF CONTROLLING THEREOF AND DISPLAY SYSTEM | 01-25-2018 |