26th week of 2016 patent application highlights part 85 |
Patent application number | Title | Published |
20160191859 | OCCUPANT MONITORING SYSTEMS AND METHODS - Various implementations include an occupant monitoring system (OMS) for monitoring at least one occupant in a vehicle. The OMS includes a mounting bracket coupled to a rotatable portion of a steering wheel assembly, a housing coupled to the mounting bracket, at least one imaging unit disposed within the housing, and at least one lens disposed adjacent the housing and imaging unit and facing an expected position of at least one vehicle occupant and configured for allowing light to pass through to the imaging unit. The mounting bracket extends upwardly from the steering wheel assembly adjacent a central portion thereof. The housing has a back surface that is disposed adjacent a back cover of the steering wheel assembly. The imaging unit has a field of view inside of the vehicle and is configured to capture an image signal corresponding to an imaging area within the field of view. | 2016-06-30 |
20160191860 | APPARATUS AND METHOD FOR DISPLAYING SURVEILLANCE AREA OF CAMERA - The apparatus for displaying a surveillance area includes: a data receiving unit receiving position information of a camera and data on a photographed image of the camera; a panorama image generating unit generating a panorama image for surrounding of a position of the camera using a surrounding image of the position of the camera; a matching information calculating unit calculating first matching information between the panorama image and the photographed image of the camera and second matching information between a map associated with the panorama image and a geographic information system map; a surveillance area estimating unit estimating the surveillance area of the camera on the basis of the first matching information and the second matching information; and a surveillance area displaying unit displaying the estimated surveillance area on the geographic information system map. | 2016-06-30 |
20160191861 | REMOTE VEHICLE CONTROL AND OPERATION - A plurality of data streams are received from a vehicle, at least one of the data streams including multimedia data. A prioritization of the data streams is performed according to one or more factors. At least one of adjusting at least one of the data streams and preventing transmission of at least one of the data streams according to the prioritization is performed. | 2016-06-30 |
20160191862 | WEARABLE CAMERA - A wearable camera includes a capture unit, a storage that stores data of a video image captured by the capture unit, an attribute information assigning switch that inputs an operation of assigning attribute information related to the video image data, and an attribute selecting switch that inputs an operation of selecting the attribute information. When there is an input from the attribute information assigning switch, the attribute information associated with a setting state of the attribute selecting switch is assigned to the video image data and stored in the storage. | 2016-06-30 |
20160191863 | REAR VEHICLE CAMERA - A rearview camera assembly includes a lens unit including a lens barrel having a sensor lens therein and a light sensor unit. The light sensor unit has a printed circuit board having a light sensor chip mounted thereon, a rear housing supporting the printed circuit board on a first side thereof, and a front housing coupled with the rear housing and supporting the printed circuit board on a second side thereof. The front housing defines a lens holder, and the lens barrel is received within the lens holder of the front housing so as to align with the light sensor. | 2016-06-30 |
20160191864 | WIRELESS ENTRANCE COMMUNICATION DEVICE - A device for communicating including a housing including a camera, a microphone, a speaker, a button, a battery, a sensor, non-volatile memory, a processor, and a wireless communications module, wherein the non-volatile memory stores code operable by the processor for switching the processor from low-power mode to active mode in response to an activation trigger, receiving, from the one of the microphone and the camera, outbound audio and video signals, then sending a signal to a server via the wireless communications module during active mode, the signal including one or more of an alert signal, a signal based on the outbound audio signal, and a signal based on the outbound video signal, receiving from the server an inbound audio signal and outputting a signal based on the inbound audio signal via the speaker, and switching the processor from active mode to low-power mode in response to a deactivation trigger. | 2016-06-30 |
20160191865 | SYSTEM AND METHOD FOR ESTIMATING AN EXPECTED WAITING TIME FOR A PERSON ENTERING A QUEUE - A system and method for estimating an expected waiting time for a person entering a queue may receive image data captured from at least one image capture device during a period of time prior to the person entering the queue; calculate, based on the image data, one or more prior waiting time estimations, a queue handling time estimation, and a queue occupancy; assign a module weight to each of the one or more prior waiting time estimations and to the queue handling time estimation; generate, based on at least the calculations of the one or more prior waiting time estimations, the queue handling time estimation, and the respective module weights, a recent average handling time for the prior period of time; and determine the expected waiting time based on the recent average handling time and the queue occupancy. | 2016-06-30 |
20160191866 | METHOD FOR DRIVING IMAGE PICKUP APPARATUS, AND SIGNAL PROCESSING METHOD - A method for driving an image pickup apparatus is a method that generates image data using signals output by R, G, and B pixels contained in n frames, where n is an integer greater than or equal to two, and a signal or signals output by a W pixel contained in m frame or frames, where m is smaller than n, or that generates image data using signals output by R and B pixels contained in n frames and a signal or signals output by a G pixel contained in m frame or frames. | 2016-06-30 |
20160191867 | STRUCTURED LIGHT PROJECTOR - There is provided according to aspects of the present disclosure a structured light projector, a method of projecting a structured light pattern, and a depth sensing device. The projector includes an emitters array and a mask. The emitters array includes a plurality of individual light emitters. The light from each of the individual emitters diverges. The emitters array has a spatial intensity profile that is associated with the light divergence output of the array's individual emitters. The mask is designed to provide a structured light pattern when illuminated, and is positioned at a distance relative to the emitters array where rays from adjacent emitters overlap, and where such overlaps provide uniform light intensity distribution across the mask plane. | 2016-06-30 |
20160191868 | PROJECTION DEVICE - A projection device includes a detection unit configured to detect a specific object, a projection unit configured to project a projection image indicated by a video signal, a drive unit configured to change the direction of the projection unit so as to change a projection position of the projection image, and a controller configured to control the drive unit. The controller controls the drive unit such that the drive unit performs a first control in which the projection image is projected at a position following the motion of the specific object detected by the detection unit and a second control in which the projection position of the projection image is changed according to the video signal. | 2016-06-30 |
20160191869 | SHARED-PATH ILLUMINATION AND EXCITATION OPTICS APPARATUS AND SYSTEMS - A Gaussian-distributed excitation light beam of an excitation spectrum emitted from an excitation light source enters a light pipe and is there converted to a top-hat spatially distributed excitation beam. The top-hat distributed excitation beam is focused on a phosphor-coated or reflective portion of a surface of an optical wavelength conversion element. Fluoresced and reflected beams travel outward from the wavelength conversion element and re-enter the light pipe to be homogenized during transit through the light pipe. A homogenized fluoresced or reflected beam is relayed to an output as one of a sequence of colors of homogenized light. The functions of Gaussian to top-hat conversion of the excitation beams directed toward the optical conversion element and homogenization of beams directed outward from the optical conversion element are both efficiently performed using a single, shared light pipe. | 2016-06-30 |
20160191870 | Scanning Beam Display System - A scanning beam display system includes an optical module, an image control module, and a display screen on which optical beams are scanned. The optical module includes a vertical adjuster placed in the optical paths of the beams to control and adjust positions of the optical beams along a generally vertical direction on the display screen, and a control unit configured to receive control instructions for the vertical adjuster and to control the vertical adjuster to be at one of a predetermined number of orientations to place the scanning optical beams at a corresponding distinct position on the display screen. The control unit is further configured to apply an adjustment offset to each orientation of the vertical adjuster such that each immediately vertically adjacent pair of beam footprints projected on the display screen resulting from the plurality of positions have a vertical overlap that is larger than a first threshold. | 2016-06-30 |
20160191871 | INFORMATION PRESENTATION DEVICE - An object of the present invention is to provide a device capable of presenting image information that is larger than an object by irradiating light onto an object that moves on an unknown trajectory. An object tracking section controls a line-of-sight direction so as to be directed towards the moving object. A rendering section irradiates a light beam in a direction along the line-of-sight direction. In this way the rendering section can irradiate the light beam onto the surface of the object. Utilizing an afterimage of the light beam irradiated on the surface of the object, it is possible to present to an observer information that has been rendered over a range larger than the surface area of the object. | 2016-06-30 |
20160191872 | PROJECTION SYSTEM AND PROJECTION METHOD THEREOF - A projection system and a projection method thereof are provided. The projection system includes a projection apparatus and a portable electronic apparatus. The projection apparatus includes a first communication unit, a projection unit and a first control unit. The first communication unit establishes a communication connection with the portable electronic apparatus to receive an identification signal from the portable electronic apparatus. The identification signal includes predetermined projection setup information. The first control unit controls the projection unit to project an image according to the predetermined projection setup information corresponding to the identification signal. | 2016-06-30 |
20160191873 | PROJECTION DEVICE, AND PROJECTION METHOD - A projection device of the present disclosure includes a projection unit for radiating on a projection target object in a time-divided manner, in one frame period of a video, a visible light image that is based on a video signal and an invisible light image for acquiring predetermined information, and a controller for controlling an order of a radiation period for the visible light image and a radiation period for the invisible light image in the one frame period. The controller includes, as a radiation mode, a first mode for radiating the invisible light image in a manner of distributing the invisible light image over a plurality of periods within the one frame period. | 2016-06-30 |
20160191874 | PROJECTION DEVICE - A projection device includes: a plate shaped member that is thermally conductive; a light emitting element that is disposed upon the plate shaped member and emits light; a bending member that bends light from the light emitting element into an orientation parallel to the plate shaped member; a modulation element that modulates light bent by the bending member; and a polarized light separation element that bends light modulated by the modulation element into an orientation going away from the plate shaped member. | 2016-06-30 |
20160191875 | IMAGE PROJECTION APPARATUS, AND SYSTEM EMPLOYING INTERACTIVE INPUT-OUTPUT CAPABILITY - An image projection apparatus includes an image receiver; an image processor to convert an image signal to a projection signal; a projection unit to project a projection image on a projection screen; a coordinate calculator to calculate coordinates of a point on the projection screen when the point is identified by a coordinate input device; an interactive communication unit to communicate with the coordinate input device, and an external apparatus; a user direction detector to detect a user direction where a user that performs an operation of inputting the coordinates of the point exists based on the coordinates of the point input by the coordinate input device and coordinates defining an area of the projection screen; and a screen rotation determination unit to determine whether the user is performing the inputting operation and the projection screen is to be rotated based on the detected user direction. | 2016-06-30 |
20160191876 | Information Processing Method and Electronic Device - An information processing method applied to an electronic device having a projection assembly and a position-orientation sensor configured to detect position-orientation information of the electronic device is described. The method includes determining whether the electronic device is in a projection mode to obtain a first determination result; acquiring position-orientation information of the electronic device as detected by the position-orientation sensor when the first determination result indicates that the electronic device is in a projection mode; generating an adjustment line based on the position-orientation information; acquiring a preset reference line; and displaying the adjustment line and the reference line in a projected screen, to assist a user in adjusting the adjustment line to coincide with the reference line. | 2016-06-30 |
20160191877 | PROJECTOR DEVICE AND PROJECTION METHOD - The present disclosure provides a projector device that can reliably project video images in a region having an intended positional relationship with a specific object. The projector device includes: a distance sensor which measures a distance to a facing object; a detector which detects a specific target object and a projection plane that is in contact with the target object, based on distance information output from the distance sensor; a projection region determination unit which specifies the contact portion at which the target object and the projection plane are in contact, and determines, based on the specified contact portion, a region of the projection plane on which a video image associated with the target object can be projected; and a projector which projects the video image onto the region. | 2016-06-30 |
20160191878 | IMAGE PROJECTION DEVICE - An image projection device according to the present disclosure includes a projection optical unit, a projector for projecting an image that is based on a video signal on a projection target object through the projection optical unit, a detector for detecting a predetermined object from the image indicated by the video signal, and a controller for specifying, based on a position of the detected object in the image, a projection position of the object, and controlling the projection optical unit based on the projection position. | 2016-06-30 |
20160191879 | SYSTEM AND METHOD FOR INTERACTIVE PROJECTION - An interactive projection system and method comprising a camera/projector unit having a computer processor connected via a network to a content server and content database. The system projects interactive trigger areas on a three-dimensional object. Specific content stored on the content server or locally in the memory of the computer processor is associated with each trigger area. A user interacts with the trigger areas and the system projects informational or entertainment content about the object on the surface of the object. | 2016-06-30 |
20160191880 | Systems and Methods for Light Field Modeling Techniques for Multi-Modulation Displays - Dual and multi-modulator projector display systems and techniques are disclosed. In one embodiment, a projector display system comprises a light source; a controller; a first modulator receiving light from the light source and rendering a halftone image of the input image; a blurring optical system that blurs the halftone image with a Point Spread Function (PSF); and a second modulator receiving the blurred halftone image and rendering a pulse-width-modulated image which may be projected to form the desired screen image. Systems and techniques for forming a binary halftone image from an input image, correcting for misalignment between the first and second modulators, and calibrating the projector system (e.g. over time) for continuous image improvement are also disclosed. | 2016-06-30 |
20160191881 | IMAGE CAPTURING METHOD - A first line passing through a coordinate point which represents the light-source color of ambient light is drawn so as to cross a line segment connecting two coordinate points which respectively represent colors of light emitted from the two light-emitting sources. An image of a subject is captured while light is emitted from the two light-emitting sources toward the subject in such a manner that the volumes of light emitted from the two light-emitting sources respectively correspond to those represented by a predetermined point which is, or close to, the intersection point of the first line and the line segment, on the basis of the intersection point and light volume information. A white balance adjustment correction value is obtained on the basis of light volume information with respect to the two light-emitting sources, the predetermined point, and the point representing the light-source color. | 2016-06-30 |
20160191882 | DIAGNOSIS SUPPORT APPARATUS FOR LESION, IMAGE PROCESSING METHOD IN THE SAME APPARATUS, AND MEDIUM STORING PROGRAM ASSOCIATED WITH THE SAME METHOD - First extracting means of a processing unit, based on a brightness component and a color information component of a captured image separated by separating means, extracts a candidate region using first morphology processing based on the brightness component, and second extracting means of the processing unit extracts a likelihood of a region from a color space composed of the brightness component and the color information component and performs second morphology processing to generate a region-extracted image, which is displayed on the display device. In this case, the morphology processing, including smoothing filter processing, is performed on the extracted candidate region and the extracted likelihood of the region. | 2016-06-30 |
20160191883 | REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING MEDIUM - The present technology relates to a reproduction device, a reproduction method, and a recording medium capable of displaying graphics with a broader dynamic range of luminance and appropriate brightness. | 2016-06-30 |
20160191884 | IMAGE PLAYBACK METHOD AND IMAGE SURVEILLANCE SYSTEM THEREOF - An image playback method is applied to an image surveillance system. The image surveillance system includes a plurality of image capturing devices and a display monitor. The image playback method includes receiving image data captured by each image capturing device, setting a first score of each image data according to a content of each image data, setting a second score of each image data according to a geographical location where each image capturing device captures the image data, and performing selective playback of each image data on the display monitor according to the first score and the second score of each image data. | 2016-06-30 |
20160191885 | SOURCE DATA ADAPTATION AND RENDERING - The invention relates to a method for source data adaptation and rendering. The method comprises receiving source data; processing the source data to determine rendering parameters; wherein the processing comprises obtaining processing-free temporal segments either by applying wide angles at periodic intervals to the source data or rendering a predetermined region of the source data; determining content characteristics of a visual frame; and utilizing content characteristics for controlling the obtained processing-free temporal segments; signaling the rendering parameters for playback; and adapting the rendering parameters to render the processing-free temporal segments from the source content. | 2016-06-30 |
20160191886 | DATA REPRODUCING APPARATUS - A data reproducing apparatus that reproduces moving image data that is captured while a vehicle is traveling includes a controller configured to: derive a stoppable distance that the vehicle requires to come to a stop based on a speed of the vehicle obtained while the vehicle was traveling during an image-capturing time period in which the moving image data was captured; derive an inter-vehicle distance between the vehicle and a preceding vehicle located ahead of the vehicle during the image-capturing time period in which the moving image data was captured; and superimpose, on an object image generated from the moving image data, a first mark which shows a position corresponding to the stoppable distance and a second mark which shows a position corresponding to the inter-vehicle distance. | 2016-06-30 |
20160191887 | IMAGE-GUIDED SURGERY WITH SURFACE RECONSTRUCTION AND AUGMENTED REALITY VISUALIZATION - Embodiments disclose a real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon, which employs real-time three-dimensional surface reconstruction for preoperative and intraoperative image registration. Stereoscopic cameras provide real-time images of the scene including the patient. A stereoscopic video display is used by the surgeon, who sees a graphical representation of the preoperative or intraoperative images blended with the video images in a stereoscopic manner through a see-through display. | 2016-06-30 |
20160191888 | DEVICE, METHOD AND COMPUTER PROGRAM FOR 3D RENDERING - The present disclosure improves | 2016-06-30 |
20160191889 | STEREO VISION SOC AND PROCESSING METHOD THEREOF - A stereo vision SoC and a processing method thereof are provided. The stereo vision SoC extracts first support points from an image, adds second support points, performs triangulation based on the first support points and the second support points, and extracts disparity using a result of the triangulation. Accordingly, depth image quality is improved and the hardware is easily implemented in the stereo vision SoC. | 2016-06-30 |
20160191890 | IMAGE PROCESSING APPARATUS - In an image processing apparatus, an image acquiring unit acquires a first image and a second image that form stereoscopic images. A first sub-image extracting unit extracts first sub-images from the first image. A second sub-image extracting unit extracts second sub-images from the second image. A matching unit matches each pair of the first and second sub-images to determine a degree of similarity therebetween. A similar sub-image setting unit sets the second sub-image having a highest degree of similarity to the first sub-image to be a similar sub-image to the first sub-image. A brightness comparing unit compares in brightness each pair of the first and second sub-images. The matching unit is configured to, if a result of comparison made by the brightness comparing unit between a pair of the first and second sub-images is out of a predetermined brightness range, exclude such a pair of the first and second sub-images. | 2016-06-30 |
20160191891 | VIDEO CAPTURING AND FORMATTING SYSTEM - A video capture and formatting system including a plurality of lenses, a digital sensor, and an editing unit. The lenses capture light from a defined viewable area and are curvilinear in nature. The sensor receives the light from the lenses and senses the color, exposure, and travel distance of each photon. The sensor generates image data representing the visual characteristics of the image and the photon travel distance for each pixel. The editing unit processes image data from the sensor and blends the images from each lens together by matching the photon distance around the sides of each lens image data. The editing unit forms a master format having a two dimensional equi-rectangular image with depth perception provided from the photon travel distance of each pixel. | 2016-06-30 |
20160191892 | METHOD FOR TRANSMITTING AND RECEIVING STEREO INFORMATION ABOUT A VIEWED SPACE - The invention relates to stereoscopic television. The technical result is an increase in the accuracy with which the transmission of stereoscopic video images is controlled, as a result of automatic real-time measurement of the physical space being photographed. In the present method, stereo photography is carried out by a symmetrically centered multi-angle stereo system with synchronized video cameras; video signals in adjacent lines are recorded and compared; angle signals adjacent to the central signal are detected in said lines; the temporal parallaxes of said signals are measured in a single temporal framework; the parallax signals are synchronized with the video signal of the central video camera; a signal stream is transmitted to the receiving end and recorded; the video signals of the stereoscopic angle shots are reproduced by shifting elements of the signals of the central camera to adjacent temporal parallaxes; and the image is represented. | 2016-06-30 |
20160191893 | IMMERSIVE VIRTUAL REALITY PRODUCTION AND PLAYBACK FOR STORYTELLING CONTENT - Methods for digital content production and playback of an immersive stereographic video work provide or enhance interactivity of immersive entertainment using various different playback and production techniques. “Immersive stereographic” may refer to virtual reality, augmented reality, or both. The methods may be implemented using specialized equipment for immersive stereographic playback or production. Aspects of the methods may be encoded as instructions in a computer memory, executable by one or more processors of the equipment to perform the aspects. | 2016-06-30 |
20160191894 | IMAGE PROCESSING APPARATUS THAT GENERATES STEREOSCOPIC PRINT DATA, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM - An image processing apparatus that is capable of generating stereoscopic print data which gives a printing result expressing a feeling of lively movement of a moving body. The image processing apparatus acquires a first image and a second image, which are photographed at different times, and each include an object image, and generates stereoscopic print data for use in stereoscopic print processing of the acquired second image. Distance information associated with the object image included in each of the first image and the second image is acquired. A movement vector of the object image included in the second image is calculated based on the first image and the second image. The stereoscopic print data is generated based on the second image, the distance information associated with the object image included in the second image, and the movement vector. | 2016-06-30 |
20160191895 | SUPER MULTI-VIEW IMAGE SYSTEM AND DRIVING METHOD THEREOF - There are provided a super multi-view image system and a driving method thereof, which can distribute and transmit a super multi-view image. A super multi-view image system includes an image bit stream generating unit for generating bit stream data of a super multi-view image, a storing/transmitting unit for distributing and storing image data generated by dividing the bit stream data in a plurality of storage servers, and a receiving/displaying unit for implementing an image by using image data transmitted from the storing/transmitting unit. In the super multi-view image system, the storing/transmitting unit simultaneously transmits, to the receiving/displaying unit, the image data distributed and stored in the plurality of storage servers. | 2016-06-30 |
20160191896 | EXPOSURE COMPUTATION VIA DEPTH-BASED COMPUTATIONAL PHOTOGRAPHY - A method and electronic information handling system provide recording a first image of a scene at a first exposure level using a three-dimensional (3D) camera, correlating distances from the 3D camera and exposure levels over a plurality of image elements of the first image, selecting an exposure parameter value for at least one of the plurality of image elements having a z-distance value falling within a range of z-distance values, recording a second image of the scene according to the exposure parameter value, and constructing a composite image based on at least a portion of the second image for the at least one of the plurality of image elements. | 2016-06-30 |
20160191897 | INTEGRATED EXTENDED DEPTH OF FIELD (EDOF) AND LIGHT FIELD PHOTOGRAPHY - A method for light field acquisition, comprising: acquiring an image by an optical arrangement with chromatic aberrations; separating the image to a plurality of monochromatic images, each having a different color and a different point of focus according to the acquiring; transporting sharpness from each of the plurality of monochromatic images to others of the plurality of monochromatic images to construct a plurality of color images, each color image of the plurality of color images having a different point of focus; and reconstructing a light field by combining the plurality of color images. | 2016-06-30 |
20160191898 | Image Processing Method and Electronic Device - An image processing method is applied to an electronic device having a binocular camera that includes a first camera and a second camera. The method includes acquiring at least one first image taken by the first camera of the binocular camera and at least one second image taken by the second camera of the binocular camera; acquiring depth images in scenes of the at least one first image and the at least one second image; differentiating, based on the depth images, foregrounds and backgrounds in the scenes of the at least one first image and the at least one second image; and matching and stitching the foregrounds of the at least one first image and the at least one second image, and matching and stitching the backgrounds of the at least one first image and the at least one second image, so as to obtain a stitched third image. | 2016-06-30 |
20160191899 | Imaging Module, Stereo Camera for Vehicle, and Light Shielding Member for Imaging Module - Provided is a highly reliable imaging module that can easily carry out optical axis adjustments and focus adjustments while assuring required light shielding performance, and in which malfunctions do not easily occur in solder junctions and the like for imaging elements even under severe temperature environments. An imaging module has, in addition to a lens holding member ( | 2016-06-30 |
20160191900 | APPARATUS FOR GENERATION OF THREE-DIMENSIONAL DIGITAL DATA REPRESENTATIVE OF HEAD SHAPES - Apparatus is provided to capture three-dimensional images of a subject's head. The apparatus comprises a plurality of stereographic digital cameras that are operable simultaneously and are disposed in a predetermined vertical planar relationship to each other. The plurality of stereographic digital cameras are positioned to capture a group of stereographic digital image pairs of a corresponding vertical hemispherical surface portion of the head of the subject when the subject is positioned in a predetermined location in front of the plurality of stereographic digital cameras. The apparatus further comprises a processing apparatus coupled to the plurality of stereographic digital cameras. The processing apparatus operates on the group of stereographic digital image pairs to generate a three-dimensional digital image file of at least a full vertical hemispheric portion of the head of the subject. | 2016-06-30 |
20160191901 | 3D IMAGE CAPTURE APPARATUS WITH COVER WINDOW FIDUCIALS FOR CALIBRATION - A 3D imaging apparatus with enhanced depth of field to obtain electronic images of an object for use in generating a 3D digital model of the object. The apparatus includes a housing having mirrors positioned to receive an image from an object external to the housing and provide the image to an image sensor. The optical path between the object and the image sensor includes an aperture element having apertures for providing the image along multiple optical channels with a lens positioned within each of the optical channels. The apparatus also includes a transparent cover positioned within the optical path and having a plurality of fiducials. The depth of field of the apparatus includes the cover, allowing the fiducials to be used to calibrate the apparatus or verify and correct the existing calibration of it. | 2016-06-30 |
20160191902 | CAMERA TRACKER TARGET USER INTERFACE FOR PLANE DETECTION AND OBJECT CREATION - One exemplary embodiment involves identifying a plane defined by a plurality of three-dimensional (3D) track points rendered on a two-dimensional (2D) display, wherein the 3D track points are rendered at a plurality of corresponding locations of a video frame. The embodiment also involves displaying a target marker at the plane defined by the 3D track points to allow for visualization of the plane, wherein the target marker is displayed at an angle that corresponds with an angle of the plane. Additionally, the embodiment involves inserting a 3D object at a location in the plane defined by the 3D track points to be embedded into the video frame. The location of the 3D object is based at least in part on the target marker. | 2016-06-30 |
20160191903 | THREE-DIMENSIONAL IMAGE GENERATION METHOD AND APPARATUS - A method and apparatus of generating a three-dimensional (3D) image are provided. The method of generating a 3D image involves acquiring a plurality of images of a 3D object with a camera, calculating pose information of the plurality of images based on pose data for each of the plurality of images measured by an inertial measurement unit, and generating a 3D image corresponding to the 3D object based on the pose information. | 2016-06-30 |
20160191904 | Autostereoscopic 3D Display Device - An autostereoscopic 3D display device according to the present disclosure may be configured to set the width of a viewing zone to a proper fraction of the interocular distance while at the same time overlapping the viewing zones with each other. Through this, an autostereoscopic 3D display device allowing the input and output of stereo image data may be implemented to reduce the number of image sources as well as reduce 3D crosstalk. In addition, the present disclosure may apply view data rendering to extend a 3D viewing zone. | 2016-06-30 |
20160191905 | Adaptive Image Acquisition and Display Using Multi-focal Display - Multiframe reconstruction combines a set of acquired images into a reconstructed image. Here, which images to acquire are selected based at least in part on the content of previously acquired images. In one approach, a set of at least three images of an object are acquired at different acquisition settings. For at least one of the images in the set, the acquisition setting for the image is determined based at least in part on the content of previously acquired images. Multiframe image reconstruction, preferably via a multi-focal display, is applied to the set of acquired images to synthesize a reconstructed image of the object. | 2016-06-30 |
20160191906 | WIDE-ANGLE AUTOSTEREOSCOPIC THREE-DIMENSIONAL (3D) IMAGE DISPLAY METHOD AND DEVICE - A wide-angle autostereoscopic three-dimensional (3D) image display method is provided. The method includes tracking a user in an autostereoscopic 3D image viewing state with respect to a display panel including a plurality of display units, obtaining a first distance between the display panel and a spectroscopic unit array including a plurality of spectroscopic units and a first width associated with each spectroscopic unit, and determining a second distance between a viewing position of the user and the display panel. The method also includes determining a width of a display unit combination corresponding to the second distance. Further, the method includes calculating a gray value of each display unit based on the width of the display unit combination and a reference value of the distance between each display unit and an edge of the display unit combination and displaying the 3D image on the display panel. | 2016-06-30 |
20160191907 | Method, System, and Computer Program Product for Controlling Stereo Glasses - A method, system, and computer program product are provided for controlling stereo glasses. Left and right eye shutters of stereo glasses are controlled to switch between closed and open orientations and simultaneously remain in a fast switching orientation for a predetermined amount of time between the closed and open orientations. The fast switching orientation of each of the left and right eye shutters has at least one open time and at least one closed time. A duration from a first one of the at least one open time and the at least one closed time through a last one of the at least one open time and at least one closed time is equal to the predetermined amount of time. Each of the at least one open time of the fast switching orientation of each of the left and right eye shutters is shorter than or equal to 1/24 seconds. | 2016-06-30 |
20160191908 | THREE-DIMENSIONAL IMAGE DISPLAY SYSTEM AND DISPLAY METHOD - A display method includes the following steps: generating a first depth map and an edge map according to a color image; determining a first depth value of a first pixel and a second depth value of a second pixel of an edge region of the first depth map according to the edge map, in which the first pixel and the second pixel are arranged in a horizontal direction; adjusting N depth values of N pixels adjacent to the edge region of the first depth map according to a zero parallax setting reference level to generate a second depth map, where N is a positive integer, and the N pixels include at least one of the first pixel and the second pixel; and generating multiple-view images according to the second depth map and the color image to display a 3D image. | 2016-06-30 |
20160191909 | AUTOSTEREOSCOPIC THREE-DIMENSIONAL (3D) DISPLAY DEVICE - An autostereoscopic three-dimensional (3D) display device and a method for an autostereoscopic 3D display device is provided. The display device includes a tracking device configured to track a user in an autostereoscopic 3D viewing state, a display panel coupled with a light splitting device for 3D displaying, a display driving circuit for driving the light splitting device, a 3D image display controlling chip configured to store hardware parameters of the autostereoscopic 3D display device and to control the display driving circuit to switch on/off the light splitting device, an application module configured to receive a 3D displaying request from an application program for displaying a 3D image on the display device, a tracking module configured to obtain position information of the user by the tracking device, a scheduling module configured to read hardware parameters of the autostereoscopic 3D display device and to calculate 3D image arrangement data, and a 3D image arrangement module configured to receive the 3D image arrangement data and to arrange an image required for 3D displaying. Further, the scheduling module causes the 3D image display controlling chip to switch on the light splitting device for the display device to enter a 3D display mode and display the arranged 3D image on the display device. | 2016-06-30 |
20160191910 | Gaze-contingent Display Technique - A gaze-contingent display technique for providing a human viewer with an enhanced three-dimensional experience not requiring stereoscopic viewing aids. Methods are shown which allow users to view plenoptic still images or plenoptic videos incorporating gaze-contingent refocusing operations in order to enhance spatial perception. Methods are also shown which allow the use of embedded markers in a plenoptic video feed signifying a change of scene incorporating initial depth plane settings for each such scene. Methods are also introduced which allow a novel mode of transitioning between different depth planes wherein the user's experience is optimized in such a way that these transitions trick the human eye into perceiving enhanced depth. This disclosure also introduces a system of a display device which comprises gaze-contingent refocusing capability in such a way that depth perception by the user is significantly enhanced compared to prior art. This disclosure comprises a nontransitory computer-readable medium on which are stored program instructions that, when executed by a processor, cause the processor to perform operations relating to timing the duration the user's gaze is fixated on each of a plurality of depth planes and making a refocusing operation contingent on a number of parameters. | 2016-06-30 |
20160191911 | SYSTEM AND METHOD FOR CALIBRATING A VISION SYSTEM WITH RESPECT TO A TOUCH PROBE - This invention provides a calibration fixture that enables more accurate calibration of a touch probe on, for example, a CMM, with respect to the camera. The camera is mounted so that its optical axis is approximately or substantially parallel with the z-axis of the probe. The probe and workpiece are in relative motion along a plane defined by orthogonal x and y axes, and optionally the z-axis and/or rotation R about the z-axis. The calibration fixture is arranged to image from beneath the touch surface of the probe and, via a 180-degree prism structure, to transmit light from the probe touch point along the optical axis to the camera. Alternatively, two cameras respectively view the fiducial location relative to the CMM arm and the probe location when aligned on the fiducial. The fixture can define an integrated assembly with an optics block and a camera assembly. | 2016-06-30 |
20160191912 | HOME OCCUPANCY SIMULATION MODE SELECTION AND IMPLEMENTATION - Enabling an end-user to configure or customize a home occupancy simulation mode, which is implemented by a home automation system, via one or more user interfaces, some of which might normally be used to access satellite television-related programming and services. | 2016-06-30 |
20160191913 | PROGRAMMING DISRUPTION DIAGNOSTICS - Various arrangements for monitoring the signal strength of a programming stream are presented. A tuner in a receiver may be allocated to monitor disruptions and changes to the signal strength of programming streams. Based on the characteristics of the signal strength and additional data such as weather information, a cause of the disruption may be determined. The cause of the disruption may be communicated to a user. Characteristics of the disruption may be unique or localized to specific geographic locations. Based at least in part on the geographic location of a receiver, characteristics of disruptions may be used to identify television receivers associated with account stacking. | 2016-06-30 |
20160191914 | APPARATUS AND METHOD FOR CONTROLLING DISPLAY APPARATUS - A display control apparatus includes a data acquirer and a data processor. The data acquirer acquires a first image by capturing an image of a display apparatus that displays an image of a first pattern, and a second image by capturing an image of the display apparatus that displays an image of a second pattern. The data processor detects a display region that is an image area of the display apparatus, from a third image obtained based on the first image and the second image. | 2016-06-30 |
20160191915 | VIDEO ENCODING AND DECODING METHODS AND DEVICE USING SAME - The present invention relates to video encoding/decoding methods and device, wherein the video encoding method according to the invention comprises the following steps: acquiring information of peripheral blocks; setting the information about a current block based on the information of the peripheral blocks; and encoding the current block based on the set information, wherein the current block and the peripheral blocks may be a CU (coding unit). | 2016-06-30 |
20160191916 | HIGH FREQUENCY EMPHASIS IN DECODING OF ENCODED SIGNALS - A decoder adapted to generate an intermediate decoded version of a video frame from an encoded version of the video frame, determine either an amount of high frequency basis functions or coefficients below a quantization threshold for at least one block of the video frame, and generate a final decoded version of the video frame based at least in part on the intermediate decoded version of the video frame and the determined amount(s) for the one or more blocks of the video frame, is disclosed. In various embodiments, the decoder may be incorporated as a part of a video system. | 2016-06-30 |
20160191917 | METHOD AND SYSTEM OF ENTROPY CODING USING LOOK-UP TABLE BASED PROBABILITY UPDATING FOR VIDEO CODING - Techniques related to entropy coding with look-up-table based probability updating for video coding. | 2016-06-30 |
20160191918 | Methods and Systems for Estimating Entropy - It is a challenging task to compute the entropy of packet-header attributes in high-speed networks. Motivated by Ashwin Lall et al., we present a stream-based scheme to estimate the entropy norm based on the Count Sketch algorithm. The system is implemented on a NetFPGA-10G platform. It is capable of processing IP packets and computing the entropy at a 30 Gbps line rate. | 2016-06-30 |
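For reference, the quantity the abstract's sketch-based estimator approximates is the empirical entropy of a packet-header attribute stream. It can be computed exactly from per-value counts as below; the point of the Count Sketch in the patent is to avoid storing such a counter table at 30 Gbps line rates. This exact version is an illustration, not the patented estimator:

```python
import math
from collections import Counter

def empirical_entropy(stream):
    """Exact empirical entropy H = -sum((a_i/m) * log2(a_i/m)), where a_i
    is the count of distinct value i and m is the stream length. A Count
    Sketch-based estimator approximates this without per-value counters."""
    counts = Counter(stream)
    m = len(stream)
    return -sum((a / m) * math.log2(a / m) for a in counts.values())
```

For a stream uniform over four values this yields 2 bits; a constant stream yields 0.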
20160191919 | ARITHMETIC DECODING METHOD AND ARITHMETIC CODING METHOD - An arithmetic decoding method is a method in which a context variable specifying a probability of a possible value of each of elements included in a binary string corresponding to a value of a given variable is initialized and arithmetic decoding is performed, using the context variable. The method includes: determining, from among a plurality of initialization methods as a method of initializing the context variable, an initialization method corresponding to the given variable or a group which includes the given variable; and initializing the context variable using the determined initialization method. | 2016-06-30 |
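The selection step in the arithmetic-decoding abstract above — choosing one of several initialization methods according to the given variable or its group — amounts to a dispatch, sketched here with entirely illustrative names and table contents (the claims do not specify any particular variables, groups, or methods):

```python
def initialize_context(variable, group_of, method_table):
    """Determine the initialization method for a context variable from
    the variable itself, or from the group that includes it, then apply
    the method to produce the initial context state.

    `group_of` maps variable names to group names; `method_table` maps
    a variable or group name to an init function. All names are
    hypothetical, for illustration only."""
    key = variable if variable in method_table else group_of[variable]
    return method_table[key]()
```

A variable with its own entry is initialized directly; otherwise the method registered for its group is used.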
20160191920 | METHOD AND APPARATUS FOR DETERMINING MERGE MODE - Provided are a method and apparatus for determining a merge mode by using motion information of a previous prediction unit. The method of determining a merge mode includes obtaining a merge mode cost of a lower depth based on a merge mode cost of a coding unit of an upper depth obtained by using motion information of a merge mode of the coding unit of the upper depth corresponding to a merge mode of the coding unit of the lower depth. | 2016-06-30 |
20160191921 | Method and Apparatus for Simplified Motion Vector Predictor Derivation - A method and apparatus for deriving a motion vector predictor (MVP) candidate set for motion vector coding of a current block. Embodiments according to the present invention determine a redundancy-removed spatial-temporal MVP candidate set. The redundancy-removed spatial-temporal MVP candidate set is derived from a spatial-temporal MVP candidate set by removing any redundant MVP candidate. The spatial-temporal MVP candidate set includes a top spatial MVP candidate, a left spatial MVP candidate and one temporal MVP candidate. The method further checks whether candidate number of the redundancy-removed spatial-temporal MVP candidate set is smaller than a threshold, and adds a zero motion vector to the redundancy-removed spatial-temporal MVP candidate set if the candidate number is smaller than the threshold. Finally, the method provides the redundancy-removed spatial-temporal MVP candidate set for encoding or decoding of the motion vector of a current block. | 2016-06-30 |
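The candidate-set construction described in the abstract above (top, left, and temporal candidates; redundancy removal; zero-MV padding up to a threshold) can be sketched as follows. The threshold value and the tuple representation of motion vectors are illustrative assumptions, not taken from the claims:

```python
def build_mvp_candidate_set(top_mv, left_mv, temporal_mv, threshold=2):
    """Derive a redundancy-removed spatial-temporal MVP candidate set.

    Candidates are (x, y) motion vectors, or None when unavailable.
    Duplicates are dropped in order; if the resulting candidate number
    is smaller than `threshold`, a zero motion vector is appended
    (threshold=2 is an illustrative default)."""
    candidates = []
    for mv in (top_mv, left_mv, temporal_mv):
        if mv is not None and mv not in candidates:
            candidates.append(mv)
    if len(candidates) < threshold:
        candidates.append((0, 0))
    return candidates
```

The returned list is what the abstract's final step would hand to the motion-vector encoder or decoder.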
20160191922 | MIXED-LEVEL MULTI-CORE PARALLEL VIDEO DECODING SYSTEM - A method, apparatus and computer readable medium storing a corresponding computer program for decoding a video bitstream based on multiple decoder cores are disclosed. In one embodiment of the present invention, the method arranges multiple decoder cores to decode one or more frames from a video bitstream using mixed-level parallel decoding. The multiple decoder cores are arranged into groups for parallel decoding, one group of decoder cores being used for said one or more frames, wherein each group comprises one or more decoder cores. The number of frames, or which frames, to be decoded in the mixed-level parallel decoding is adaptively determined. | 2016-06-30 |
20160191923 | METHODS AND SYSTEMS FOR MASKING MULTIMEDIA DATA - Several methods and systems for masking multimedia data are disclosed. In an embodiment, a method for masking includes performing a prediction for at least one multimedia data block based on a prediction mode of a plurality of prediction modes. The at least one multimedia data block is associated with a region of interest (ROI). A residual multimedia data associated with the at least one multimedia data block is generated based on the prediction. A quantization of the residual multimedia data is performed based on a quantization parameter (QP) value. The QP value is variable such that varying the QP value controls a degree of masking of the ROI. | 2016-06-30 |
20160191924 | TRANSMISSION BIT-RATE CONTROL IN A VIDEO ENCODER - A video encoder receives a minimum number of bits (MIN) and a maximum number of bits (MAX) to be used to encode a segment of a sequence of image frames, the segment including a set of pictures contained in the sequence of image frames. The video encoder encodes the set of pictures using a total number of bits greater than the minimum number of bits (MIN), and not exceeding the maximum number of bits (MAX). Thus, the transmission bit-rate of the video encoder can be constrained to lie within a maximum and minimum rate. In an embodiment, the constraints are enforced over relatively short time intervals. | 2016-06-30 |
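The segment-level bit constraint in the abstract above (total bits greater than MIN and not exceeding MAX) can be illustrated with a naive allocator and a bounds check. This is a sketch under stated assumptions: splitting the midpoint of [MIN, MAX] evenly across pictures is an illustrative policy, not the encoder's actual rate control, which would weight pictures by type and adapt to actual usage:

```python
def allocate_picture_targets(num_pictures, min_bits, max_bits):
    """Naive per-picture bit targets: split the midpoint of [MIN, MAX]
    evenly across the segment's pictures (illustrative policy only)."""
    target_total = (min_bits + max_bits) // 2
    base = target_total // num_pictures
    targets = [base] * num_pictures
    targets[-1] += target_total - base * num_pictures  # absorb remainder
    return targets

def segment_within_bounds(actual_bits, min_bits, max_bits):
    """The abstract's constraint: the total number of bits used for the
    segment is greater than MIN and does not exceed MAX."""
    total = sum(actual_bits)
    return total > min_bits and total <= max_bits
```

Enforcing the check over each short segment is what constrains the transmission bit-rate to the desired band.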
20160191925 | TECHNIQUES FOR IMAGE BITSTREAM PROCESSING - Various embodiments are generally directed to techniques for reducing processing and/or storage resource requirements for retrieving an image from an image bitstream. A device to display compressed images includes a parsing component to parse a comment block of stream data to locate an entry associated with an image of multiple images, the stream data comprising an image bitstream of the multiple images, and the entry comprising a pointer to a location within the image bitstream at which a block of coefficients of a minimum coded unit (MCU) of the image begins and an indication of a coefficient of the block of coefficients; and a decoding component to retrieve the MCU from the image bitstream and employ the indication to decode the block of coefficients. Other embodiments are described and claimed. | 2016-06-30 |
20160191926 | SIGNALING INDICATIONS AND CONSTRAINTS - This invention introduces a modification to a syntax signaled in the slice segment header related to inter-layer prediction. A syntax optimization is proposed where, if syntax element NumActiveRefLayerPics is equal to syntax element NumDirectRefLayers[nuh_layer_id], then the inter_layer_pred_idc[i] syntax elements are not signaled. In this case the value of the inter_layer_pred_idc[i] syntax elements is inferred based on other syntax elements already signaled. | 2016-06-30 |
20160191927 | SCALABLE VIDEO ENCODER/DECODER WITH DRIFT CONTROL - A system, method and computer-readable media are introduced that relate to data coding and decoding. A computing device encodes received data such as video data into a base layer of compressed video and an enhancement layer of compressed video. The computing device controls drift introduced into the base layer of the compressed video. The computing device, such as a scalable video coder, allows drift by predicting the base layer from the enhancement layer information. The amount of drift is managed to improve overall compression efficiency. | 2016-06-30 |
20160191928 | IMAGE ENCODING AND DECODING METHOD SUPPORTING PLURALITY OF LAYERS AND APPARATUS USING SAME - An image decoding method supporting a plurality of layers according to the present invention may comprise the steps of: when an initial reference picture list of a current picture is configured, receiving flag information indicating whether reference picture set information of a reference layer to which the current picture refers is used; generating the initial reference picture list on the basis of the flag information; and predicting the current picture on the basis of the initial reference picture list. Accordingly, the present invention provides a method for generating a reference picture list including a picture of a layer, which is different from a layer to be currently encoded and decoded, and an apparatus using the same. | 2016-06-30 |
20160191929 | METHOD AND DEVICE FOR TRANSMITTING AND RECEIVING ADVANCED UHD BROADCASTING CONTENT IN DIGITAL BROADCASTING SYSTEM - The present invention relates to a method and a device for transmitting and receiving advanced UHD broadcasting content in a digital broadcasting system. The method for transmitting and receiving advanced UHD broadcasting content, according to one embodiment of the present invention, comprises the steps of: encoding data of a base layer; encoding data of one or more enhancement layers; encoding broadcast network program metadata including information on an advanced UHD broadcast program transmitted through a broadcast network, and encoding IP network program metadata including information on an advanced UHD broadcast program transmitted through an IP network; packetizing the encoded data of the base layer and/or the data of a first enhancement layer into a broadcast packet; packetizing the encoded data of a second enhancement layer into an IP packet; transmitting the broadcast packet through the broadcast network; and transmitting the IP packet through the IP network. | 2016-06-30 |
20160191930 | SCALABLE VIDEO CODING METHOD AND APPARATUS USING INTER PREDICTION MODE - The present invention relates to a scalable video coding method and apparatus using inter prediction mode. A decoding method includes determining motion information prediction mode on a target decoding block of an enhancement layer, predicting motion information on the target decoding block of the enhancement layer using motion information on the neighboring blocks of the enhancement layer, if the determined motion information prediction mode is a first mode, and predicting the motion information on the target decoding block of the enhancement layer using motion information on a corresponding block of a reference layer, if the determined motion information prediction mode is a second mode. | 2016-06-30 |
20160191931 | APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR VIDEO CODING AND DECODING - There are provided methods, apparatuses and computer program products for video coding and decoding. A first part of a first coded video representation is decoded, and information on decoding a second coded video representation is received and parsed. The second coded video representation differs from the first coded video representation in chroma format, sample bit depth, color gamut and/or spatial resolution, and the information indicates if the second coded video representation may be decoded using processed decoded pictures of the first coded video representation as reference pictures. If the information indicates that the second coded video representation may be decoded using processed decoded pictures of the first coded video representation as a prediction reference, decoded picture(s) of the first part is/are processed into processed decoded picture(s) by resampling and/or sample value scaling; and a second part of the second video representation is decoded using said processed decoded picture(s) as reference pictures. | 2016-06-30 |
20160191932 | IMAGE ENCODING METHOD, IMAGE DECODING METHOD, IMAGE ENCODING APPARATUS, AND IMAGE DECODING APPARATUS - An image encoding method includes: determining respective decoding times of a plurality of pictures included in a motion picture such that decoding times of a plurality of lower layer pictures which do not belong to a highest layer of a plurality of layers are spaced at regular intervals, and decoding timing for each of the plurality of lower layer pictures is identical between a case where the plurality of encoded pictures included in the motion picture are decoded and a case where only the plurality of lower layer pictures are decoded; encoding each of the plurality of pictures included in the motion picture in the encoding order according to the determined respective decoding times; and generating an encoded stream including the plurality of encoded pictures and the determined respective decoding times for the plurality of pictures. | 2016-06-30 |
20160191933 | IMAGE DECODING DEVICE AND IMAGE CODING DEVICE - Provided is an image decoding device including layer dependency information decoding means for decoding dependent layer information that is information indicating a layer, different from a target layer, having the possibility of being referenced by the target layer, actual dependent layer information decoding means for decoding information that indicates a picture, of a layer different from the target layer, referenced by a target picture of the target layer, and reference picture set deriving means for generating at least an inter-layer reference picture set on the basis of the actual dependent layer information, in which the layer dependency information decoding means decodes an actual layer dependency flag that indicates whether a picture of each layer belonging to the dependent layer list is referenced by the target picture. | 2016-06-30 |
20160191934 | METHOD TO OPTIMIZE THE QUALITY OF VIDEO DELIVERED OVER A NETWORK - A system and method for transcoding data. The system includes an adaptive transcoder that transcodes data to produce transcoded data having a first data rate, and transmits the transcoded data to a client device. The adaptive transcoder receives a quality signal. The adaptive transcoder transcodes the data at a second data rate in response to the adaptive transcoder determining that the quality signal indicates that the first data rate is deficient based on at least one of processing capabilities of the client device and network connection capabilities between the adaptive transcoder and the client device. | 2016-06-30 |
20160191935 | METHOD AND SYSTEM WITH DATA REUSE IN INTER-FRAME LEVEL PARALLEL DECODING - A multi-core decoder system and an associated method that use a decoding progress synchronizer to reduce bandwidth consumption when decoding a video bitstream are disclosed. In one embodiment of the present invention, the multi-core decoder system includes a shared reference data buffer coupled to the multiple decoder cores and an external memory. The shared reference data buffer stores reference data received from the external memory and provides the reference data to the multiple decoder cores for decoding video data. The multi-core decoder system also includes one or more decoding progress synchronizers coupled to the multiple decoder cores to detect decoding-progress information associated with the multiple decoder cores or status information of the shared reference data buffer, and to control decoding progress for the multiple decoder cores. | 2016-06-30 |
20160191936 | IMAGE DECODING APPARATUS AND METHOD FOR HANDLING INTRA-IMAGE PREDICTIVE DECODING WITH VARIOUS COLOR SPACES AND COLOR SIGNAL RESOLUTIONS - The present invention is directed to an image information decoding apparatus adapted for performing intra-image decoding based on resolution of color components and color space of an input image signal. An intra prediction unit serves to adaptively change block size in generating a prediction image based on a chroma format signal indicating whether resolution of color components is one of 4:2:0 format, 4:2:2 format, and 4:4:4 format, and a color space signal indicating whether color space is one of YCbCr, RGB, and XYZ. An inverse orthogonal transform unit and an inverse quantization unit serve to also change orthogonal transform technique and quantization technique in accordance with the chroma format signal and the color space signal. A decoding unit decodes the chroma format signal and the color space signal to generate a prediction image corresponding to the chroma format signal and the color space signal. | 2016-06-30 |
20160191937 | VIDEO DATA PROCESSING SYSTEM - The technology described herein facilitates parallel encoding of two groups of blocks of data of a sequence of blocks of data, whilst also facilitating the use of dependent encoding across the sequence of data blocks. This is achieved by allocating pairs of first and second groups of data blocks to separate encoding units, and determining an encoding parameter value to be used for encoding the first block of each second group of data blocks. For correct reconstruction of the image, it is ensured that a block belonging to the first group of data blocks of a pair of groups of data blocks is encoded with an encoding parameter value that will cause a decoder to use the determined encoding parameter value when decoding the first block of the second group. | 2016-06-30 |
20160191938 | MARKING PICTURES FOR INTER-LAYER PREDICTION - A method for video coding is described. Signaling of a maximum number of sub-layers for inter-layer prediction is obtained. A sub-layer non-reference picture is also obtained. It is determined whether a value of a temporal identifier of the sub-layer non-reference picture is greater than the maximum number of sub-layers for inter-layer prediction minus 1. The sub-layer non-reference picture is marked as “unused for reference” if the value of the temporal identifier of the sub-layer non-reference picture is greater than the maximum number of sub-layers for inter-layer prediction minus 1. In some cases a sub-layer non-reference picture is also obtained. It is determined whether a value of a temporal identifier of the sub-layer non-reference picture is greater than the maximum number of sub-layers for inter-layer prediction. The sub-layer non-reference picture is marked as “unused for reference” if the value of the temporal identifier of the sub-layer non-reference picture is greater than the maximum number of sub-layers for inter-layer prediction. | 2016-06-30 |
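The marking rule in the abstract above — a sub-layer non-reference picture whose temporal ID exceeds the signaled maximum minus 1 is marked "unused for reference" — reduces to a simple comparison per picture. The dict field names below are illustrative, not the syntax element names from the specification:

```python
def mark_unused_for_reference(pictures, max_sublayers_for_ilp):
    """Mark sub-layer non-reference pictures as 'unused for reference'
    when their temporal identifier is greater than the maximum number of
    sub-layers for inter-layer prediction minus 1. Each picture is a dict
    with 'temporal_id', 'is_sublayer_non_ref', and 'marking' keys
    (hypothetical field names)."""
    for pic in pictures:
        if (pic["is_sublayer_non_ref"]
                and pic["temporal_id"] > max_sublayers_for_ilp - 1):
            pic["marking"] = "unused for reference"
    return pictures
```

The abstract's second variant drops the "minus 1", i.e. compares against `max_sublayers_for_ilp` directly.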
20160191939 | Method and Apparatus for Removing Redundancy in Motion Vector Predictors - A method for video coding a current block coded in an Inter, Merge, or Skip mode. The method determines neighboring blocks of the current block, wherein a motion vector predictor (MVP) candidate set is derived from MVP candidates associated with the neighboring blocks. The method determines at least one redundant MVP candidate, if said MVP candidate is within a same PU (Prediction Unit) as another MVP candidate in the MVP candidate set. The method removes said at least one redundant MVP candidate from the MVP candidate set, and provides a modified MVP candidate set for determining a final MVP, wherein the modified MVP candidate set corresponds to the MVP candidate set with said at least one redundant MVP candidate removed. Finally, the method encodes or decodes the current block according to the final MVP. A corresponding apparatus is also provided. | 2016-06-30 |
20160191940 | METHOD AND DEVICE FOR VIDEO ENCODING OR DECODING BASED ON IMAGE SUPER-RESOLUTION - A method for video encoding based on an image super-resolution, the method including: 1) performing super-resolution interpolation on a video image to be encoded using a pre-trained texture dictionary database to yield a reference image; in which the texture dictionary database includes: one or multiple dictionary bases, and each dictionary basis includes a mapping group formed by a relatively high resolution image block of a training image and a relatively low resolution image block corresponding to the relatively high resolution image block; 2) performing motion estimation and motion compensation of image blocks of the video image on the reference image to acquire prediction blocks corresponding to the image blocks of the video image; 3) performing subtraction between the image blocks of the video image and the corresponding prediction blocks to yield prediction residual blocks, respectively; and 4) encoding the prediction residual blocks. | 2016-06-30 |
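Steps 3 and 4 of the abstract above hinge on simple residual arithmetic: subtract the motion-compensated prediction block from the source block, encode the residual, and (on the decoder side) add the residual back. A minimal sketch of that arithmetic, with quantization and entropy coding omitted:

```python
import numpy as np

def prediction_residuals(blocks, prediction_blocks):
    """Step 3: residual block = source block - prediction block.
    Blocks are equally-shaped uint8 arrays; the prediction blocks come
    from motion compensation against the super-resolved reference image."""
    return [b.astype(np.int16) - p.astype(np.int16)
            for b, p in zip(blocks, prediction_blocks)]

def reconstruct(residuals, prediction_blocks):
    """Decoder side: adding the residual back to the prediction recovers
    the block (exactly here, since nothing is quantized)."""
    return [(r + p.astype(np.int16)).astype(np.uint8)
            for r, p in zip(residuals, prediction_blocks)]
```

With quantization in the loop, reconstruction would be approximate rather than exact.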
20160191941 | APPARATUS FOR DECODING A MOVING PICTURE - Provided is an apparatus for decoding a moving picture. An inverse quantization/transformation unit generates a quantized block by inversely scanning a quantized coefficient sequence, generates a transform block by inversely quantizing the quantized block using a quantization step size, and generates a residual block by inversely transforming the transform block. An inter prediction unit generates a prediction block of a current prediction unit based on motion vector information. An adding unit generates a restored block using the residual block and the prediction block. When a motion information coding mode is a skip mode, the inter prediction unit restores motion information of the current prediction unit using an available spatial or temporal skip candidate. The temporal skip candidate includes a reference picture index and a motion vector, the reference picture index of the temporal skip candidate is set to 0, and the motion vector of the temporal skip candidate is the motion vector of the temporal skip candidate in a temporal skip candidate picture. The size of the prediction unit is the same as the size of a coding unit, and the prediction block is set as the restored block. | 2016-06-30 |
20160191942 | APPARATUS FOR DECODING A MOVING PICTURE - Provided is an apparatus for decoding a moving picture. An entropy decoding unit restores a quantization coefficient sequence from a bitstream. An inverse quantization/transform unit generates a residual block. An inter prediction unit generates a prediction block of a current block based on motion vector information. When the current block is encoded in skip mode, motion information of the current block is restored using an available spatial or temporal skip candidate and the prediction block of the current block is generated using the motion information. The temporal skip candidate includes a reference picture index and a motion vector, the reference picture index of the temporal skip candidate is set to 0, and the motion vector of the temporal skip candidate is the motion vector of the temporal skip candidate in a temporal skip candidate picture. A scan pattern for inversely scanning a plurality of subsets is the same as a scan pattern for inversely scanning coefficients of each subset. The quantization step size is generated by adding a quantization step size predictor and a remaining quantization step size. | 2016-06-30 |
20160191943 | APPARATUS FOR DECODING A MOVING PICTURE - Provided is an apparatus for decoding a moving picture. An entropy decoding unit restores a quantization coefficient sequence from a bitstream. An inverse quantization/transform unit generates a quantized block by inversely scanning the quantization coefficient sequence in a unit of subset when a size of a transform unit is larger than 4×4 and generates a residual block. An inter prediction unit generates a prediction block of a current block based on motion vector information. When the prediction block is encoded in merge mode, motion information is restored using an available spatial or temporal merge candidate and the prediction block is generated using the motion information. The temporal merge candidate includes a reference picture index and a motion vector, the reference picture index of the temporal merge candidate is set to 0, and a motion vector of the temporal merge candidate is a motion vector of the temporal merge candidate in a temporal merge candidate picture. A scan pattern for inversely scanning the plurality of subsets is the same as a scan pattern for inversely scanning coefficients of each subset. | 2016-06-30 |
20160191944 | METHOD FOR DECODING A MOVING PICTURE - Provided is a method for decoding a moving picture. The method has the steps of generating a prediction block of a current block and generating a residual block of the current block. To generate the prediction block, a reference picture index and motion vector difference of the current block are obtained from a received bit stream, and spatial and temporal motion vector candidates are derived to construct a motion vector candidate list. A motion vector candidate corresponding to a motion vector index is determined as a motion vector predictor, and a motion vector of the current prediction unit is restored to generate the prediction block of the current block. Therefore, a motion vector encoded efficiently using spatial and temporal motion vector candidates is correctly recovered, and the complexity of the decoder is reduced. | 2016-06-30 |
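The decoding steps above can be sketched as: build a motion vector candidate list from spatial then temporal candidates, pick the candidate at the signalled index as the predictor, and restore the motion vector by adding the received motion vector difference. The list size and availability rules below are assumptions for illustration, not the patent's exact derivation:

```python
# Hedged sketch of motion vector restoration (structure assumed): spatial
# candidates first, then temporal, skipping unavailable and duplicate
# entries, truncated to a fixed list size.

def build_candidate_list(spatial, temporal, max_candidates=2):
    """Keep available (non-None) candidates, spatial first, no duplicates."""
    candidates = []
    for mv in spatial + temporal:
        if mv is not None and mv not in candidates:
            candidates.append(mv)
        if len(candidates) == max_candidates:
            break
    return candidates

def restore_motion_vector(candidates, mv_index, mvd):
    """Restored MV = selected motion vector predictor + MV difference."""
    px, py = candidates[mv_index]
    return (px + mvd[0], py + mvd[1])

cands = build_candidate_list([(3, 1), None, (3, 1)], [(0, -2)])
print(cands)                                    # [(3, 1), (0, -2)]
print(restore_motion_vector(cands, 1, (2, 2)))  # (2, 0)
```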
20160191945 | METHOD AND SYSTEM FOR PROCESSING VIDEO CONTENT - Various aspects of a method and system to process video content are disclosed herein. In accordance with an embodiment, the method includes determination of a plurality of motion vectors based on a pixel displacement of one or more pixels. The pixel displacement of the one or more pixels in a current frame of a video content is determined with respect to corresponding one or more pixels in a previous frame of the video content. Based on the determined plurality of motion vectors, motion direction of the current frame is extracted. Based on the extracted motion direction, real-time motion annotation information of the current frame is determined. | 2016-06-30 |
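The direction-extraction and annotation steps above admit a minimal sketch: take the angle of the mean motion vector of the frame, then quantize it to a coarse annotation label. The mean-vector estimate and the four-way labels are assumptions for illustration:

```python
import math

# Illustrative sketch (function names and label set assumed): the motion
# direction of the current frame is the angle of the mean of the
# per-pixel motion vectors, mapped to a coarse annotation label.

def motion_direction(motion_vectors):
    """Mean-vector angle in degrees, counter-clockwise from the +x axis."""
    n = len(motion_vectors)
    mean_dx = sum(dx for dx, dy in motion_vectors) / n
    mean_dy = sum(dy for dx, dy in motion_vectors) / n
    return math.degrees(math.atan2(mean_dy, mean_dx))

def annotate(angle):
    """Quantize the angle into one of four annotation labels."""
    labels = ["right", "up", "left", "down"]
    return labels[int(((angle + 45) % 360) // 90)]

vectors = [(1, 0), (2, 1), (3, -1)]         # mostly rightward displacement
print(annotate(motion_direction(vectors)))  # right
```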
20160191946 | COMPUTATIONALLY EFFICIENT MOTION ESTIMATION - The detailed description presents innovations in performing motion estimation during digital video media encoding. In one example embodiment, motion estimation is performed using a lower-complexity sub-pixel interpolation filter configured to compute sub-pixel values for two or more candidate prediction regions at a sub-pixel offset, the two or more candidate prediction regions being located in one or more reference frames. For a selected one of the candidate prediction regions at the sub-pixel offset, motion compensation is performed using a higher-complexity sub-pixel interpolation filter. | 2016-06-30 |
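The two-tier idea above can be sketched with concrete filters (chosen for illustration, not named by the patent): a cheap 2-tap bilinear filter interpolates half-pel values for every candidate region during motion estimation, while the single selected region is re-interpolated for motion compensation with a longer 6-tap kernel, here the H.264-style [1, -5, 20, 20, -5, 1]/32 filter:

```python
# Sketch of lower- vs higher-complexity sub-pixel interpolation on one
# row of integer samples (filters are illustrative choices).

def halfpel_bilinear(row, i):
    """Low-complexity half-pel value between samples i and i+1."""
    return (row[i] + row[i + 1] + 1) // 2

def halfpel_sixtap(row, i):
    """Higher-complexity 6-tap half-pel value between samples i and i+1."""
    taps = (1, -5, 20, 20, -5, 1)
    acc = sum(t * row[i - 2 + k] for k, t in enumerate(taps))
    return min(255, max(0, (acc + 16) >> 5))

row = [10, 12, 40, 60, 30, 20, 15, 10]
print(halfpel_bilinear(row, 3))  # 45 -- cheap estimate used during ME
print(halfpel_sixtap(row, 3))    # 48 -- refined value used during MC
```

During motion estimation the bilinear value is computed for many candidate regions; only the winner pays the cost of the 6-tap filter.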
20160191947 | METHOD FOR SELECTING MOTION VECTOR PREDICTOR AND DEVICE USING SAME - A method for selecting a motion vector predictor is provided. The method of selecting a motion vector predictor includes the steps of selecting motion vector predictor candidates for a current block and selecting a motion vector predictor of the current block out of the motion vector predictor candidates, wherein the motion vector predictor candidates for the current block include a motion vector of a first candidate block which is first searched for as an available block out of left neighboring blocks of the current block and a motion vector of a second candidate block which is first searched for as an available block out of upper neighboring blocks of the current block. | 2016-06-30 |
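The candidate selection above reduces to two scans: take the motion vector of the first available block among the left neighbors, and likewise among the upper neighbors. A minimal sketch with simplified availability rules (the real criteria depend on coding mode and reference pictures):

```python
# Minimal sketch of first-available-neighbor selection (availability
# flags are given directly here; a real decoder derives them).

def first_available(blocks):
    """blocks: list of dicts with an 'available' flag and an 'mv'.
    Returns the first available block's motion vector, or None."""
    for block in blocks:
        if block["available"]:
            return block["mv"]
    return None

def mvp_candidates(left_blocks, upper_blocks):
    """One candidate from the left scan, one from the upper scan."""
    candidates = []
    for side in (left_blocks, upper_blocks):
        mv = first_available(side)
        if mv is not None:
            candidates.append(mv)
    return candidates

left = [{"available": False, "mv": (9, 9)}, {"available": True, "mv": (1, 2)}]
upper = [{"available": True, "mv": (0, -1)}]
print(mvp_candidates(left, upper))  # [(1, 2), (0, -1)]
```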
20160191948 | Motion Estimation in a Video Sequence - Aspects of the present invention are related to systems and methods for determining local-analysis-window size and weighting parameters in a gradient-based motion estimation system. | 2016-06-30 |
20160191949 | CONDITIONAL SIGNALLING OF REFERENCE PICTURE LIST MODIFICATION INFORMATION - Innovations in signaling of reference picture list (“RPL”) modification information. For example, a video encoder evaluates a condition that depends at least in part on a variable indicating a number of total reference pictures. Depending on the results of the evaluation, the encoder signals in a bitstream a flag that indicates whether an RPL is modified according to syntax elements explicitly signaled in the bitstream. A video decoder evaluates the condition and, depending on results of the evaluation, parses from a bitstream a flag that indicates whether an RPL is modified according to syntax elements explicitly signaled in the bitstream. The condition can be evaluated as part of processing for an RPL modification structure that includes the flag, or as part of processing for a slice header. The encoder and decoder can also evaluate other conditions that affect syntax elements for list entries of the RPL modification information. | 2016-06-30 |
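The conditional signalling above can be sketched as a guard shared by encoder and decoder: the RPL-modification flag is written and parsed only when the condition on the number of total reference pictures holds; otherwise the flag is absent and inferred. The variable name and the exact condition below are illustrative:

```python
# Hedged sketch (condition and names assumed): with only one reference
# picture no reordering is possible, so the flag is neither written nor
# parsed and the decoder infers "not modified".

def write_rpl_modification(bitstream, num_pics_total, rpl_modified):
    if num_pics_total > 1:            # condition evaluated by the encoder
        bitstream.append(1 if rpl_modified else 0)

def read_rpl_modification(bitstream, num_pics_total):
    if num_pics_total > 1:            # same condition at the decoder
        return bool(bitstream.pop(0))
    return False                      # flag absent: RPL not modified

bits = []
write_rpl_modification(bits, 3, True)
print(read_rpl_modification(bits, 3))  # True
print(read_rpl_modification([], 1))    # False (nothing was signalled)
```

Keeping the condition identical on both sides is what makes the flag safely omittable from the bitstream.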
20160191950 | Reference Picture List Management Syntax for Multiple View Video Coding - A picture reference list ordering process is defined for a multiview coder for coding moving pictures, where the picture list specifies the coding order of the reference pictures used to code a picture in relation to whether the picture to be coded is associated with a view. The ordering of the picture list will therefore change the coding order of the reference pictures in the picture reference list depending on the temporal relationships the reference pictures have with the picture to be coded and the views associated with the reference pictures. | 2016-06-30 |
20160191951 | SECONDARY BOUNDARY FILTERING FOR VIDEO CODING - In one example, a video coding device is configured to intra-predict a block of video data, using values of pixels along a primary boundary of the block, to form a predicted block, determine whether to filter the predicted block using data of a secondary boundary of the block, and filter the predicted block using data of the secondary boundary in response to determining to filter the predicted block. The video coding device may determine whether to filter the predicted block based on a comparison of a Laplacian value or a gradient difference value to a threshold. The determination of whether to filter the predicted block may be based at least in part on a boundary relationship, e.g., the relationship of one boundary to another, or of a boundary to pixel values of the predicted block. | 2016-06-30 |
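The filtering decision above can be sketched with the Laplacian test: the predicted block is filtered with secondary-boundary data only when a discrete Laplacian measured across the boundary exceeds a threshold. The 1-D form and the threshold value are assumptions for illustration:

```python
# Sketch of the threshold decision (threshold value assumed): for three
# consecutive samples a, b, c spanning the boundary, the 1-D discrete
# Laplacian is a - 2*b + c; a large magnitude signals an edge worth
# smoothing with secondary-boundary filtering.

def should_filter(a, b, c, threshold=8):
    return abs(a - 2 * b + c) > threshold

print(should_filter(10, 11, 12))  # False: smooth ramp, Laplacian is 0
print(should_filter(10, 40, 12))  # True: strong edge at the boundary
```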
20160191952 | DEGRADATION COMPENSATION APPARATUS, DISPLAY DEVICE INCLUDING THE DEGRADATION COMPENSATION APPARATUS, AND DEGRADATION COMPENSATION METHOD - A degradation compensation apparatus including: a calculator provided with gray data regarding a plurality of consecutive frames, the calculator calculating and outputting a frame degradation amount of a current frame, which indicates a degree of degradation of the current frame; a memory accumulating and storing the frame degradation amount of the current frame and outputting a cumulative degradation amount, which is an accumulated degree of degradation of frames up to the current frame; and a data corrector correcting the gray data for a subsequent frame based on the cumulative degradation amount. Each of the plurality of consecutive frames includes first and second blocks each having a plurality of pixels, and the frame degradation amount is calculated based on one of the pixels included in the first block and one of the pixels included in the second block. | 2016-06-30 |
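The calculator/memory/corrector loop above can be sketched end to end. The linear compensation gain and the drive-level degradation model below are assumptions for illustration, not the patented formulas:

```python
# Simplified sketch of the compensation loop: each frame's degradation
# amount is accumulated (memory), and the gray data of the subsequent
# frame is boosted to offset the accumulated loss (data corrector).

class DegradationCompensator:
    def __init__(self, decay_per_unit=0.05):
        self.cumulative = 0.0        # memory: accumulated degradation
        self.decay = decay_per_unit  # assumed linear compensation model

    def process_frame(self, gray_data):
        # data corrector: boost gray levels for degradation accumulated
        # over all frames before this one
        gain = 1.0 + self.cumulative * self.decay
        corrected = [min(255, round(g * gain)) for g in gray_data]
        # calculator: frame degradation grows with the average drive level
        self.cumulative += sum(gray_data) / len(gray_data) / 255.0
        return corrected

comp = DegradationCompensator()
frame = [100, 200, 50]
out1 = comp.process_frame(frame)  # first frame: no accumulated loss yet
out2 = comp.process_frame(frame)  # subsequent frame: slightly boosted
print(out1, out2)
```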
20160191953 | ARITHMETIC ENCODING-DECODING METHOD AND CODEC FOR COMPRESSION OF VIDEO IMAGE BLOCK - An arithmetic encoding-decoding method for compression of a video image block. The method includes an encoding process and a decoding process. The encoding process includes: 1) acquiring an information of an image block to be encoded; 2) extracting an encoding command of a weighted skip model; 3) acquiring an index of a reference frame according to the information of the image block to be encoded and the command of the weighted skip model, in which the reference frame includes a prediction block for reconstructing the image block to be encoded; 4) acquiring a context-based adaptive probability model for encoding; and 5) performing arithmetic encoding of the index of the reference frame and writing arithmetic codes into an arithmetically encoded bitstream according to the context-based adaptive probability model for encoding. | 2016-06-30 |
20160191954 | AUTOMATED VIDEO CONTENT PROCESSING - Video content is processed for delivery using an automated process that allows for convenient packaging of encrypted or digital rights management (DRM) protected content in a manner such that the packaged content can be efficiently stored in a content delivery network (CDN) or other content source for subsequent re-use by other media clients without re-packaging, and without excessive storage of unused content data. | 2016-06-30 |
20160191955 | SOFTWARE DEFINED NETWORKING IN A CABLE TV SYSTEM - Systems and methods presented herein provide for a software defined network (SDN) controller in a cable television system that virtualizes network elements in the cable television system to provide content delivery and data services through the virtualized network elements. In one embodiment, the SDN controller is operable in a cloud computing environment to balance data traffic through the virtualized network elements. For example, the SDN controller may process a request for content from a user equipment (UE), determine a bandwidth capability of the UE, determine that the bandwidth of the requested content exceeds the bandwidth capability of the UE, analyze the bandwidth capacity of the network elements, generate a virtual channel through the network elements based on the bandwidth capacity of the network elements, and deliver the content to the UE through the virtual channel. | 2016-06-30 |
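The controller's decision flow above can be sketched as a capacity check: cap the delivery rate at what the UE can accept, then assemble a virtual channel from network elements whose combined spare capacity covers that rate. All names and the greedy assembly strategy are illustrative assumptions:

```python
# Hedged sketch of virtual-channel generation (element model and greedy
# strategy assumed): pick the highest-capacity elements until their
# combined spare capacity carries the deliverable rate.

def build_virtual_channel(content_mbps, ue_mbps, elements):
    """elements: {name: spare capacity in Mbps}. Returns the list of
    element names forming the virtual channel, or None if infeasible."""
    rate = min(content_mbps, ue_mbps)  # never exceed what the UE can take
    channel, total = [], 0.0
    for name, spare in sorted(elements.items(), key=lambda kv: -kv[1]):
        channel.append(name)
        total += spare
        if total >= rate:
            return channel
    return None

print(build_virtual_channel(40, 25, {"cmts1": 15, "cmts2": 12, "node3": 4}))
# ['cmts1', 'cmts2'] -- 27 Mbps combined covers the 25 Mbps the UE can use
```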
20160191956 | SOFTWARE DEFINED NETWORKING IN A CABLE TV SYSTEM - Systems and methods presented herein provide for a software defined network (SDN) controller in a cable television system that virtualizes network elements in the cable television system to provide content delivery and data services through the virtualized network elements. In one embodiment, the SDN controller is operable in a cloud computing environment to balance data traffic through the virtualized network elements. For example, the SDN controller may abstract Layer 2 Control Protocol (L2CP) frame processing of the network elements into the cloud computing environment to relieve the network elements from the burdens of Ethernet frame processing. In this regard, the SDN controller comprises a L2CP decision module that determines how L2CP should be processed for the network elements in the cable television system. | 2016-06-30 |
20160191957 | LULL MANAGEMENT FOR CONTENT DELIVERY - Primary media content played on a media device, such as a television, handheld device, smart phone, computer, or other device, is sampled and data is derived from the sample for identification of the primary media content. Supplementary digital content is then selected and transmitted to the media device, or to another device, based upon the identified primary media content. The supplementary digital content may be adapted in layout, type, length, or other manners, based upon the platform and/or configuration of the media device or any other device to which the supplementary digital content is transmitted. | 2016-06-30 |
20160191958 | Systems and methods of providing contextual features for digital communication - Embodiments disclosed herein may be directed to a video communication server for: receiving, using a communication unit comprised in at least one processing device, video content of a video communication connection between a first user of a first user device and a second user of a second user device; analyzing, using a graphical processing unit (GPU) comprised in the at least one processing device, the video content in real time; identifying, using a recognition unit comprised in the at least one processing device, at least one object of interest comprised in the video content; identifying, using a features unit comprised in the at least one processing device, at least one contextual feature associated with the at least one identified object of interest; and presenting, using an input/output (I/O) device, the at least one contextual feature to at least one of the first user device and the second user device. | 2016-06-30 |