7th week of 2019 patent application highlights part 72 |
Patent application number | Title | Published |
20190052813 | METHODS AND APPARATUS FOR IMPLEMENTING ZOOM USING ONE OR MORE MOVEABLE CAMERA MODULES - Methods and apparatus for implementing a camera device including multiple camera modules and which supports zoom operations are described. A plurality of moveable camera modules are included with the position of the moveable camera modules being controlled as a function of a zoom setting. One or more fixed camera modules are also included to facilitate image combining. A fixed camera module having a smaller focal length than the movable camera modules is used in some embodiments to capture a scene area including scene area portions which will be captured by movable camera modules. The image captured by the fixed camera module, with the small focal length, is used in aligning images captured by the movable camera modules during generation of a composite image. The camera may also include another fixed camera module, e.g., having the same focal length as the movable camera modules, for capturing the center of a scene. | 2019-02-14 |
20190052814 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM - An information processing apparatus includes an obtaining unit configured to obtain first setting information including information related to a first specified area in an image and related to image analysis processing and second setting information including information related to a second specified area in the image and related to the image analysis processing of the same type as the first setting information, wherein the second setting information has been created before the first setting information, a determination unit configured to determine an overlapped area of the first and second specified areas on a basis of the first setting information and the second setting information, and a decision unit configured to decide processing to be executed with respect to at least one of the first setting information and the second setting information in accordance with a determination result of the determination unit. | 2019-02-14 |
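The overlap determination and follow-up decision described in 20190052814 above can be sketched as follows. This is a minimal illustration assuming axis-aligned rectangular specified areas given as (x, y, width, height); the function names, the threshold, and the decision labels are assumptions for illustration, not taken from the application.

```python
def overlap_area(a, b):
    """Return the overlapped area of two rectangles (x, y, w, h); 0 if disjoint."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    w = min(ax + aw, bx + bw) - max(ax, bx)
    h = min(ay + ah, by + bh) - max(ay, by)
    return max(w, 0) * max(h, 0)

def decide(first_setting, second_setting, threshold=0.5):
    """Decide what to do with the earlier (second) setting based on how much
    of it the newer (first) setting overlaps -- an illustrative policy."""
    shared = overlap_area(first_setting["area"], second_setting["area"])
    _, _, w, h = second_setting["area"]
    ratio = shared / (w * h)
    return "replace_older" if ratio >= threshold else "keep_both"
```

With a 50% threshold, a fully contained older area triggers replacement, while a disjoint one keeps both settings.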
20190052815 | DUAL-CAMERA IMAGE PICK-UP APPARATUS AND IMAGE CAPTURING METHOD THEREOF - A dual-camera image pick-up apparatus and an image capturing method thereof are provided. The dual-camera image pick-up apparatus includes a first camera, a second camera, a first buffer, a second buffer and an image signal processor (ISP). The second buffer is coupled to the second camera and configured to receive and temporarily store a raw file captured by the second camera. The ISP is coupled to the first camera, the second camera, the first buffer and the second buffer, and configured to respectively control the first camera and the second camera to capture image signals and output a first raw file and a second raw file, process the image signals in the first raw file to output a first image to the first buffer, and afterward receive the temporarily stored second raw file from the second buffer, and process the image signals in the second raw file to output a second image to the first buffer. | 2019-02-14 |
20190052816 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - The present disclosure relates to an information processing apparatus and information processing method that are aimed at enabling arrangements of a plurality of photographing devices to be easily set to an optimum arrangement for generation of three-dimensional data. An evaluation section calculates an evaluation value of an arrangement for generation of the three-dimensional data on the basis of the arrangements of the plurality of photographing devices that photograph two-dimensional image data used to generate the three-dimensional data of a photographic object. For example, the present disclosure is applicable to the information processing apparatus etc. that display information indicating the arrangements of the plurality of photographing devices that photograph the two-dimensional image data used to generate the three-dimensional data of the photographic object. | 2019-02-14 |
20190052817 | A DISPLAY EXPOSURE MONITORING MODULE - A display exposure monitoring module configured to monitor the exposure of a person to at least a display, the module including a communication component configured to receive display exposure data indicative of the exposure of the person to a display, a memory storing computer executable instructions and configured to store the received display exposure data; and a processor for executing the computer executable instructions, wherein the computer executable instructions includes instructions for processing the display exposure data to generate display exposure information indicative of an alert information and/or a visual behavior recommendation and/or an activation parameter. | 2019-02-14 |
20190052818 | IMAGING SYSTEMS WITH PULSE DETECTION FOR SEARCH AND RESCUE - An imaging method includes imaging a scene having a pulsed light source and associating a symbol with the light source. The image is enhanced by inserting a symbol into the image indicative of location of the pulsed light source in the scene. The symbol overlays the image in spatial registration with the location of the pulsed light source in the scene to augment indication of the location provided by the pulsed light source. Imaging systems are also described. | 2019-02-14 |
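The pulse detection step in 20190052818 above can be sketched with a simple frame-differencing model: the pixel that toggles most often across a stack of binary frames is treated as the pulsed source, and its location is where the overlay symbol would be registered. The toggle-count approach and the `min_toggles` parameter are illustrative assumptions, not the claimed detector.

```python
def find_pulsed_source(frames, min_toggles=2):
    """Locate the pixel that blinks the most across a stack of frames
    (nested lists of 0/1 values); return (row, col), or None if no pixel
    toggles at least min_toggles times."""
    h, w = len(frames[0]), len(frames[0][0])
    best, best_pos = -1, None
    for r in range(h):
        for c in range(w):
            toggles = sum(frames[i][r][c] != frames[i + 1][r][c]
                          for i in range(len(frames) - 1))
            if toggles > best:
                best, best_pos = toggles, (r, c)
    return best_pos if best >= min_toggles else None
```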
20190052819 | METHODS, APPARATUS AND ARTICLES OF MANUFACTURE TO PROTECT SENSITIVE INFORMATION IN VIDEO COLLABORATION SYSTEMS - Methods, apparatus, systems and articles of manufacture to protect sensitive information in video collaboration systems are disclosed. A disclosed example method includes an analytics engine to recognize a feature in a first frame of a first video stream, a policy enforcer to apply an obscuration policy to the recognized feature to identify whether to mask the recognized feature, and a masker to obscure the recognized feature in the first frame to form a second frame in a second video stream. | 2019-02-14 |
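The policy-enforcer and masker stages in 20190052819 above can be sketched as two small functions: one selects which recognized features the obscuration policy covers, the other blanks those regions to form the frame for the second stream. The policy-as-label-set model and the zero-fill masking are illustrative assumptions.

```python
def apply_policy(features, policy):
    """Return the subset of recognized features whose label is covered by the
    obscuration policy (modelled here as a set of sensitive labels)."""
    return [f for f in features if f["label"] in policy]

def mask_frame(frame, to_mask):
    """Zero out each selected feature's bounding box (x, y, w, h) in a copy of
    the frame (a list of pixel rows), forming the second-stream frame."""
    out = [row[:] for row in frame]
    for f in to_mask:
        x, y, w, h = f["box"]
        for r in range(y, y + h):
            for c in range(x, x + w):
                out[r][c] = 0
    return out
```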
20190052820 | An Event-Based Vision Sensor - According to the present invention there is provided a vision sensor comprising, an array of pixels ( | 2019-02-14 |
20190052821 | A Vision Sensor, a Method of Vision Sensing, and a Depth Sensor Assembly - According to the present invention there is provided a vision sensor comprising, an array of pixels comprising rows and columns of pixels, wherein each pixel in the array comprises, a photosensor which is configured to output a current proportional to the intensity of light which is incident on the photosensor; a current source which is configured such that it can output a current which has a constant current level which is equal to the current level of the current output by the photosensor at a selected first instant in time, and can maintain that constant current level even if the level of the current output from the photosensor changes after said selected first instant in time; an integrator which is configured to integrate the difference between the level of current output by the current source and the level of current output by the photosensor, after the selected first instant in time; wherein the vision sensor further comprises a counter which can measure time, wherein the counter is configured such that it can begin to measure time at the selected first instant; and wherein each pixel in the array further comprises a storage means which can store the value on the counter at a second instant in time, the second instant in time being the instant when the integration of the difference between the level of current output by the current source and the level of current output by the photosensor of that pixel reaches a predefined threshold level. There is further provided a corresponding method of vision sensing, and a depth sensor assembly which comprises the vision sensor. | 2019-02-14 |
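The per-pixel mechanism in 20190052821 above can be modelled in a few lines: the current source latches the photocurrent at a first instant, an integrator accumulates the difference thereafter, and the counter value is latched when the accumulated difference reaches a threshold. This discrete-time simulation is a sketch of that behavior, not the claimed circuit; the sampling model and units are assumptions.

```python
def time_to_threshold(samples, threshold, dt=1.0):
    """Simulate one pixel: latch the photocurrent at tick 0, integrate the
    difference between the latched level and subsequent samples, and return
    the counter value (tick) at which |integral| reaches the threshold,
    or None if it never does."""
    reference = samples[0]                   # current source latched at t0
    acc = 0.0
    for tick, current in enumerate(samples):
        acc += (reference - current) * dt    # integrator of the difference
        if abs(acc) >= threshold:
            return tick                      # storage means latches the counter
    return None
```

A fast intensity change crosses the threshold at an early counter value; an unchanged pixel never latches.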
20190052822 | SIGNAL PROCESSING DEVICE, IMAGING ELEMENT, AND ELECTRONIC APPARATUS - A signal processing device includes a comparison unit to compare a signal level of an analog signal with a signal level of a reference signal; a selection unit configured to select the reference signal to be supplied to the comparison unit; and a switching unit capable of switching a signal line connected to an input terminal of the comparison unit such that a signal line via which the selected reference signal is transmitted is connected to the input terminal of the comparison unit, wherein the comparison unit includes a floating node as the input terminal, the selection unit includes a signal line in which a parasitic capacitance is caused between the signal line and the floating node as the input terminal of the comparison unit, and the signal line of the selection unit is configured to transmit an identical level of signal in multiple comparison processes of the comparison unit. | 2019-02-14 |
20190052823 | IMAGE SENSOR FOR COMPENSATING FOR SIGNAL DIFFERENCE BETWEEN PIXELS - An image sensor includes two or more phase-difference detection pixels disposed adjacent to each other, a plurality of general pixels spaced apart from the phase-difference detection pixels, first and second peripheral pixels, and first to third light shields. The first and second peripheral pixels are adjacent to the phase-difference detection pixels, and between the phase-difference detection pixels and the general pixels. The first light shield is disposed in one of the general pixels and has a first width. The second light shield extends into the first peripheral pixel from a first area between the phase-difference detection pixels and the first peripheral pixel, and has a second width different from the first width. The third light shield extends into the second peripheral pixel from a second area between the phase-difference detection pixels and the second peripheral pixel, and has a third width different from the first width. | 2019-02-14 |
20190052824 | PHOTOELECTRIC CONVERSION APPARATUS AND PHOTOELECTRIC CONVERSION SYSTEM - In a photoelectric conversion apparatus, a pixel transistor and a differential transistor form a differential pair. A clamp circuit clamps a gate voltage of the differential transistor. An output circuit performs a first operation in which a voltage based on the voltage at the gate of a pixel transistor is output to the gate of the differential transistor. The output circuit also performs a second operation in which in response to receiving a current from the differential transistor, a signal based on a result of a comparison between the gate voltage of the pixel transistor and the gate voltage of the differential transistor is output to the output node. In the second operation, a control unit in the output circuit controls a change in the drain voltage of the differential transistor to be smaller than a change in the voltage at the output node. | 2019-02-14 |
20190052825 | IMAGE SENSOR, ELECTRONIC APPARATUS, COMPARATOR, AND DRIVE METHOD - The present technology relates to an image sensor, an electronic apparatus, a comparator, and a drive method which enable achievement of a noise reduction while maintaining high speed of AD conversion. An ADC for performing AD conversion for an electrical signal output from a pixel includes a comparator that compares the electrical signal and a reference signal, a level of which is changed and a counter that counts time necessary for a change of the reference signal to a coincidence of the electrical signal and the reference signal on the basis of output signals from the comparator. The comparator includes a differential amplifier that outputs a comparison result signal indicating a comparison result obtained by comparing the electrical signal and the reference signal and a plurality of output amplifiers that outputs signals obtained by amplifying the comparison result signal output from the differential amplifier as the output signals at different timings. The present technology can be applied to, for example, an ADC that performs AD conversion for an electrical signal output from a pixel. | 2019-02-14 |
20190052826 | COMPARATOR CIRCUIT, SOLID-STATE IMAGING APPARATUS, AND ELECTRONIC DEVICE - The present technology relates to a comparator circuit, a solid-state imaging apparatus, and an electronic device which make it possible to improve a frame rate. A comparator compares an analog signal with a reference signal, an amplification stage amplifies output of a comparing unit and has different output change speeds in normal rotation and in reverse rotation, and a switch circuit fixes an input node or an output node of the amplification stage to a predetermined voltage in a predetermined period before a comparing operation by the comparator so that the amplification stage operates in a change direction having a higher output change speed. The present technology can be applied to a comparator circuit provided to an A/D converter of a CMOS image sensor. | 2019-02-14 |
20190052827 | A/D CONVERTER, SOLID-STATE IMAGING DEVICE, METHOD FOR DRIVING SOLID-STATE IMAGING DEVICE, AND ELECTRONIC APPARATUS - An A/D converter includes a reference voltage generating circuit that generates a reference voltage of a ramp waveform in which a voltage value changes with time, a gray code generating circuit that outputs a gray code based on a same reference clock as the reference voltage generating circuit, a comparison circuit that compares the reference voltage with an input voltage, a latch circuit that holds a count value of the gray code based on an output signal of the comparison circuit, a code conversion circuit that serially converts the count value of the gray code held in the latch circuit into a binary code, and a calculation processing circuit that stores a count value of the binary code output from the code conversion circuit, and performs calculation processing based on the stored count value of the binary code and a next input count value of the binary code. | 2019-02-14 |
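The code conversion and calculation steps in 20190052827 above rest on the standard Gray-to-binary relation: each binary bit is the XOR of all Gray bits of equal or higher significance. The sketch below shows that conversion plus the described calculation of subtracting a stored count from the next input count (as in correlated double sampling); the function names and the subtraction-based calculation are illustrative assumptions.

```python
def gray_to_binary(g):
    """Convert a Gray-code count to binary by XOR-folding the higher bits."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def binary_to_gray(b):
    """Inverse mapping, for reference: gray = b XOR (b >> 1)."""
    return b ^ (b >> 1)

def cds(stored_code, next_code):
    """Calculation sketched in the abstract: subtract the stored binary count
    from the next input binary count."""
    return gray_to_binary(next_code) - gray_to_binary(stored_code)
```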
20190052828 | IMAGING ELEMENT AND ELECTRONIC DEVICE - The present technology relates to an imaging element and an electronic device that enable pixels to flexibly share a charge voltage converting unit. The imaging element includes a pixel array unit in which pixels respectively having charge voltage converting units and switches are arranged, and the charge voltage converting units of the plurality of pixels are connected to a signal line in parallel via the respective switches. The present technology is applied to, for example, a Complementary Metal Oxide Semiconductor (CMOS) image sensor in which pixels share a charge voltage converting unit. | 2019-02-14 |
20190052829 | IMAGE PICKUP APPARATUS AND METHOD UTILIZING THE SAME LINE RATE FOR UPSCALING AND OUTPUTTING IMAGE - An image pickup apparatus includes an image sensor array, a scaling circuit, an output circuit and a timing control circuit. The image sensor array reads N pixel lines according to a read timing control signal to capture N lines of pixel data of a source image. The scaling circuit receives the N lines of pixel data according to a scaling timing control signal, and refers to a scaling factor to scale up the source image to generate an upscaled image having M lines of pixel data. M is a positive integer greater than N. The output circuit outputs the M lines of pixel data according to an output timing. The timing control circuit determines a receiving timing according to the output timing and the scaling factor to generate the scaling timing control signal, and determines a read timing according to the receiving timing to generate the read timing control signal. | 2019-02-14 |
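The timing relationship in 20190052829 above, where M output lines are produced from N source lines at the same line rate, can be sketched as a schedule that maps each output line slot to the source line that must have been read by then. The nearest-neighbor line mapping and the unit line period are simplifying assumptions, not the claimed timing circuit.

```python
def read_schedule(n_in, m_out, line_period=1.0):
    """For each of m_out output line slots, pair the slot's start time with
    the source line (of n_in) it is scaled from, using the N/M scaling factor."""
    schedule = []
    for out_line in range(m_out):
        src_line = out_line * n_in // m_out   # scaling factor N/M
        schedule.append((out_line * line_period, src_line))
    return schedule
```

Upscaling 2 source lines to 4 output lines, for example, repeats each source line over two consecutive output slots.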
20190052830 | Methods and Systems for Thermal Image Display - An example method is performed by a thermal imaging device. The method includes capturing a thermal image of a vehicle and displaying the thermal image within a first region of a display of the thermal imaging device. The method further includes displaying information related to servicing the vehicle within a second region of the display that is distinct from the first region. Other example methods performed by a thermal imaging device are also disclosed herein. | 2019-02-14 |
20190052831 | Methods and Systems for Thermal Image Display - An example method is performed by a thermal imaging device. The method includes capturing a thermal image of a vehicle and displaying the thermal image within a first region of a display of the thermal imaging device. The method further includes displaying information related to servicing the vehicle within a second region of the display that is distinct from the first region. Other example methods performed by a thermal imaging device are also disclosed herein. | 2019-02-14 |
20190052832 | VIDEO PROCESSING DEVICE, TRANSMITTING DEVICE, CONTROL PROGRAM, AND RECORDING MEDIUM - A technology for preventing a viewer's perception of brightness from changing significantly when content is switched is provided. A transmitting device ( | 2019-02-14 |
20190052833 | VIDEO DISPLAY DEVICE, TELEVISION RECEIVER, TRANSMITTING DEVICE, CONTROL PROGRAM, AND RECORDING MEDIUM - A technology for preventing a viewer's perception of brightness from changing significantly when content is switched is provided. A video display device ( | 2019-02-14 |
20190052834 | RECEPTION DEVICE AND BROADCASTING SYSTEM - When a video of a program is determined as a high dynamic range video, which has a luminance range wider than a standard luminance range, on the basis of service information of the program, a display information output unit outputs a luminance setting screen that has an adjustment point related to predetermined subject luminance to be specified, a luminance setting unit sets target luminance of the adjustment point on the basis of an operation, and a luminance adjustment unit adjusts luminance of the adjustment point and luminance of the video on the basis of luminance adjustment information indicating the target luminance. | 2019-02-14 |
20190052835 | BEAM FORMING FOR MICROPHONES ON SEPARATE FACES OF A CAMERA - A camera system capable of capturing images of an event in a dynamic environment includes two microphones configured to capture stereo audio of the event. The microphones are on orthogonal surfaces of the camera system. Because the microphones are on orthogonal surfaces of the camera system, the camera body can impact the spatial response of the two recorded audio channels differently, leading to degraded stereo recreation if standard beam forming techniques are used. The camera system includes tuned beam forming techniques to generate multi-channel audio that more accurately recreates the stereo audio by compensating for the shape of the camera system and the orientation of microphones on the camera system. The tuned beam forming techniques include optimizing a set of beam forming parameters, as a function of frequency, based on the true spatial response of the recorded audio signals. | 2019-02-14 |
20190052836 | IMAGE PROJECTION APPARATUS - An image projection apparatus includes an image signal inputter configured to input an image signal, a first light modulation element configured to modulate light from a light source, a second light modulation element configured to modulate light from the first light modulation element, an optical system configured to guide a projection image in which an image formed by the light modulated by the first light modulation element and an image formed by the light modulated by the second light modulation element are superimposed on each other to a projection optical system, a first driver configured to drive one of the first and second light modulation elements based on the image signal, and a second driver configured to drive the other of the first and second light modulation elements based on a luminance correction data irrelevant to the image signal. | 2019-02-14 |
20190052837 | VIDEO SIGNAL CONVERSION DEVICE, VIDEO SIGNAL CONVERSION METHOD, VIDEO SIGNAL CONVERSION SYSTEM, CONTROL PROGRAM, AND RECORDING MEDIUM - A video signal conversion device is realized in which even in a case where an HDR video signal following a first video format is converted into an HDR video signal following a second video format, supplementary information related to the HDR video signal following the original first video format is not lost. The video signal conversion device ( | 2019-02-14 |
20190052838 | SYSTEM AND METHOD FOR SHARING SENSED DATA BETWEEN REMOTE USERS - A method and a system for sharing in video images, captured by a video image capturing device mounted on a source user and having a wide field of view, with a destination user, are provided herein. The method may include: receiving video images and respective positions and orientations thereof, captured by the video image capturing device at the source location; receiving a request from a destination user equipment, to view video images captured at the source location, wherein the request includes a line of sight of the destination user, as derived by an orientation sensor of a destination user headset; mapping the request to a respective region of interest of the destination user based on the line of sight; cropping the video images based on the respective region of interest and further based on said respective positions and orientations; and transmitting the cropped video images to the destination user. | 2019-02-14 |
20190052839 | FACIAL GESTURE RECOGNITION AND VIDEO ANALYSIS TOOL - Embodiments disclosed herein may be directed to a video communication server. In some embodiments, the video communication server includes: at least one memory including instructions; and at least one processing device configured for executing the instructions, wherein the instructions cause the at least one processing device to perform the operations of: determining a time duration of a video communication connection between a first user of a first user device and a second user of a second user device; analyzing video content transmitted between the first user device and the second user device; determining at least one gesture of at least one of the first user and the second user based on analyzing the video content; and generating a compatibility score of the first user and the second user based at least in part on the determined time duration and the at least one determined gesture. | 2019-02-14 |
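The scoring step in 20190052839 above blends call duration with detected gestures. A minimal sketch of one such blend is below; the weights, the gesture value table, the duration cap, and the [0, 1] normalization are all illustrative assumptions, since the abstract does not specify the scoring function.

```python
def compatibility_score(duration_s, gestures, w_time=0.5, w_gesture=0.5,
                        max_duration=600.0, gesture_values=None):
    """Combine normalized call duration with an average gesture valence into
    a 0..1 compatibility score (illustrative formula)."""
    if gesture_values is None:
        gesture_values = {"smile": 1.0, "nod": 0.5, "frown": -1.0}
    t = min(duration_s / max_duration, 1.0)      # duration term in [0, 1]
    g = 0.0
    if gestures:
        g = sum(gesture_values.get(x, 0.0) for x in gestures) / len(gestures)
        g = max(0.0, min(1.0, (g + 1.0) / 2.0))  # map valence [-1, 1] -> [0, 1]
    return w_time * t + w_gesture * g
```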
20190052840 | METHODS AND SYSTEMS FOR MULTI-PANE VIDEO COMMUNICATIONS TO EXECUTE USER WORKFLOWS - Systems and methods are disclosed for generating a workflow and notifying a user of the workflow using a variety of communication channels. For example, in one or more embodiments, a system generates a workflow comprising a plurality of tasks and one or more display elements corresponding to the workflow tasks. Subsequently, the system sends a first notification to a user regarding the first task via a first communication channel (e.g., a mobile application) and sends a second notification via a second communication channel (e.g., email). If the system receives a user selection of one of the notifications from a client device associated with the user, the system can subsequently provide a first display element corresponding to the first task to the client device via the corresponding communication channel. | 2019-02-14 |
20190052841 | Display an Image During a Communication - An electronic device displays an image during a communication between two people. The image represents one of the people to the communication. The electronic device determines a location where to place the image and displays the image such that the image appears to exist at the location. | 2019-02-14 |
20190052842 | System and Method for Improved Obstacle Awareness in Using a V2X Communications System - A system and method are taught for collaborative vehicle-to-everything (V2X) communications to improve autonomous driving vehicle performance in a heterogeneous capability environment by sharing capabilities among different vehicles. In particular, the system and method are operative to receive an image from a proximate vehicle and to augment a display within the host vehicle by providing a view of objects within an area of obstructed view. | 2019-02-14 |
20190052843 | VEHICLE MONITOR SYSTEM - A vehicle monitor system includes: a detection unit that detects a vehicle located on the rear lateral side of a host vehicle; a rear lateral imaging unit that images a prescribed angle range on the rear lateral side of the host vehicle; a display unit that displays an image captured by the rear lateral imaging unit; and an image processor that extracts a first image in a range of a first angle with reference to a rear side of the host vehicle from the captured image, and displays the extracted first image on the display unit, wherein when the vehicle is detected by the detection unit, the image processor extracts the first image and a second image in a range of a second angle greater than the first angle with reference to the rear side from the captured image, and displays the first and second images on the display unit. | 2019-02-14 |
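The angle-based extraction in 20190052843 above can be sketched by mapping angular ranges, measured about the rear-side reference at the image centre, to pixel column ranges of the captured frame. The linear angle-to-pixel model and the function names are simplifying assumptions (a real wide-angle camera would need lens-distortion correction).

```python
def extract_views(frame_width, fov_deg, first_deg, second_deg):
    """Return pixel column ranges for the first (narrow) and second (wide)
    angular views, both centred on the rear-side reference direction."""
    px_per_deg = frame_width / fov_deg
    centre = frame_width / 2
    def cols(angle_deg):
        half = angle_deg / 2 * px_per_deg
        return (int(centre - half), int(centre + half))
    return {"first": cols(first_deg), "second": cols(second_deg)}
```

When a vehicle is detected on the rear lateral side, the wider second range would be extracted in addition to the first.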
20190052844 | Rotating LIDAR with Co-Aligned Imager - Example implementations are provided for an arrangement of co-aligned rotating sensors. One example device includes a light detection and ranging (LIDAR) transmitter that emits light pulses toward a scene according to a pointing direction of the device. The device also includes a LIDAR receiver that detects reflections of the emitted light pulses reflecting from the scene. The device also includes an image sensor that captures an image of the scene based on at least external light originating from one or more external light sources. The device also includes a platform that supports the LIDAR transmitter, the LIDAR receiver, and the image sensor in a particular relative arrangement. The device also includes an actuator that rotates the platform about an axis to adjust the pointing direction of the device. | 2019-02-14 |
20190052845 | METHOD AND APPARATUS OF SECURED INTERACTIVE REMOTE MAINTENANCE ASSIST - A system and method for using the system comprising a head mounted device (HMD), a remote maintenance server (RMS), and a control section operable to identify the system under test (SUT) using an image recognition function, to identify a plurality of subsystems (PLoS) within the SUT in a data library, to create three dimensional models of the PLoS and displaying the same on a visual interface of the HMD using an augmented reality function, to connect to the RMS using an encryption algorithm via streaming video or images sent to the RMS, to collect SUT data and external (to the SUT) sensor data, to conduct a prognostics and/or health, maintenance, and/or management (HMM) service on the collected data to determine system health and projected health of the SUT and/or PLoS, to authenticate remote user access to the RMS, to update the data library, and to insert a plurality of (HMM) designators on the visual interface. | 2019-02-14 |
20190052846 | IMAGING DEVICE - An imaging device includes a plurality of first pixels that includes pixels of a plurality of color components and generates a first signal from incident light, a plurality of second pixels that generates a second signal from light that has transmitted at least a part of the first pixels, and a signal generation unit that generates a signal obtained by combining the first signal and the second signal. | 2019-02-14 |
20190052847 | ENDOSCOPE AND ENDOSCOPE SYSTEM - A four color separation endoscope prism includes a four color separation prism having a first color separation prism, a second color separation prism, a third color separation prism, and a fourth color separation prism which respectively separate light incident from an affected area into blue, red, and green color components and an infrared (IR) component. The first color separation prism, the second color separation prism, the third color separation prism, and the fourth color separation prism are sequentially disposed from an object side when receiving the light incident from the affected area. | 2019-02-14 |
20190052848 | PROJECTION SYSTEM - A projection system includes an invisible light projector, a background member, an imaging unit, an image generator, and a visible light projector. The invisible light projector projects a predetermined invisible light image onto the object via invisible light. The background member is disposed behind the object in a direction of the invisible light emitted from the invisible light projector. The imaging unit captures an image of the invisible light projected from the invisible light projector. The image generator measures a shape of the object based on the image captured by the imaging unit to generate image data showing image content for projection onto the object in accordance with the measured shape. The visible light projector projects the image content shown by the image data onto the object via visible light. The background member has a light shielding surface that does not diffusely reflect the invisible light incident thereon. | 2019-02-14 |
20190052849 | CONTROLLED APPARATUS AND CONTROL METHOD THEREOF - A controlled apparatus that operates as a first controlled apparatus, comprises a communication unit configured to communicate with a second controlled apparatus, a detection unit configured to detect a second controlled apparatus having a predetermined relationship with the first controlled apparatus via the communication unit, and a control unit configured to generate a first web page capable of controlling the first controlled apparatus if the detection unit has not detected the second controlled apparatus having the predetermined relationship, and generate a second web page capable of collectively controlling the first controlled apparatus and the second controlled apparatus, if the detection unit has detected the second controlled apparatus having the predetermined relationship. | 2019-02-14 |
20190052850 | PROJECTION APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - A projection apparatus that projects a projection image includes a first panel, a second panel, a projection optical system that projects light that has passed through the first panel and the second panel, an information acquisition unit that acquires light reduction information of a region which is a part of the projection image and overlaps with an image projected by another projection apparatus, and a panel control unit that controls the first panel based on data of the projection image and controls the second panel based on the light reduction information. | 2019-02-14 |
20190052851 | System and Method for Recalibrating a Projector System - The invention is directed to a system and method for recalibrating a projector system. The projector may be calibrated using one or more cameras arranged near or adjacent to a projector of a projector system. The one or more cameras may be calibrated once and made mechanically stable so that they do not move relative to each other. During operation of the projector system, the projector may project one or more specific patterns onto a work surface, where the one or more cameras capture each of the projected patterns. All the captured images of the patterns by the one or more cameras may be used to determine projector calibration parameters in order to recalibrate the projector. | 2019-02-14 |
20190052852 | UNMANNED AERIAL VEHICLE SURFACE PROJECTION - Herein is disclosed an unmanned aerial vehicle comprising a memory, configured to store a projection image; an image sensor, configured to detect image data of a projection surface within a vicinity of the unmanned aerial vehicle; one or more processors, configured to determine a depth information for a plurality of points in the detected image data; generate a transformed projection image from a projection image by modifying the projection image to compensate for unevenness in the projection surface according to the determined depth information; and send the transformed projection image to an image projector; and an image projector, configured to project the transformed projection image onto the projection surface. | 2019-02-14 |
20190052853 | DISPLAY CONTROL DEVICE, DISPLAY APPARATUS, TELEVISION RECEIVER, CONTROL METHOD FOR DISPLAY CONTROL DEVICE, AND RECORDING MEDIUM - Color noise in a low luminance region of an image is reduced without reducing an information amount of the image. A noise reduction apparatus is described. | 2019-02-14 |
20190052854 | IMAGE PROCESSING APPARATUS - An image processing apparatus determines, when a freeze signal is input, an image of a freeze target, or determines, when no freeze signal is input, a latest image as an image to be displayed; performs color-balance adjustments using first and second parameters, based on the imaging signal of the determined image, thereby generating first and second imaging signals, respectively; generates a display purpose imaging signal based on the generated first imaging signal; detects signals of plural color components included in the second imaging signal; calculates, based on the detected signals, a color-balance parameter for the color-balance adjustment; and sets, when no freeze signal is input, a latest calculated color-balance parameter as the first and second parameters, or sets, when the freeze signal is input, a color-balance parameter corresponding to the image of the freeze target as the first parameter and the latest color-balance parameter as the second parameter. | 2019-02-14 |
20190052855 | SYSTEM AND METHOD FOR DETECTING LIGHT SOURCES IN A MULTI-ILLUMINATED ENVIRONMENT USING A COMPOSITE RGB-IR SENSOR - A system and a method for detecting light sources in a multi-illuminated environment using a composite red-green-blue-infrared (RGB-IR) sensor is provided. The method comprises detecting, by the composite RGB-IR sensor, a multi-illuminant area using a visible raw image and a near-infrared (NIR) raw image of a composite RGB-IR image, dividing each of the visible raw image and the NIR raw image into a plurality of grid samples, extracting a plurality of illuminant features based on a green/NIR pixel ratio and a blue/NIR pixel ratio, estimating at least one illuminant feature for each grid sample by passing each grid sample through a convolution neural network (CNN) module using the extracted plurality of illuminant features, and smoothing each grid sample based on the estimated at least one illuminant feature. | 2019-02-14 |
20190052856 | TRANSMITTER, TRANSMISSION METHOD, RECEIVER, AND RECEPTION METHOD - An association with a system timing at the time of transmission is secured without changing a display timing in text information of a subtitle, and a reception side displays the subtitle at an appropriate timing. | 2019-02-14 |
20190052857 | USER INTERFACE FOR ADJUSTMENT OF STEREOSCOPIC IMAGE PARAMETERS - According to an aspect of an embodiment, a method may include receiving, at a first interface element of a user interface, a first user input regarding a degree of stereoscopic depth rendered in a stereoscopic image. The method may also include adjusting the stereoscopic depth based on the first user input. Additionally, the method may include receiving, at a second interface element of the user interface, a second user input regarding adjustment of a z-plane position of the stereoscopic image and adjusting the z-plane position based on the second user input. The method may further include generating the stereoscopic image based on the adjustment of the stereoscopic depth and the adjustment of the z-plane position. | 2019-02-14 |
20190052858 | METHOD AND APPARATUS FOR PROCESSING 360-DEGREE IMAGE - A communication technique for merging, with an IoT technology, a 5G communication system for supporting a data transmission rate higher than that of a 4G system is provided. The communication technique can be applied to an intelligent service (for example, smart home, smart building, smart city, smart car or connected car, health care, digital education, retail business, and security and safety-related services, and the like) on the basis of a 5G communication technology and an IoT-related technology. A method for processing a 360-degree image is provided. The method includes determining a three-dimensional (3D) model for mapping a 360-degree image; determining a partition size for the 360-degree image; determining a rotational angle for each of the x, y, and z axes of the 360-degree image; determining an interpolation method to be applied when mapping the 360-degree image to a two-dimensional (2D) image; and converting the 360-degree image into the 2D image. | 2019-02-14 |
20190052859 | CODING OF 360 DEGREE VIDEOS USING REGION ADAPTIVE SMOOTHING - A video processing unit and method for region adaptive smoothing are provided. The video processing unit includes a memory and one or more processors. The one or more processors are operably connected to the memory and configured to stitch together a plurality of video frames into a plurality of equirectangular mapped frames of a video. The one or more processors are configured to define a top region and a bottom region for each of the equirectangular mapped frames of the video; perform a smoothing process on the top region and the bottom region for each of the equirectangular mapped frames of the video; and encode the smoothed equirectangular mapped frames of the video. | 2019-02-14 |
20190052860 | Multi-Image Color-refinement with Application to Disparity Estimation - Systems, methods, and computer-readable media to improve multi-image color-refinement operations are disclosed for refining color differences between images in a multi-image camera system with application to disparity estimation. Recognizing that corresponding pixels between two (or more) images of a scene should have not only the same spatial location, but the same color, can be used to improve the spatial alignment of two (or more) such images and the generation of improved disparity maps. After making an initial disparity estimation and using it to align the images, colors in one image may be refined toward that of another image. (The image being color corrected may be either the reference image or the image(s) being registered with the reference image.) Repeating this process in an iterative manner allows improved spatial alignment between the images and the generation of superior disparity maps between the two (or more) images. | 2019-02-14 |
20190052861 | IMAGING APPARATUS AND IMAGE SENSOR ARRAY - An imaging apparatus including an imaging lens, and an image sensor array of first and second image sensor units, wherein a single first image sensor unit includes a single first microlens and a plurality of image sensors, a single second image sensor unit includes a single second microlens and a single image sensor, light passing through the imaging lens and reaching each first image sensor unit passes through the first microlens and forms an image on the image sensors constituting the first image sensor unit, light passing through the imaging lens and reaching each second image sensor unit passes through the second microlens and forms an image on the image sensor constituting the second image sensor unit, an inter-unit light shielding layer is formed between the image sensor units, and a light shielding layer is not formed between the image sensors constituting the first image sensor unit. | 2019-02-14 |
20190052862 | TARGETS, FIXTURES, AND WORKFLOWS FOR CALIBRATING AN ENDOSCOPIC CAMERA - The present disclosure relates to calibration assemblies and methods for use with an imaging system, such as an endoscopic imaging system. A calibration assembly includes: an interface for constraining engagement with an endoscopic imaging system; a target coupled with the interface so as to be within the field of view of the imaging system, the target including multiple markers having calibration features that include identification features; and a processor configured to identify, from first and second images obtained at first and second relative spatial arrangements between the imaging system and the target, respectively, at least some of the markers from the identification features, and to use the identified markers and calibration feature positions within the images to generate calibration data. | 2019-02-14 |
20190052863 | Three-Dimensional (3D) Image System and Electronic Device - The present application provides a three-dimensional (3D) image system, comprising a structural light module, configured to emit a structural light, wherein the structural light module comprises a first light-emitting unit, the first light-emitting unit receives a first pulse signal and emits a first light according to the first pulse signal, a duty cycle of the first pulse signal is less than a specific value, an emission power of the first light-emitting unit is greater than a specific power, and the first light has a first wavelength; and a light-sensing pixel array, configured to receive a reflected light corresponding to the structural light. | 2019-02-14 |
20190052864 | DISPLAY METHOD AND SYSTEM FOR CONVERTING TWO-DIMENSIONAL IMAGE INTO MULTI-VIEWPOINT IMAGE - A display method and system for converting a 2D image into a multi-viewpoint image are disclosed, comprising: acquiring and tagging a target object within a 2D image; calculating a depth value according to a frequency component; generating a layered image before viewing from different preset viewpoints; tagging a viewpoint image; estimating before filling a pixel in a blank area of a virtual viewpoint image, based on a depth value difference of the layered image; generating and saving sequentially a single-viewpoint image output, before detecting and filling a blank area in it; detecting before smoothing a sudden change area; and assembling to form a synthesized image, which is processed and sent to a naked-eye 3D display screen for displaying. The method converts a 2D image to a multi-viewpoint image, provides naked-eye 3D display, reduces image distortion, and is easy and convenient to use at low cost. | 2019-02-14 |
20190052865 | DYNAMIC BASELINE DEPTH IMAGING USING MULTIPLE DRONES - Systems and methods may include a drone or multiple drones to capture depth information, which may be used to create a stereoscopic map. The drone may capture information about two trailing drones, including a baseline distance between the two trailing drones. Additional information may be captured, such as camera angle information for one or both of the two trailing drones. The drone may receive images from the two trailing drones. The images may be used (on the drone or on another device, such as a base station) to create a stereoscopic image using the baseline distance. The stereoscopic image may include determined depth information for objects within the stereoscopic image, for example based on the baseline distance between the two trailing drones and the camera angle information. | 2019-02-14 |
20190052866 | LIGHT FIELD DISPLAY APPARATUS AND METHOD FOR CALIBRATING DISPLAY IMAGE THEREOF - A light field display apparatus and a method for calibrating a display image thereof are provided. The method for calibrating a display image includes: dividing a display image to generate a plurality of block images; displaying the block images by a display, and passing the block images through a light field device to generate a combination image; providing an image capturer to capture the combination image, and comparing the display image and the combination image to generate error information; receiving user parameters; and adjusting at least one of the block images in the display image according to the user parameters and the error information. Without providing a mechanical adjustment device, the light field display apparatus of the invention can compensate for display errors that may be caused by the device-internal parameters and the user parameters, which not only enhances display quality, but also enhances convenience in use. | 2019-02-14 |
20190052867 | STEREOSCOPIC IMAGE APPRECIATION EYEGLASSES AND STEREOSCOPIC IMAGE DISPLAY DEVICE - An infrared polarizing filter is attached to an infrared synchronization signal radiator of a stereoscopic image display device which alternately displays right and left images by time-division with polarized light in one direction to radiate the polarized-light infrared synchronization signal. The problem with the occurrence of crosstalk is solved. Stereoscopic image appreciation eyeglasses have polarizing plates, visual field opening/closing liquid crystal cells and tilt correcting liquid crystal cells. The synchronization signal is received by a receiver mounted on an eyeglass frame. The tilt correcting liquid crystal cells are adjusted based on the eyeglass frame tilt angle detected. | 2019-02-14 |
20190052868 | WIDE VIEWING ANGLE VIDEO PROCESSING SYSTEM, WIDE VIEWING ANGLE VIDEO TRANSMITTING AND REPRODUCING METHOD, AND COMPUTER PROGRAM THEREFOR - A wide viewing angle video processing system comprises: a transmitting device configured to extract a plurality of background images and one or more foreground videos from an original wide viewing angle video, and to transmit the extracted plurality of background images and one or more foreground videos; and a display device configured to receive the plurality of background images and the one or more foreground videos from the transmitting device, to generate a wide viewing angle background image by combining the plurality of background images, and to generate a reconstructed wide viewing angle video in which the one or more foreground videos are composited on the wide viewing angle background image, and to display the reconstructed wide viewing angle video. The wide viewing angle video processing system can implement a wide viewing angle video having super resolution, which overcomes decoding limitations in conventional virtual reality devices. | 2019-02-14 |
20190052869 | Using a Sphere to Reorient a Location of a User in a Three-Dimensional Virtual Reality Video - A method includes generating graphical data for displaying a virtual reality user interface that includes a three-dimensional (3D) virtual reality video that is illustrated as being inside a sphere. The method further includes determining, based on movement of a peripheral device, that a user selects the sphere in the virtual reality user interface and the user moves the sphere on a head of the user. The method further includes displaying the 3D virtual reality video in the sphere surrounding the head of the user such that the user views a 360 degree environment corresponding to the 3D virtual reality video. | 2019-02-14 |
20190052870 | GENERATING A THREE-DIMENSIONAL PREVIEW FROM A TWO-DIMENSIONAL SELECTABLE ICON OF A THREE-DIMENSIONAL REALITY VIDEO - A method includes generating a three-dimensional (3D) virtual reality video by stitching together image frames of an environment captured by a camera array. The method further includes generating graphical data for displaying a virtual reality user interface that includes the 3D virtual reality video. The method further includes determining, based on movement of a peripheral device, that a user moves a hand to be located in front of the 3D virtual reality video in the user interface and grabs the 3D virtual reality video from a first location to inside an object. The method further includes displaying the object with a preview of the 3D virtual reality video inside the object. | 2019-02-14 |
20190052871 | STEREOSCOPIC DISPLAY OF OBJECTS - Stereoscopic display technologies are provided. A computing device generates a stereoscopic display of an object by coordinating a first image and a second image. To reduce discomfort or to reduce diplopic content, the computing device may adjust at least one display property of the first image and/or the second image depending on one or more factors. The factors may include a time period associated with the display of the object, the vergence distance to the object, the distance to the focal plane of the display, contextual data interpreted from the images and/or any combination of these and other factors. Adjustments to the display properties can include a modification of one or more contrast properties and/or other modifications to the images. The adjustment to the display properties may be applied with varying levels of intensity and/or be applied at different times depending on one or more factors and/or contextual information. | 2019-02-14 |
20190052872 | SYSTEMS AND METHODS FOR OPTICAL CORRECTION OF DISPLAY DEVICES - Disclosed are systems and methods of optical correction for pixel evaluation and correction for active matrix organic light emitting diode (AMOLED) and other emissive displays. Optical correction for correcting non-homogeneity of a display panel uses sparse display test patterns in conjunction with a defocused camera as the measurement device to avoid aliasing (moiré) of the pixels of the display in the captured images. | 2019-02-14 |
20190052873 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE DISPLAY SYSTEM - An image processing apparatus includes a content interface, a communication interface and a controller configured to receive content from a content source through the content interface, encode image frames comprised in the content according to an encoding mode, and transmit the encoded image frames to a display apparatus through the communication interface. The controller encodes the image frames according to a first encoding option in a first encoding mode and encodes the image frames according to a second encoding option having a different encoding time delay from that of the first encoding option in a second encoding mode. | 2019-02-14 |
20190052874 | METHOD AND SYSTEM FOR FAST MODE DECISION FOR HIGH EFFICIENCY VIDEO CODING - Methods and systems for encoding video data are provided. Evolving standards for video encoding such as the High Efficiency Video Coding (HEVC) standard require a significant increase in computational complexity for both inter and intra encoding. The method includes calculating an approximate cost of each of a first set of prediction modes, then selecting a second set of prediction modes from the first set of prediction modes based on probability distributions associated with each of the modes in the first set of prediction modes, the second set having substantially fewer prediction modes than the first. The number of candidate prediction modes prior to rate distortion optimization (RDO) is thus reduced. Experimental results show that the proposed method provides substantial time reduction and negligible quality loss as compared to the HEVC reference. | 2019-02-14 |
20190052875 | INTRA PREDICTION METHOD AND DEVICE IN VIDEO CODING SYSTEM - An intra prediction method includes the steps of: acquiring intra prediction mode information from a bitstream; deriving neighboring samples of a current block; determining an intra prediction mode for the current block on the basis of the intra prediction mode information; deriving a prediction sample of the current block on the basis of the intra prediction mode and the neighboring samples; determining filtering reference samples for the prediction sample on the basis of the intra prediction mode; and deriving a filtered prediction sample by applying filtering to the prediction sample on the basis of the filtering reference samples. According to the present invention, a prediction sample can be adaptively filtered according to the intra prediction mode, and intra prediction performance can be improved. | 2019-02-14 |
20190052876 | IMAGE ENCODING METHOD AND APPARATUS, AND IMAGE DECODING METHOD AND APPARATUS - Provided is a method of decoding motion information characterized in that information for determining motion-related information includes spatial information and time information, wherein the spatial information indicates a direction of spatial prediction candidates used for sub-units from among spatial prediction candidates located on a left side and an upper side of a current prediction unit, and the time information indicates a reference prediction unit of a previous picture used for prediction of the current prediction unit. Further, an encoding apparatus or a decoding apparatus capable of performing the above described encoding or decoding method may be provided. | 2019-02-14 |
20190052877 | ADAPTIVE IN-LOOP FILTERING FOR VIDEO CODING - Techniques related to coding video using adaptive in-loop filtering enablement are discussed. Such techniques may include determining whether or not to perform in-loop filtering based on evaluating a maximum coding bit limit of a picture of the video, a quantization parameter of the picture, and a coding structure of the video. | 2019-02-14 |
20190052878 | SYSTEMS AND METHODS FOR TRANSFORM COEFFICIENT CODING - A video coding device may be configured to receive a level value, estimate a characteristic of a reconstructed video block associated with the level value, adjust a quantization scale factor based on the estimated characteristic, and perform a quantization process on the level value based on the adjusted quantization scale factor. | 2019-02-14 |
20190052879 | METHOD OF DERIVING MOTION INFORMATION - A method of decoding video data using a merge mode can include constructing a merge list using available spatial and temporal merge candidates; determining a merge candidate on the merge list corresponding to a merge index as motion information of a current prediction unit; generating a predicted block of the current prediction unit using the motion information; generating a transformed block by inverse-quantizing a block of quantized coefficients using a quantization parameter; generating a residual block by inverse-transforming the transformed block; and generating a reconstructed block using the predicted block and the residual block, in which the merge list contains a predetermined number of merge candidates among the available spatial and temporal merge candidates, the quantization parameter is derived per a quantization unit, a minimum size of the quantization unit is adjusted per picture, and the quantization parameter is derived using a differential quantization parameter and a quantization parameter predictor. | 2019-02-14 |
20190052880 | DYNAMIC ALLOCATION OF CPU CYCLES VIS-A-VIS VIRTUAL MACHINES IN VIDEO STREAM PROCESSING - Approaches for dynamically allocating CPU cycles for use in processing a video stream. Video complexity information for two or more digital video streams actively being processed by one or more video encoders is determined at periodic intervals. Video complexity information describes the complexity of digital video carried by the digital video streams across a bounded number of consecutive digital frames which includes digital frames not yet processed by the one or more video encoders. A determination is made as to whether a number of CPU cycles allocated for processing a particular digital video stream should be adjusted based on the determined video complexity information. The number of CPU cycles allocated for processing the particular digital video stream may be dynamically adjusted by changing an amount of CPU cycles allocated to a virtual machine in which the stream is processed or by processing the stream in a different virtual machine. | 2019-02-14 |
20190052881 | A METHOD AND DEVICE FOR INTRA-PREDICTIVE ENCODING/DECODING A CODING UNIT COMPRISING PICTURE DATA, SAID INTRA-PREDICTIVE ENCODING DEPENDING ON A PREDICTION TREE AND A TRANSFORM TREE - The present principles relate to a method for intra-predictive encoding a coding unit comprising picture data, said intra-predictive encoding depending on a prediction tree and a transform tree, characterized in that the method further comprises: obtaining said prediction tree by spatially partitioning the coding unit according to a non-square partition type; determining said transform tree from said coding unit in order that each of its leaves is embedded into a unique unit of said obtained prediction tree; and signaling in a signal the size of the leaves of said transform tree and said non-square partition type. | 2019-02-14 |
20190052882 | VIDEO ENCODING APPARATUS AND VIDEO ENCODING METHOD - A first encoder divides an encoding target image included in a video into a plurality of blocks, and encodes each of the plurality of blocks by performing a prediction coding by use of filtering processing. A second encoder encodes a parameter that represents a direction of a line of pixels in the filtering processing. Here, when an encoding target block that is one of the plurality of blocks has a rectangular shape, the second encoder changes, according to a direction of a long side of the rectangular shape, a process of encoding the parameter. | 2019-02-14 |
20190052883 | ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD - An encoder includes processing circuitry, a block memory, and a frame memory. The processing circuitry defines at least one parameter for each of plural types of segment_ids, splits an image into blocks, assigns, to each of the blocks, segment_id according to a type of the block, among the plural types of segment_ids, and sequentially encodes the blocks. In encoding the blocks, the processing circuitry identifies segment_id of a current block to be encoded, and encodes the current block using the at least one parameter defined for identified segment_id. The at least one parameter includes seg_context_idx for identifying probability information associated with context used in context-based adaptive binary arithmetic coding (CABAC). | 2019-02-14 |
20190052884 | VIDEO DECODER WITH ENHANCED CABAC DECODING - A decoder receives a bitstream containing quantized coefficients representative of blocks of video representative of a plurality of pixels and decodes the bitstream using context adaptive binary arithmetic coding. The context adaptive binary arithmetic coding decodes the current syntax element using a first mode if the current syntax element is intra-coded and if selecting between a first set of probable modes and a second set of probable modes, where the first set of probable modes are more likely than the second set of probable modes. The context adaptive binary arithmetic coding decodes the current syntax element using a second mode if the current syntax element is intra-coded and if selecting among one of the second set of probable modes. | 2019-02-14 |
20190052885 | IMAGE PROCESSING APPARATUS AND IMAGE PROCESSING METHOD - The present disclosure relates to an image processing apparatus and an image processing method that can improve performance at the time of performing special reproduction. A subsampling information determination section is described. | 2019-02-14 |
20190052886 | Intra Merge Prediction - A combined prediction mode for encoding or decoding a pixel block of a video picture is provided. When it is determined that the combined prediction mode is used, a video codec generates an intra predictor for the current block based on a selected intra-prediction mode and a merge-indexed predictor for the current block based on a selected merge candidate from a merge candidates list. The video codec then generates a final predictor for the current block based on the intra predictor and the merge-indexed predictor. The final predictor is then used to encode or decode the current block. | 2019-02-14 |
20190052887 | METHOD FOR ENCODING RAW HIGH FRAME RATE VIDEO VIA AN EXISTING HD VIDEO ARCHITECTURE - A system for transporting fast frame rate video data from a high frame rate image sensor mosaics and spreads the fast frame rate video data into 1920×1080p30 video frames for transport via an existing standard video architecture. Packing information, spreading information, and unique ID/timestamps for each frame are encoded in metadata and inserted in the ancillary metadata space of the 1080p30 video frames. A robust encoding scheme generates the metadata and ensures that the transported video can be reassembled into its original fast frame rate form after being spread over multiple channels. | 2019-02-14 |
20190052888 | METHODS AND APPARATUS FOR INTRA CODING A BLOCK HAVING PIXELS ASSIGNED TO GROUPS - Methods and apparatus are provided for intra coding a block having pixels assigned to groups. An apparatus includes a video encoder for encoding a block in a picture using intra prediction by dividing pixels within the block into at least a first group and a second group and encoding the pixels in the first group prior to encoding the pixels in the second group. A prediction for at least one of the pixels within the second group is obtained by evaluating the pixels within the first group and the second group. | 2019-02-14 |
20190052889 | TRANSMISSION DEVICE, TRANSMISSION METHOD, RECEPTION DEVICE, AND RECEPTION METHOD - For example, 120 Hz display can be favorably performed on a reception side even in the case of encoding and transmitting moving image data of 24 Hz at the frame rate of 60 Hz. | 2019-02-14 |
20190052890 | MULTI-LEVEL SIGNIFICANCE MAP SCANNING - Methods of encoding and decoding for video data are described in which multi-level significance maps are used in the encoding and decoding processes. The significant-coefficient flags that form the significance map are grouped into contiguous groups, and a significant-coefficient-group flag signifies for each group whether that group contains no non-zero significant-coefficient flags. A multi-level scan order may be used in which significant-coefficient flags are scanned group-by-group. The group scan order specifies the order in which the groups are processed, and the scan order specifies the order in which individual significant-coefficient flags within the group are processed. The bitstream may interleave the significant-coefficient-group flags and their corresponding significant-coefficient flags, if any. | 2019-02-14 |
20190052891 | SELF-SIMILAR REFERENCE MASKS FOR PERSISTENCY IN A VIDEO STREAM - In one embodiment, a method including dividing a reference mask into a plurality of reference mask divisions, determining a plurality of motion vectors respectively associated with a plurality of slice divisions, wherein the plurality of reference mask divisions respectively correspond to the plurality of slice divisions, modifying a blurring kernel in accordance with the plurality of motion vectors, yielding a plurality of modified blurring kernels that are respectively associated with the plurality of slice divisions, and performing at least one action to yield an altered reference mask, including for the plurality of reference mask divisions and the plurality of modified blurring kernels: convolving a reference mask division with a weighted function of at least a modified blurring kernel associated with a slice division, of the plurality of slice divisions, to which the reference mask division corresponds. | 2019-02-14 |
20190052892 | HIGH DYNAMIC RANGE CODECS - A method for encoding high dynamic range (HDR) images involves providing a lower dynamic range (LDR) image, generating a prediction function for estimating the values for pixels in the HDR image based on the values of corresponding pixels in the LDR image, and obtaining a residual frame based on differences between the pixel values of the HDR image and estimated pixel values. The LDR image, prediction function and residual frame can all be encoded in data from which either the LDR image of HDR image can be recreated. | 2019-02-14 |
20190052893 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - A difference detecting unit is described. | 2019-02-14 |
20190052894 | METHOD FOR GENERATING PREDICTION BLOCK IN AMVP MODE - A device configured to encode video data is discussed. The device includes a generator configured to generate a prediction block of a current prediction unit using a reference index and a motion vector of the current prediction unit, and a residual block using a difference between the current prediction unit and the prediction block; a transformer configured to transform the residual block to generate a transform block; a quantizer configured to quantize coefficients of the transform block to generate a quantization block using a quantization parameter. Further, the quantizer generates the quantization block by selecting two effective quantization parameters that are available and exist among left, upper, and previous quantization parameters according to an order of priority levels set for the left, upper, and previous quantization parameters and using an average of the two effective quantization parameters; and an entropy-coder configured to entropy-code the quantization block using a scan pattern. | 2019-02-14 |
20190052895 | EFFICIENT ROUNDING FOR DEBLOCKING - The present disclosure relates to deblocking filtering which is applicable to smoothing the block boundaries in an image or video coding and decoding. In particular, the deblocking filtering is either strong or weak, wherein the clipping is performed differently in the strong filtering and the weak filtering. | 2019-02-14 |
20190052896 | INTRA BLOCK COPY (INTRABC) COST ESTIMATION - A method for encoding video data is provided that includes determining whether or not a parent coding unit of a coding unit of the video data was predicted in intra-prediction block copy (IntraBC) mode and, when it is determined that the parent coding unit was not predicted in IntraBC mode: computing activity of the coding unit, determining an IntraBC coding cost of the coding unit by computing the IntraBC coding cost of the coding unit using a two dimensional (2D) search when the activity of the coding unit is not less than an activity threshold, and computing the IntraBC coding cost of the coding unit using a one dimensional (1D) search when the activity of the coding unit is less than the activity threshold, using the IntraBC coding cost to select an encoding mode from one of a plurality of encoding modes, and encoding the coding unit using the selected encoding mode. | 2019-02-14 |
20190052897 | COMPOUND MOTION-COMPENSATED PREDICTION - A prediction scheme is selected for encoding or decoding a video block. A first compound motion block can be determined by weighting distances from a first reference frame to the video frame and from a second reference frame to the video frame using one or more quantized weighting coefficients. A second compound motion block can be determined based on an average of pixel values from a video block of the first reference frame and pixel values from a video block of the second reference frame. One of the first compound motion block or the second compound motion block is selected and used to generate a prediction block. Alternatively, data encoded to a bitstream including the video frame can be used to determine which compound motion block to use to generate the prediction block. The current block of the video frame is then encoded or decoded using the prediction block. | 2019-02-14 |
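The two compound-prediction candidates described above can be sketched as follows: one block weighted by the quantized temporal distances to the two reference frames, and one that is a plain average. The 4-bit weight quantization and all variable names are assumptions for illustration, not taken from the patent.

```python
# Sketch of the two compound motion block candidates.

def distance_weighted_block(block1, block2, d1, d2):
    """Weight each reference block inversely to its temporal distance
    (d1, d2) from the current frame, with weights quantized to /16."""
    total = d1 + d2
    w1 = round((d2 / total) * 16)        # quantized weighting coefficient
    w2 = 16 - w1
    return [(w1 * p1 + w2 * p2 + 8) >> 4 for p1, p2 in zip(block1, block2)]

def averaged_block(block1, block2):
    """Simple per-pixel average of the two reference blocks."""
    return [(p1 + p2 + 1) >> 1 for p1, p2 in zip(block1, block2)]
```

The encoder would pick whichever candidate gives the better prediction (or, per the alternative in the abstract, the choice is signalled in the bitstream for the decoder to follow).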
20190052898 | ENHANCED INTRA-PREDICTION CODING USING PLANAR REPRESENTATIONS - The present invention provides low complexity planar mode coding in which a value of a bottom-right pixel in a prediction block is calculated from a value of at least one pixel in at least one of an array of horizontal boundary pixels and an array of vertical boundary pixels. Linear and bi-linear interpolations are performed on the value of the bottom-right pixel and the values of at least some of the horizontal and vertical boundary pixels to derive the values of the remaining pixels in the prediction block. A residual between the prediction block and an original block is signaled to a decoder. | 2019-02-14 |
20190052899 | ENHANCED INTRA-PREDICTION CODING USING PLANAR REPRESENTATIONS - The present invention provides low complexity planar mode coding in which a value of a bottom-right pixel in a prediction block is calculated from a value of at least one pixel in at least one of an array of horizontal boundary pixels and an array of vertical boundary pixels. Linear and bi-linear interpolations are performed on the value of the bottom-right pixel and the values of at least some of the horizontal and vertical boundary pixels to derive the values of the remaining pixels in the prediction block. A residual between the prediction block and an original block is signaled to a decoder. | 2019-02-14 |
20190052900 | ENHANCED INTRA-PREDICTION CODING USING PLANAR REPRESENTATIONS - The present invention provides low complexity planar mode coding in which a value of a bottom-right pixel in a prediction block is calculated from a value of at least one pixel in at least one of an array of horizontal boundary pixels and an array of vertical boundary pixels. Linear and bi-linear interpolations are performed on the value of the bottom-right pixel and the values of at least some of the horizontal and vertical boundary pixels to derive the values of the remaining pixels in the prediction block. A residual between the prediction block and an original block is signaled to a decoder. | 2019-02-14 |
20190052901 | ENHANCED INTRA-PREDICTION CODING USING PLANAR REPRESENTATIONS - The present invention provides low complexity planar mode coding in which a value of a bottom-right pixel in a prediction block is calculated from a value of at least one pixel in at least one of an array of horizontal boundary pixels and an array of vertical boundary pixels. Linear and bi-linear interpolations are performed on the value of the bottom-right pixel and the values of at least some of the horizontal and vertical boundary pixels to derive the values of the remaining pixels in the prediction block. A residual between the prediction block and an original block is signaled to a decoder. | 2019-02-14 |
20190052902 | ENHANCED INTRA-PREDICTION CODING USING PLANAR REPRESENTATIONS - The present invention provides low complexity planar mode coding in which a value of a bottom-right pixel in a prediction block is calculated from a value of at least one pixel in at least one of an array of horizontal boundary pixels and an array of vertical boundary pixels. Linear and bi-linear interpolations are performed on the value of the bottom-right pixel and the values of at least some of the horizontal and vertical boundary pixels to derive the values of the remaining pixels in the prediction block. A residual between the prediction block and an original block is signaled to a decoder. | 2019-02-14 |
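The planar-mode construction that the family of abstracts above describes can be sketched roughly as below: derive a bottom-right pixel from the boundary arrays, linearly interpolate the right column and bottom row toward it, then bilinearly interpolate the interior. The exact derivation in the patents may differ; this sketch assumes a simple average of the two far boundary pixels.

```python
# Rough sketch of planar intra prediction for a square n x n block.
# top: horizontal boundary pixels (row above the block), length n.
# left: vertical boundary pixels (column left of the block), length n.

def planar_predict(top, left):
    n = len(top)
    br = (top[-1] + left[-1] + 1) >> 1                 # bottom-right pixel
    # linear interpolation along the right column and bottom row
    right = [((n - 1 - y) * top[-1] + (y + 1) * br) // n for y in range(n)]
    bottom = [((n - 1 - x) * left[-1] + (x + 1) * br) // n for x in range(n)]
    block = []
    for y in range(n):
        row = []
        for x in range(n):
            # bi-linear combination of horizontal and vertical interpolations
            horiz = ((n - 1 - x) * left[y] + (x + 1) * right[y]) // n
            vert = ((n - 1 - y) * top[x] + (y + 1) * bottom[x]) // n
            row.append((horiz + vert + 1) >> 1)
        block.append(row)
    return block
```

With flat boundaries the prediction is flat, and only the residual between this prediction and the original block needs to be signalled.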
20190052903 | LOGICAL INTRA MODE NAMING IN HEVC VIDEO CODING - A method and apparatus for using logical mode numbers during both prediction and coding in the bit stream, such as for high efficiency video coders (HEVC). These logical intra mode numbers are sorted by angle, which leads to improved coding designs with fewer and smaller look-up tables and a small gain in coding efficiency. Furthermore, by using this type of naming, the number of most probable modes (MPMs) can be readily extended since no additional tables are required. The use of three MPMs achieves a larger gain of 0.25% and 0.31% for the AI_HE and AI_LC cases, respectively. | 2019-02-14 |
20190052904 | SYSTEMS AND METHODS FOR MODEL PARAMETER OPTIMIZATION IN THREE DIMENSIONAL BASED COLOR MAPPING - Systems, methods, and devices are disclosed for performing adaptive color space conversion and adaptive entropy encoding of LUT parameters. A video bitstream may be received and a first flag may be determined based on the video bitstream. The residual may be converted from a first color space to a second color space in response to the first flag. The residual may be coded in two parts, separated into its most significant bits and least significant bits. The residual may be further coded based on its absolute value. | 2019-02-14 |
20190052905 | DECODING DEVICE, ENCODING DEVICE, AND DECODING METHOD - The amount of processing is reduced while high coding efficiency is maintained. There is provided an arithmetic decoding device including syntax decoding means for decoding each of at least a first syntax element and a second syntax element indicating a transform coefficient, using arithmetic decoding with a context or arithmetic decoding without a context. The syntax decoding means performs decoding that at least includes (i) not decoding the first syntax element while decoding the second syntax element using the arithmetic decoding without a context, and (ii) decoding the first syntax element using the arithmetic decoding with a context while decoding the second syntax element using the arithmetic decoding without a context. | 2019-02-14 |
20190052906 | IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, AND IMAGE DECODING APPARATUS - An image coding method includes selecting two or more transform components from among a plurality of transform components that include a translation component and non-translation components, the two or more transform components serving as reference information that represents a reference destination of a current block; coding selection information that identifies the two or more transform components that have been selected from among the plurality of transform components; and coding the reference information of the current block by using reference information of a coded block different from the current block. | 2019-02-14 |
20190052907 | IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, AND IMAGE DECODING APPARATUS - An image coding method includes selecting two or more transform components from among a plurality of transform components that include a translation component and non-translation components, the two or more transform components serving as reference information that represents a reference destination of a current block; coding selection information that identifies the two or more transform components that have been selected from among the plurality of transform components; and coding the reference information of the current block by using reference information of a coded block different from the current block. | 2019-02-14 |
20190052908 | OPTIMIZING HIGH DYNAMIC RANGE IMAGES FOR PARTICULAR DISPLAYS - To enable practical and quick generation of a family of good looking HDR gradings for various displays on which the HDR image may need to be shown, we describe a color transformation apparatus ( | 2019-02-14 |
20190052909 | IMAGE ENCODING METHOD AND APPARATUS, AND IMAGE DECODING METHOD AND APPARATUS - Provided are an image encoding/decoding apparatus and method using data hiding, in which, when a difference between scan positions of a final effective transform coefficient and an initial effective transform coefficient of a sub-block of a current transform unit is greater than a threshold value, an intra-prediction direction of a current coding unit is determined using parity of the sum of transform coefficients of the sub-block corresponding to certain scan positions or a level of an effective transform coefficient among transform coefficients of the sub-block is corrected such that the parity of the sum of the transform coefficients indicates the intra-prediction direction of the current coding unit. Encoding and decoding efficiencies may be improved by reducing a bitrate by using a method of hiding data in parity of effective transform coefficients. | 2019-02-14 |
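The parity-based data hiding in the abstract above can be sketched as follows: the hidden bit (here, an intra-direction indicator) is carried by the parity of the sum of a sub-block's transform coefficients, and if the parity is wrong the encoder corrects the level of one effective coefficient. Which coefficient is nudged, and the omitted threshold check on scan positions, are illustrative assumptions.

```python
# Simplified sketch of hiding one bit in coefficient-sum parity.

def hide_bit(coeffs, bit):
    """Adjust coeffs so that sum(coeffs) % 2 == bit, touching at most one
    effective (non-zero) coefficient level."""
    if sum(coeffs) % 2 != bit:
        # correct the level of the last effective coefficient in scan order
        for i in range(len(coeffs) - 1, -1, -1):
            if coeffs[i] != 0:
                coeffs[i] += 1 if coeffs[i] > 0 else -1
                break
    return coeffs

def read_hidden_bit(coeffs):
    """Decoder side: the hidden bit is simply the parity of the sum."""
    return sum(coeffs) % 2
```

Because the bit rides on parity the decoder recovers it without any extra signalled syntax, which is the bitrate saving the abstract refers to.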
20190052910 | SIGNALING PARAMETERS IN VIDEO PARAMETER SET EXTENSION AND DECODER PICTURE BUFFER OPERATION - A system for encoding and/or decoding a video bitstream that includes a base bitstream and enhancement bitstreams representative of a video sequence. The receiver receives a video parameter set and a video parameter set extension, where the video parameter set extension includes decoder picture buffer parameters. | 2019-02-14 |
20190052911 | LOOP RESTORATION FILTERING FOR SUPER RESOLUTION VIDEO CODING - Techniques related to selecting restoration filter coefficients for 2-dimensional loop restoration filters for super resolution video coding are discussed. Such techniques include upscaling reconstructed video frames along only a first dimension and selecting filter coefficients for portions of the frame by an evaluation that, for each pixel of the portion, uses only pixel values that are aligned with the first dimension. Selection of filter coefficients for the second dimension may be skipped or made using only a subset of available filter coefficients. | 2019-02-14 |
20190052912 | DIRECTIONAL DEBLOCKING FILTER - Multiple directional filters are applied against lines of pixels associated with a video block to determine filtered noise values. Each directional filter uses a different direction for filtering lines of pixels. For example, for each pixel value of the video block along a line of pixels having a direction corresponding to a directional filter, a difference can be determined between the pixel value and a corresponding pixel value along the line of pixels and outside of the video block. A value for line of pixels is determined as the sum of the absolute values of each of the differences, and a filtered noise value is determined as the sum of the values for the lines of pixels. The directional filter used to determine a lowest one of the filtered noise values for the video block is then selected. The video block is filtered using the selected directional filter. | 2019-02-14 |
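The directional-filter selection step described above can be sketched as below: for each candidate direction, sum the absolute differences between each edge pixel inside the block and the pixel it lines up with outside the block, and pick the direction with the lowest total ("filtered noise value"). Reducing the filter set to three vertical offsets across a single edge is an assumed simplification of the patent's directional filters.

```python
# Sketch of choosing a deblocking direction by minimum filtered noise.
# inside_col / outside_col: pixel columns on either side of a block edge.

def select_direction(inside_col, outside_col):
    """Return (direction, noise) for the direction with lowest noise."""
    best_dir, best_noise = None, None
    for d in (-1, 0, 1):                     # candidate line directions
        noise = 0
        for y in range(len(inside_col)):
            yo = y + d                       # pixel the line points at
            if 0 <= yo < len(outside_col):
                noise += abs(inside_col[y] - outside_col[yo])
        if best_noise is None or noise < best_noise:
            best_dir, best_noise = d, noise
    return best_dir, best_noise
```

An edge whose features run diagonally produces the smallest differences along the matching diagonal offset, so the filter that follows the feature is the one applied.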