52nd week of 2020 patent application highlights part 74 |
Patent application number | Title | Published |
20200404202 | FLUORESCENCE IMAGING WITH FIXED PATTERN NOISE CANCELLATION - Fluorescence imaging with reduced fixed pattern noise is disclosed. A method includes actuating an emitter to emit a plurality of pulses of electromagnetic radiation and sensing reflected electromagnetic radiation resulting from the plurality of pulses of electromagnetic radiation with a pixel array of an image sensor. The method includes reducing fixed pattern noise in an exposure frame by subtracting a reference frame from the exposure frame. The method is such that at least a portion of the plurality of pulses of electromagnetic radiation emitted by the emitter comprises electromagnetic radiation having a wavelength from about 795 nm to about 815 nm. | 2020-12-24 |
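The noise-reduction step in the abstract above — subtracting a reference frame from an exposure frame — can be sketched as follows. This is an illustrative sketch only: the function name, the 12-bit sample range, and the use of a dark reference frame are assumptions, not details from the application.

```python
import numpy as np

def reduce_fixed_pattern_noise(exposure_frame: np.ndarray,
                               reference_frame: np.ndarray) -> np.ndarray:
    """Subtract a reference frame from an exposure frame.

    The reference frame is assumed to capture only the sensor's fixed
    pattern noise, e.g. a frame read out with the emitter off.
    """
    # Use a signed intermediate so the subtraction cannot wrap around.
    corrected = exposure_frame.astype(np.int32) - reference_frame.astype(np.int32)
    # Clip back into the valid sample range (12-bit samples assumed here).
    return np.clip(corrected, 0, 4095).astype(np.uint16)
```

Clipping matters because pixels darker than the reference would otherwise underflow an unsigned frame buffer.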
20200404203 | IMAGE SENSOR FOR COMPENSATING FOR SIGNAL DIFFERENCE BETWEEN PIXELS - An image sensor includes two or more phase-difference detection pixels disposed adjacent to each other, a plurality of general pixels spaced apart from the phase-difference detection pixels, first and second peripheral pixels, and first to third light shields. The first and second peripheral pixels are adjacent to the phase-difference detection pixels, and between the phase-difference detection pixels and the general pixels. The first light shield is disposed in one of the general pixels and has a first width. The second light shield extends into the first peripheral pixel from a first area between the phase-difference detection pixels and the first peripheral pixel, and has a second width different from the first width. The third light shield extends into the second peripheral pixel from a second area between the phase-difference detection pixels and the second peripheral pixel, and has a third width different from the first width. | 2020-12-24 |
20200404204 | IMAGING DEVICE AND DIAGNOSIS METHOD - An imaging device according to the present disclosure includes: a plurality of pixels each including a first light-receiving element and a second light-receiving element, the plurality of pixels including a first pixel; a generating section that is able to generate a first detection value on a basis of a light-receiving result by the first light-receiving element of each of the plurality of pixels, and is able to generate a second detection value on a basis of a light-receiving result by the second light-receiving element of each of the plurality of pixels; and a diagnosis section that is able to perform a diagnosis processing on a basis of a detection ratio that is a ratio between the first detection value and the second detection value in the first pixel. | 2020-12-24 |
20200404205 | IMAGE SENSOR, COLUMN PARALLEL ADC CIRCUIT AND A/D CONVERSION METHOD THEREOF - A column parallel ADC circuit includes: plural column ADCs and a digital processing circuit. The plural column ADCs generate respective plural digital counts. The plural column ADCs include a first column ADC and a second column ADC. The first column ADC generates a first digital count according to a first analog signal, and the second column ADC generates a second digital count according to a second analog signal. The first digital count is a difference between a first digital signal and a second digital signal. The first and the second digital signals correspond to the first and the second analog signals respectively. The digital processing circuit generates the plural digital signals, wherein the digital processing circuit generates the first digital signal according to the first digital count and the second digital signal. | 2020-12-24 |
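If, as the abstract suggests, each column ADC outputs the difference between its own digital signal and a neighboring column's signal, the digital processing circuit can recover absolute values with a running sum. The chain-of-differences reading and the seed value are assumptions made for illustration:

```python
def reconstruct_column_signals(difference_counts, seed=0):
    """Rebuild per-column digital signals from difference counts.

    Assumes column i's count is (signal_i - signal_{i-1}), so absolute
    signals follow from a running sum starting at a known seed column.
    """
    signals = []
    previous = seed
    for count in difference_counts:
        previous = previous + count  # signal_i = signal_{i-1} + count_i
        signals.append(previous)
    return signals
```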
20200404206 | READOUT CIRCUIT, IMAGE SENSOR, AND ELECTRONIC DEVICE - Embodiments of the present application provide a readout circuit, an image sensor and an electronic device, which could effectively reduce an area and power consumption of the image sensor. The readout circuit includes a plurality of capacitors, a switch circuit and an output circuit; where the plurality of capacitors are connected to the output circuit through the switch circuit; the plurality of capacitors are configured to store output signals of a plurality of pixel circuits, respectively; and the output circuit is configured to output signals stored by the plurality of capacitors through the switch circuit one-by-one. | 2020-12-24 |
20200404207 | PIXEL ARRAY AND IMAGE SENSOR - A pixel array includes a plurality of pixels. Each of the pixels includes a photoelectric element formed on a substrate and that generates charge from light, and a pixel circuit formed between the photoelectric element and the substrate and that outputs a digital signal value based on an amount of the generated charge. The pixel circuit includes a floating diffusion formed in the substrate and that stores the charge therein, a vertical pixel electrode that connects the floating diffusion to the photoelectric element and extends in a direction perpendicular to the substrate, an analog-to-digital converter that converts an electric potential of the floating diffusion into the digital signal value, and a memory element that stores the digital signal value. | 2020-12-24 |
20200404208 | ELECTRONIC CIRCUIT FOR CONFIGURING AMPLIFYING CIRCUIT CONFIGURED TO OUTPUT VOLTAGE INCLUDING LOW NOISE - An electronic circuit is provided. The electronic circuit includes a first current generating circuit configured to output a first operating current based on a first operating voltage; and an input circuit configured to: receive a first current corresponding to a first input voltage and a second current corresponding to a second input voltage, wherein the first current and the second current are based on the first operating current; receive a third current and a fourth current that are generated based on the first operating voltage; and generate a fifth current corresponding to the second input voltage based on a second operating current. The electronic circuit is configured to generate an output voltage that is associated with a difference between the first input voltage and the second input voltage based on the second current, the fourth current and the fifth current, and the fourth current corresponds to the third current. | 2020-12-24 |
20200404209 | PHOTOELECTRIC CONVERSION APPARATUS, PHOTOELECTRIC CONVERSION SYSTEM, AND MOBILE OBJECT - A photoelectric conversion apparatus includes a plurality of pixels, a generation unit configured to generate a first reference signal changing in potential in a first period from a first point in time to a second point in time, and a second reference signal having a gradient larger than a gradient of the first reference signal and changing in potential in a second period from a third point in time to a fourth point in time, and a plurality of analog to digital (AD) conversion units. Each of the AD conversion units includes a selection circuit configured to select the first reference signal or the second reference signal, and a comparator. The first period and the second period partially overlap with each other. The third point in time is later than the first point in time. | 2020-12-24 |
20200404210 | IMAGING DEVICE - An imaging device supplies a first constant potential and a second constant potential to a photodiode through a first line and a second line to put the photodiode in a reverse-bias state. The imaging device reads a signal corresponding to the potential at the other end of the photodiode changed by light incident on the photodiode in the reverse-bias state. The imaging device supplies a potential that changes with time to a capacitive element through a control line so that a forward current flows through the photodiode disposed between the capacitive element and the first line after reading the signal. | 2020-12-24 |
20200404211 | SOLID-STATE IMAGING ELEMENT AND ELECTRONIC DEVICE - The present disclosure relates to a solid-state imaging element and an electronic device that enable performance to be further improved. A pixel at least includes: a photoelectric conversion unit configured to perform photoelectric conversion; an FD unit to which charge generated in the photoelectric conversion unit is transferred; and an amplification transistor that has a gate electrode to which the FD unit is connected. A reference signal is input to a MOS transistor. The reference signal is referred to when AD conversion is performed on a pixel signal according to an amount of light received by the pixel. Then, a shared structure is employed in which a predetermined number of pixels share an AD converter that includes a differential pair including the MOS transistor and the amplification transistor. Each of the pixels is provided with a selection transistor that is used to select a pixel for which AD conversion is performed on the pixel signal. The present technology can be applied to, for example, a CMOS image sensor. | 2020-12-24 |
20200404212 | BIDIRECTIONAL VIDEO COMMUNICATION SYSTEM AND OPERATOR TERMINAL - In order to enable a user operating a kiosk terminal to focus on the operation of the kiosk terminal while virtually eliminating any concerns about prying eyes, a controller of an operator terminal is configured such that, when a communication device receives a frontal-face video of the user, the frontal-face video being shot by a camera of the kiosk terminal such that the frontal-face video includes a background of the user, the controller displays the frontal-face video of the user on a monitor. The controller also performs face authentication on a person in the frontal-face video of the user, and upon detecting a suspicious person through the face authentication, the controller generates a screen in which identification information distinguishably identifying the user and the suspicious person has been added to the frontal-face video of the user, and displays the screen on the monitor. | 2020-12-24 |
20200404213 | DISPLAY DEVICE - Provided is a display device. The display device includes a display panel including two long sides extending in a first direction and two short sides extending in a second direction, a first sound generator on a first area of one surface of the display panel, the first sound generator outputs a first sound by vibrating the display panel, a second sound generator on a second area of the one surface of the display panel, the second sound generator outputs a second sound by vibrating the display panel, a bottom frame on the one surface of the display panel, and a first blocking member between the one surface of the display panel and the bottom frame and along edges of the display panel, the first blocking member includes at least one opening. | 2020-12-24 |
20200404214 | AN APPARATUS AND ASSOCIATED METHODS FOR VIDEO PRESENTATION - An apparatus configured to: based on first video content comprising video imagery being captured by a first camera of a device and second video content comprising video imagery being captured by a second camera of the same device, the first and second video content captured contemporaneously and based on a slow-motion-mode input selecting one of the first and second video content for playback in slow-motion, provide for simultaneous display of: a live, first preview comprising video imagery of a non-selected one of the first and second video content as it is captured and presented at a first play rate substantially equal to a capture rate at which the non-selected video content was captured; and a second preview comprising video imagery of the selected one of the first and second video content presented at a second play rate slower than a capture rate at which the selected video content was captured. | 2020-12-24 |
20200404215 | TRANSMISSION DEVICE, TRANSMISSION METHOD, RECEPTION DEVICE, AND RECEPTION METHOD - The present technology ensures that electrooptical conversion processing for transmission video data obtained using an HDR optoelectrical conversion characteristic is favorably carried out at a receiving side. The transmission video data is obtained by performing high dynamic range optoelectrical conversion on high dynamic range video data. A video stream is obtained by applying encoding processing to this transmission video data. A container in a predetermined format including this video stream is transmitted. Meta information indicating an electrooptical conversion characteristic corresponding to a high dynamic range optoelectrical conversion characteristic is inserted into a parameter set field in the video stream. | 2020-12-24 |
20200404216 | Content-Modification System with Determination of Input-Buffer Switching Delay Feature - In one aspect, a method for use in connection with a content-presentation device including a first input buffer, a second input buffer, and an output buffer, wherein the content-presentation device is configured such that content from either the first input buffer or the second input buffer can be communicated to the output buffer, includes: (i) receiving, from the content-presentation device, an identifier associated with the content-presentation device; (ii) using mapping data to map the received identifier to a baseline input-to-output delay, which represents a time-period between when content is input into the first input buffer and output by the output buffer; and (iii) transmitting, to the content-presentation device, the mapped baseline input-to-output delay to the content-presentation device to facilitate the content-presentation device (a) determining an input-buffer switching delay, and (b) using the determined input-buffer switching delay to facilitate performing a content-modification operation. | 2020-12-24 |
20200404217 | VIRTUAL PRESENCE SYSTEM AND METHOD THROUGH MERGED REALITY - A virtual presence merged reality system comprises a server comprising at least one processor and memory including a data store storing a persistent virtual world system comprising one or more virtual replicas of real world elements. The virtual replicas provide self-computing capabilities and autonomous behavior. The persistent virtual world system comprises a virtual replica of a physical location hosting a live event, wherein the persistent virtual world system is configured to communicate through a network with a plurality of connected devices that include sensing mechanisms configured to capture real-world data of the live event that enables updating the persistent virtual world system. The system enables guests to virtually visit, interact and make transactions within the live event through the persistent virtual world system. Computer-implemented methods thereof are also provided. | 2020-12-24 |
20200404218 | MERGED REALITY LIVE EVENT MANAGEMENT SYSTEM AND METHOD - An accurate and flexible merged reality system and method configured to enable remotely viewing and participating in real or virtual events. In the merged reality system, at least one portion of the real or a virtual world may be respectively replicated or streamed into corresponding sub-universes comprised within the virtual world system, wherein some of the sub-universes comprise events that guests may view and interact with from one or more associated guest physical locations. Other virtual elements, such as purely virtual objects or graphical representations of applications and games, can also be included in the virtual world system. The virtual objects comprise logic, virtual data and models that provide self-computing capabilities and autonomous behavior. The system enables guests to virtually visit, interact and make transactions within the event through the virtual world system. | 2020-12-24 |
20200404219 | IMMERSIVE INTERACTIVE REMOTE PARTICIPATION IN LIVE ENTERTAINMENT - Systems and methods are described for immersive remote participation in live events hosted by interactive environments and experienced by users in immersive realities. Accordingly, a system for immersive remote participation in live events includes a plurality of interactive environments hosting live events and including recording equipment; transmitters coupled to interactive environments; a cloud server having at least a processing unit, memory and at least one renderer, the processing unit being configured to process the respective recordings, to generate interactive volumes on one or more interactive elements, to generate immersive experiences for viewers utilizing the processed data and interactive volumes, and to process viewer feedback to one or more of the plurality of interactive environments, and the renderer being configured to render image data from the immersive experiences to generate media streams that are sent to the one or more viewers; and one or more interaction devices configured to receive the processed and rendered media streams having the immersive experiences and to input feedback to the live interactive environments sent by the cloud server. | 2020-12-24 |
20200404220 | IMAGING SYSTEM - An imaging system for a vehicle includes an image sensor for detecting electromagnetic radiation, a first lens unit and a second lens unit for focusing electromagnetic radiation, and at least one transflective or switchable mirror unit, with the at least one transflective or switchable mirror unit being configured to project electromagnetic radiation from at least one of the first lens unit and the second lens unit essentially perpendicularly on the image sensor, where the first lens unit has a first optical axis and the second lens unit has a second optical axis crossing the first optical axis at a crossing point, and the at least one transflective or switchable mirror unit is arranged between the image sensor, the first lens unit, and the second lens unit at the crossing point. | 2020-12-24 |
20200404221 | DYNAMIC VIDEO EXCLUSION ZONES FOR PRIVACY - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for preserving privacy in surveillance. The methods, systems, and apparatus include actions of determining a state of a monitoring system, determining an exclusion zone that is shown in a video, determining whether to obfuscate at least a portion of the video based on the exclusion zone and the state of the monitoring system, and obfuscating at least the portion of the video. | 2020-12-24 |
20200404222 | TRACKING ASSISTANCE DEVICE, TRACKING ASSISTANCE SYSTEM AND TRACKING ASSISTANCE METHOD - The present application enables improved tracking assistance of an object and includes a link score calculator that calculates a link score from tracking information based on an image captured by cameras, a tracking target setter that sets the object to be tracked in accordance with designation by a monitor, a confirmation image presenter that displays an image of an object having a highest evaluation value as a confirmation image, a thumbnail generator that generates a thumbnail of each object, a candidate image presenter that displays a thumbnail of each object having a lower evaluation value than an object of an erroneous confirmation image and allows the monitor to select a candidate image corresponding to the object to be tracked, and a tracking information corrector that corrects inter-camera tracking information such that the object of the selected candidate image is associated with the object to be tracked. | 2020-12-24 |
20200404223 | MOTION DETECTION METHODS AND MOTION SENSORS CAPABLE OF MORE ACCURATELY DETECTING TRUE MOTION EVENT - A motion detection method utilized for a motion sensor includes: capturing a monitoring image; entering a recording mode when one intensity change value between the monitoring image and a pre-stored background image is higher than a first threshold and a trigger signal is received from an auxiliary sensor; and entering the recording mode when the intensity change value is higher than a second threshold before the trigger signal is received; wherein the second threshold is higher than the first threshold. | 2020-12-24 |
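The two-threshold scheme above can be sketched directly. The threshold values below are placeholders; only the relationship (second threshold higher than the first, and the trigger requirement for the lower threshold) comes from the abstract:

```python
def should_enter_recording_mode(intensity_change: float,
                                trigger_received: bool,
                                first_threshold: float = 10.0,
                                second_threshold: float = 25.0) -> bool:
    """Decide whether to start recording under the two-threshold scheme."""
    if intensity_change > second_threshold:
        return True  # strong change: record even before the auxiliary trigger
    if intensity_change > first_threshold and trigger_received:
        return True  # weaker change, confirmed by the auxiliary sensor
    return False
```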
20200404224 | MULTISPECTRUM, MULTI-POLARIZATION (MSMP) FILTERING FOR IMPROVED PERCEPTION OF DIFFICULT TO PERCEIVE COLORS - In one embodiment, a method includes accessing first image data generated by a first image sensor having a first filter array that has a first filter pattern. The first filter pattern includes a first filter type corresponding to a spectrum of interest and a second filter type. The method also includes accessing second image data generated by a second image sensor having a second filter array that has a second filter pattern different from the first filter pattern. The second filter pattern includes a number of second filter types, the number of second filter types and the number of first filter types have at least one filter type in common. The method also includes determining a correspondence between one or more first pixels of the first image data and one or more second pixels of the second image data. | 2020-12-24 |
20200404225 | Methods and Systems to Pre-Warp an Image - There is provided a method including obtaining an initial spatial coordinate of a pixel of an image to be projected. The method may also include generating a pre-warped spatial coordinate associated with the initial spatial coordinate. The pre-warped spatial coordinate may be calculated as a sum of a value of a warp path function at the initial spatial coordinate and a delta. Moreover, the method may include outputting the pre-warped spatial coordinate. | 2020-12-24 |
20200404226 | IMAGE DISPLAY DEVICE - An image display device includes M (M≥2) ultra-short focus projectors and reflective directional screens that reflect projection light rays from the corresponding ultra-short focus projectors. The ultra-short focus projectors are arranged above or below the directional screens on which the ultra-short focus projectors are supposed to project images. Adjacent directional screens are tightly arranged. The adjacent directional screens are arranged to form an angle of less than 180 degrees. | 2020-12-24 |
20200404227 | PROJECTION DEVICE AND PHOTO COUPLER CIRCUIT FOR THE SAME - A projection device and a photo coupler circuit for the same are disclosed. The photo coupler circuit includes a logic unit, a number of integration units and a selection unit. The logic unit is configured to receive a number of first control signals and a number of first PWM signals from a main circuit of the projection device, and to output a number of second PWM signals and one or more second control signals according to the first control signals and the first PWM signals. The integration units are coupled to the logic unit. Each of the integration units is configured to generate an integration signal according to one of the second PWM signals. The selection unit is coupled to the integration units to select one of the integration signals to be output to a light source drive circuit of the projection device according to the second control signals. | 2020-12-24 |
20200404228 | IMAGE PAINTING WITH MULTI-EMITTER LIGHT SOURCE - A scanning projector display includes a light engine comprising N emitters coupled to a collimator for providing a fan of N light beams of variable optical power levels, where N>1. The N emitters are spaced apart from each other such that pixels of the image simultaneously energized by neighboring ones of the N emitters are non-adjacent. A scanner receives and angularly scans the fan of N light beams about first and second non-parallel axes to provide an image in angular domain. A controller coupled to the scanner and the light engine causes the scanner to simultaneously scan the fan of N light beams about the first and second axes, and causes the light engine to vary the optical power levels of the N emitters with time delays such that adjacent pixels of the image are energized by different ones of the N emitters. | 2020-12-24 |
20200404229 | Systems, devices, and Methods for driving projectors - Systems, devices, and methods for driving projectors are described. The actual area projected over by a laser projector for a given pixel may not exactly match a desired projection area for the pixel, especially at edge regions of an image. In the present systems, devices, and methods, projection data is provided for at least one image to be projected by a laser projector. The projection data can include sets of alternative data sections at edge regions of the at least one image, effectively increasing resolution for the edge regions of the image. Depending on a projection pattern being used by a laser projector at a given time, select alternative data sections can be projected which closely match the actual area covered by the projection pattern, improving image quality. | 2020-12-24 |
20200404230 | IMAGE DISPLAY APPARATUS - An image display apparatus according to an embodiment of the present technology includes a light source section, a first sensor, a second sensor, and a light source control section. The light source section is capable of emitting emitted light. The first sensor is capable of detecting a state of the emitted light. The second sensor is capable of detecting a temperature of the light source section. The light source control section is capable of controlling the light source section according to a first detection result of detection performed by the first sensor, and a second detection result of detection performed by the second sensor. | 2020-12-24 |
20200404231 | PROJECTION OPTICAL APPARATUS AND PROJECTOR - A projection optical apparatus includes a first lens group including a first optical axis, a first optical path deflector, a second optical path deflector, a second lens group including a second optical axis, a first lens barrel, a second lens barrel and a frame. The frame includes a frame main body and a cover frame. The frame main body includes a lens barrel attachment part including first and second openings, and first and second side parts facing each other and extending in a direction along a plane containing the first optical axis and a third optical axis between the first optical path deflector and the second optical path deflector. The cover frame is disposed at an opposite side to a lens barrel attachment part side with respect to the first and second optical path deflectors, and includes an outer circumferential part fixed to the first and second side parts. | 2020-12-24 |
20200404232 | METHOD FOR PROJECTING IMAGE AND ROBOT IMPLEMENTING THE SAME - Disclosed is a robot projecting an image that selects a projection area in a space based on at least one of first information related to content of an image to be projected and second information related to a user viewing the image to be projected, and projects the image to the projection area. | 2020-12-24 |
20200404233 | LIGHT SOURCE RESPONSE COMPENSATION FOR LIGHT PROJECTION SYSTEM USING A GRAPHICS PROCESSING UNIT - A light projection system includes a microelectromechanical (MEMS) mirror configured to operate in response to a mirror drive signal and to generate a mirror sense signal as a result of the operation. A mirror driver is configured to generate the mirror drive signal in response to a drive control signal. A zero cross detector is configured to detect zero crosses of the mirror sense signal. A controller is configured to generate the drive control signal as a function of the detected zero crosses of the mirror sense signal. | 2020-12-24 |
20200404234 | METHOD AND DEVICE FOR COLOR GAMUT MAPPING - The present principles relate to a method and device for gamut mapping from a first color gamut towards a second color gamut. The method comprises, in a plane of constant hue, mapping the chroma of the color from the first color gamut towards the second color gamut at constant lightness. The chroma mapping further comprises obtaining a target color on the second color gamut boundary wherein the lightness of the target color is greater than or equal to the lightness of a color of maximum chroma of the first color gamut and wherein the lightness of the target color is lower than the lightness of a color of maximum chroma of the second color gamut. In case where the lightness of the color is greater than the lightness of the target color, the chroma mapping comprises mapping at constant lightness the chroma of a color on the first color gamut boundary by a decreasing function of chroma applied to lightness, wherein the respective outputs of the decreasing function applied to the lightness of the target color and to the lightness of the white are the chroma of the target color and the chroma of the white. | 2020-12-24 |
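The boundary mapping above pins a decreasing function of lightness at two points: it must yield the target color's chroma at the target lightness and the white's chroma at the white lightness. A linear form is the simplest sketch satisfying this; the application only requires a decreasing function with those endpoints, so the linearity here is an assumption. `target` and `white` are hypothetical (lightness, chroma) pairs:

```python
def boundary_chroma(lightness: float, target: tuple, white: tuple) -> float:
    """Chroma of the mapped boundary as a linearly decreasing function of
    lightness, with f(L_target) = C_target and f(L_white) = C_white.
    Assumes L_white > L_target and C_white < C_target.
    """
    l_t, c_t = target
    l_w, c_w = white
    t = (lightness - l_t) / (l_w - l_t)  # 0 at target lightness, 1 at white
    return c_t + t * (c_w - c_t)
```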
20200404235 | DEVICE FOR AND METHOD OF CORRECTING WHITE BALANCE OF IMAGE - An electronic device includes a camera configured to capture an image and generate an electrical image signal by photoelectrically converting incident light, and at least one processor configured to estimate a light source color temperature based on image data of the image signal, identify a target coordinate on a color space based on the light source color temperature, identify a capture coordinate on the color space based on the image, identify a chrominance allowable range based on the target coordinate, obtain a white balance evaluation value based on a first distance between the target coordinate and the chrominance allowable range and a second distance between the capture coordinate and the chrominance allowable range, and perform a white balance correction on the image based on the white balance evaluation value. | 2020-12-24 |
20200404236 | Methods and Apparatus for Supporting Content Generation, Transmission and/or Playback - Methods and apparatus for supporting the capture of images of surfaces of an environment visible from a default viewing position and capturing images of surfaces not visible from the default viewing position, e.g., occluded surfaces, are described. Occluded and non-occluded image portions are packed into one or more frames and communicated to a playback device for use as textures which can be applied to a model of the environment where the images were captured. An environmental model includes a model of surfaces which are occluded from view from a default viewing position but which may be viewed if the user shifts the user's viewing location. Occluded image content can be incorporated directly into a frame that also includes non-occluded image data or sent in frames of a separate, e.g., auxiliary content stream that is multiplexed with the main content stream which communicates image data corresponding to non-occluded environmental portions. | 2020-12-24 |
20200404237 | VOLUMETRIC VIDEO-BASED AUGMENTATION WITH USER-GENERATED CONTENT - A processing system having at least one processor may obtain a two-dimensional source video, select a volumetric video associated with at least one feature of the source video from a library of volumetric videos, identify a first object in the source video, and determine a location of the first object within a space of the volumetric video. The processing system may further obtain a three-dimensional object model of the first object, texture map the first object to the three-dimensional object model of the first object to generate an enhanced three-dimensional object model of the first object, and modify the volumetric video to include the enhanced three-dimensional object model of the first object in the location of the first object within the space of the volumetric video. | 2020-12-24 |
20200404238 | IMAGE PROCESSING DEVICE, CONTENT PROCESSING DEVICE, CONTENT PROCESSING SYSTEM, AND IMAGE PROCESSING METHOD - In a depth image compressing section of an image processing device, a depth image operation section generates a depth image by operation using photographed stereo images. A difference image obtaining section generates a difference image between an actually measured depth image and the computed depth image. In a depth image decompressing section of a content processing device, a depth image operation section generates a depth image by operation using the transmitted stereo images. A difference image adding section restores a depth image by adding the computed depth image to the transmitted difference image. | 2020-12-24 |
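The compression scheme above transmits only the difference between a measured depth image and one recomputed from the stereo pair, which the receiver can also compute locally. A sketch of the round trip (function names and the integer depth representation are illustrative assumptions):

```python
import numpy as np

def compress_depth(measured: np.ndarray, computed: np.ndarray) -> np.ndarray:
    """Encode a measured depth image as its difference from a depth image
    computed from the stereo pair; the difference is typically small."""
    return measured.astype(np.int32) - computed.astype(np.int32)

def decompress_depth(difference: np.ndarray, computed: np.ndarray) -> np.ndarray:
    """Restore the measured depth image by adding the locally recomputed
    depth image back to the transmitted difference."""
    return (difference + computed.astype(np.int32)).astype(np.uint16)
```

The receiver never needs the measured depth itself, only the stereo images (to recompute `computed`) and the difference.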
20200404239 | 3D VIDEO ENCODING AND DECODING METHODS AND APPARATUS - Methods and apparatus relating to encoding and decoding stereoscopic (3D) image data, e.g., left and right eye images, are described. Various pre-encoding and post-decoding operations are described in conjunction with difference based encoding and decoding techniques. In some embodiments left and right eye image data is subject to scaling, transform operation(s) and cropping prior to encoding. In addition, in some embodiments decoded left and right eye image data is subject to scaling, transform operations(s) and filling operations prior to being output to a display device. Transform information and/or scaling information may be included in a bitstream communicating encoded left and right eye images. The amount of scaling can be the same for an entire scene and/or program. | 2020-12-24 |
20200404240 | GENERATION APPARATUS, GENERATION METHOD, AND STORAGE MEDIUM - A generation apparatus according to the present invention is a generation apparatus for generating a media file storing virtual viewpoint image data generated based on pieces of image data of an object captured from a plurality of directions with a plurality of cameras, and obtains a virtual viewpoint parameter to be used to generate virtual viewpoint image data. Further, the generation apparatus generates a media file storing the obtained virtual viewpoint parameter and virtual viewpoint image data generated based on the virtual viewpoint parameter. In this way, the generation apparatus can improve usability related to a virtual viewpoint image. | 2020-12-24 |
20200404241 | PROCESSING SYSTEM FOR STREAMING VOLUMETRIC VIDEO TO A CLIENT DEVICE - A network processing system obtains a viewport of a client device for volumetric video and a two-dimensional (2D) subframe of a frame of volumetric video is obtained associated with the viewport. Viewports may be obtained from the client device or be predicted. 2D subframes and reduced resolution versions of frames can be transmitted to the client device. A client device may request volumetric video from the network processing system and provides a viewport to the network processing system. The client device may obtain from the network processing system reduced resolution versions of volumetric video frames and 2D subframes in accordance with the viewport. The client device may determine whether a current viewport matches the viewport associated with the obtained 2D subframe and provides a display based on either that subframe (upon a match) or a 2D perspective of the reduced resolution frame associated with the current viewport (if no match). | 2020-12-24 |
20200404242 | IMAGING UNIT AND SYSTEM FOR OBTAINING A THREE-DIMENSIONAL IMAGE - Imaging unit for obtaining a three-dimensional image of an object area, comprising an image sensor constituted by a matrix of sensor elements and a focusing unit for providing an image of said object area on the image sensor, the matrix being covered by a color filter array, and a projection unit for projecting a predetermined pattern toward the object area, the focusing unit and the projection unit having optical axes differing with a known angle, wherein the projection unit is adapted to project a time sequence of patterns toward the object area, the pattern sequence being chosen so as to uniquely define a position along at least one axis perpendicular to the projection axis, over the period defined by the illumination, wherein each sensor element in said matrix is connected to a processing branch adapted to detect the variations in the illumination sequence measured at each sensor element, and calculating from the known angle between the projection and imaging axes, the position in the sensor matrix, and the illumination sequence detected at each sensor element, a three-dimensional coordinate of the imaged point on the surface of the object, and wherein the processing branch is adapted to sample at least one image of said image area and calculate a color image based on said color filter pattern for said at least one image. | 2020-12-24 |
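The triangulation step described above (known angle between the projection and imaging axes, plus the position in the sensor matrix) reduces to intersecting two rays. The sketch below is a simplified 2-D version under assumed geometry (camera at the origin looking along z, projector offset by `baseline` along x); `triangulate_depth` is a hypothetical name, not the patent's processing branch:

```python
import math

def triangulate_depth(baseline, proj_angle, pixel_x, focal_length):
    """Recover depth by intersecting the camera ray through the observed
    pixel with the projector ray of known angle, in the x-z plane:
      camera ray:    X = Z * pixel_x / focal_length
      projector ray: X = baseline - Z * tan(proj_angle)
    Solving for Z at the intersection gives the depth of the point."""
    return baseline / (pixel_x / focal_length + math.tan(proj_angle))
```

For example, with a 0.1 m baseline, an axial projector ray, and a normalized pixel offset of 0.05, the point lies at 2 m depth.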
20200404243 | INTRAORAL 3D SCANNER EMPLOYING MULTIPLE MINIATURE CAMERAS AND MULTIPLE MINIATURE PATTERN PROJECTORS - A method for generating a 3D image includes driving structured light projector(s) to project a pattern of light on an intraoral 3D surface, and driving camera(s) to capture images, each image including at least a portion of the projected pattern, each one of the camera(s) comprising an array of pixels. A processor compares a series of images captured by each camera and determines which of the portions of the projected pattern can be tracked across the images. The processor constructs a three-dimensional model of the intraoral three-dimensional surface based at least in part on the comparison of the series of images. Other embodiments are also described. | 2020-12-24 |
20200404244 | Methods and Apparatus for Capturing Images of an Environment - Custom wide angle lenses and methods and apparatus for using such lenses in individual cameras as well as pairs of cameras intended for stereoscopic image capture are described. The lenses are used in combination with sensors to capture different portions of an environment at different resolutions. In some embodiments the ground is captured at a lower resolution than the sky, which is captured at a lower resolution than a horizontal area of interest. Various asymmetries in lenses and/or lens and sensor placement are described which are particularly well suited for stereoscopic camera pairs where the proximity of one camera to the adjacent camera may interfere with the field of view of the cameras. | 2020-12-24 |
20200404245 | MAPPING AND TRACKING SYSTEM WITH FEATURES IN THREE-DIMENSIONAL SPACE - LK-SURF, Robust Kalman Filter, HAR-SLAM, and Landmark Promotion SLAM methods are disclosed. LK-SURF is an image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking using stereo images to produce 3D features can be tracked and identified. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis and the X84 outlier rejection rule. Hierarchical Active Ripple SLAM is a new SLAM architecture that breaks the traditional state space of SLAM into a chain of smaller state spaces, allowing multiple tracked objects, multiple sensors, and multiple updates to occur in linear time with linear storage with respect to the number of tracked objects, landmarks, and estimated object locations. In Landmark Promotion SLAM, only reliable mapped landmarks are promoted through various layers of SLAM to generate larger maps. | 2020-12-24 |
20200404246 | TIME-OF-FLIGHT IMAGE SENSOR RESOLUTION ENHANCEMENT AND INCREASED DATA ROBUSTNESS USING A BINNING MODULE - A time-of-flight (ToF) image sensor system includes a pixel array, where each pixel of the pixel array is configured to receive a reflected modulated light signal and to demodulate the reflected modulated light signal to generate an electrical signal; a plurality of analog-to-digital converters (ADCs), where each ADC is coupled to at least one assigned pixel of the pixel array and is configured to convert a corresponding electrical signal generated by the at least one assigned pixel into an actual pixel value; and a binning circuit coupled to the plurality of ADCs and configured to generate at least one interpolated pixel, where the binning circuit is configured to generate each of the at least one interpolated pixel based on actual pixel values corresponding to a different pair of adjacent pixels of the pixel array, each of the at least one interpolated pixel having a virtual pixel value. | 2020-12-24 |
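The binning circuit's core idea, generating a virtual pixel from each pair of adjacent actual pixels, can be illustrated in one dimension. This is a minimal sketch assuming a simple two-tap mean as the interpolation rule; the patent does not commit to that specific rule, and `bin_interpolate` is a hypothetical name:

```python
def bin_interpolate(pixel_values):
    """Insert a virtual pixel between each pair of adjacent actual
    pixels; its virtual pixel value is the mean of the pair."""
    out = []
    for a, b in zip(pixel_values, pixel_values[1:]):
        out.append(a)                 # actual pixel value from an ADC
        out.append((a + b) / 2.0)     # virtual pixel from the adjacent pair
    out.append(pixel_values[-1])      # last actual pixel
    return out
```

A row of n actual pixels thus yields 2n-1 output pixels, which is how the binning module trades the averaged pair's noise for added resolution.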
20200404247 | SYSTEM FOR AND METHOD OF SOCIAL INTERACTION USING USER-SELECTABLE NOVEL VIEWS - A system for social interaction using a photo-realistic novel view of an event includes a multi-view reconstruction system for developing transmission data of the event, and a plurality of client-side rendering devices, each rendering device receiving the transmission data from the multi-view reconstruction system and rendering the transmission data as the photo-realistic novel view. A method of social interaction using a photo-realistic novel view of an event includes transmitting by a server side transmission data of the event; receiving by a first user on a first rendering device the transmission data; selecting by the first user a path for rendering on the first rendering device at least one novel view; rendering by the first rendering device the at least one novel view; and saving by the first user on the first rendering device novel view data for the at least one novel view. | 2020-12-24 |
20200404248 | SYSTEM AND METHOD FOR COMPRESSED SENSING LIGHT FIELD CAMERA - A light field imaging system and method are presented. The light-field imaging system comprises an optical arrangement configured for collecting an input light field from a scene and projecting collected light on a pixel matrix of a detector unit, the optical arrangement comprising: an optical coder configured for applying angular coding to the light being collected to produce angularly coded light by separating the light being collected into an array of u angular light components corresponding to u different discrete viewpoints of the scene and projecting light from all of said u angular light components onto each of at least some of the pixels in the pixel matrix, thereby causing in-pixel summation of the u angular light components on the pixel matrix of the detector unit. A color filter unit is provided being located in a filtering plane in an optical path of the u angular light components of the angularly coded light and being configured to apply predetermined color coding to the angularly coded light propagating to the pixel matrix to thereby form on the pixel matrix an image of the input light field in a spectro-angular space. | 2020-12-24 |
20200404249 | METHOD AND APPARATUS FOR CORRECTING LENTICULAR DISTORTION - A method includes capturing an image of a lenticular display, generating a lenticular distortion map of the lenticular display using the image, and compensating for a lenticular distortion of the lenticular display using the lenticular distortion map. The lenticular distortion map is a dual-component lenticular distortion map and the image is a fringe pattern. The method compensates for the lenticular distortion by adjusting a swizzle function based on the lenticular distortion map. | 2020-12-24 |
20200404250 | Methods and Apparatus for Displaying Images - Methods and apparatus for using a display in a manner which results in a user perceiving a higher resolution than would be perceived if a user viewed the display from a head on position are described. In some embodiments one or more displays are mounted at an angle, e.g., in a range from above 0 degrees to 45 degrees relative to a user's face and thus eyes. The user sees more pixels, e.g., dots corresponding to light emitting elements, per square inch of eye area than the user would see if the user were viewing the display head on due to the angle at which the display or displays are mounted. The methods and display mounting arrangement are well suited for use in head mounted displays, e.g., Virtual Reality (VR) displays for stereoscopic viewing (e.g., 3D) and/or non-stereoscopic viewing of displayed images. | 2020-12-24 |
20200404251 | TRAFFIC LIGHT-TYPE SIGNAL STRENGTH METER/INDICATOR LINKED TO AN ANTENNA AGC CIRCUIT - A signal strength meter/indicator for use with an over-the-air broadcast television signal receiving antenna includes an input bandpass filter circuit, a first preamplifier circuit, a controllable variable attenuator circuit, a second preamplifier circuit, a splitter, an output bandpass filter circuit, a VHF/UHF filter circuit, a power detector circuit, a microcontroller and a display. The microcontroller, the controllable variable attenuator circuit, the second preamplifier circuit, the splitter, the VHF/UHF filter circuit and the power detector circuit together define an AGC circuit which adjustably controls the power level of an output signal provided to a television. The display is also adjustably controlled by the AGC circuit in displaying an indication of the relative signal strength of the broadcast television signal received by the signal receiving antenna to provide more accurate measurements and indication of the relative signal strength of the received broadcast television signal indicated on the display. | 2020-12-24 |
20200404252 | METHOD AND DEVICE FOR MPM LIST GENERATION FOR MULTI-LINE INTRA PREDICTION - A method of and device for controlling multi-line intra prediction using a non-zero reference line. The method includes determining whether an intra prediction mode of a first neighboring block of a current block is an angular mode, determining whether an intra prediction mode of a second neighboring block of the current block is an angular mode, and generating a Most Probable Mode (MPM) list of the current block. | 2020-12-24 |
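The MPM-list construction described above (check whether each neighbour's mode is angular, then build the list) can be sketched as follows. This is an illustrative simplification using VVC-style mode numbering (0 = planar, 1 = DC, angular modes >= 2) and a hypothetical 3-entry list with assumed default fill modes; `build_mpm_list` is not the patent's actual derivation:

```python
PLANAR, DC = 0, 1          # non-angular modes; angular modes are >= 2

def build_mpm_list(mode_a, mode_b, num_mpm=3):
    """Build a toy Most Probable Mode list from the intra modes of two
    neighbouring blocks: angular neighbour modes enter first, then
    assumed defaults fill the remaining entries."""
    mpm = []
    for m in (mode_a, mode_b):
        if m >= 2 and m not in mpm:       # the abstract's angular-mode check
            mpm.append(m)
    for default in (PLANAR, DC, 50):      # 50 is vertical in VVC numbering
        if len(mpm) == num_mpm:
            break
        if default not in mpm:
            mpm.append(default)
    return mpm
```

The angular-mode tests matter because, for a non-zero reference line, non-angular modes such as planar and DC are typically less useful, so the list favours the neighbours' angular modes.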
20200404253 | INTER MODES WITH WEIGHTING-AVERAGED PREDICTION, SYMMETRIC MVD OR AFFINE MOTION IN VIDEO CODING - A video coder is configured to form, in a symmetric motion vector difference mode, a List 0 (L0) base vector using a L0 Advanced Motion Vector Prediction (AMVP) candidate list and a List 1 (L1) base vector using a L1 AMVP candidate list; determine a refined L0 motion vector and a refined L1 motion vector by performing a decoder-side motion vector refinement process that refines the L0 base vector and the L1 base vector; and use the refined L0 motion vector and the refined L1 motion vector to determine a prediction block for a current block of a current picture of the video data. | 2020-12-24 |
20200404254 | METHOD AND APPARATUS FOR PROCESSING VIDEO SIGNAL ON BASIS OF HISTORY-BASED MOTION VECTOR PREDICTION - The disclosure discloses a method for processing video signals and an apparatus therefor. Specifically, the method of processing video signals based on history based motion vector prediction, comprising: configuring a merge candidate list based on a neighboring block to a current block; adding a history based merge candidate of the current block to the merge candidate list when a number of merge candidates included in the merge candidate list is smaller than a first predetermined number; adding a zero motion vector to the merge candidate list when the number of merge candidates included in the merge candidate list is smaller than a second predetermined number; obtaining a merge index indicating a merge candidate used for inter prediction of the current block in the merge candidate list; generating a prediction sample of the current block based on motion information of the merge candidate indicated by the merge index; and updating a history based merge candidate list based on the motion information. | 2020-12-24 |
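The ordering in the abstract above (spatial candidates, then history-based candidates up to a first threshold, then zero motion vectors up to a second threshold) can be sketched directly. This is a minimal illustration, not the disclosed method; pruning details and the history-list update are omitted, and `build_merge_list` is a hypothetical name:

```python
def build_merge_list(spatial_candidates, history_list, n1, n2):
    """Toy merge-candidate list: spatial neighbours first, then
    history-based candidates while the list is shorter than n1,
    then zero-MV padding while it is shorter than n2."""
    merge_list = list(dict.fromkeys(spatial_candidates))  # dedupe, keep order
    for mv in reversed(history_list):     # most recently coded MV first
        if len(merge_list) >= n1:
            break
        if mv not in merge_list:
            merge_list.append(mv)
    while len(merge_list) < n2:
        merge_list.append((0, 0))         # zero motion vector
    return merge_list
```

Iterating the history buffer from the most recent entry reflects the usual assumption that recently coded motion is the best predictor for the current block.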
20200404255 | INTERACTION BETWEEN IBC AND BIO - Devices, systems and methods for applying intra-block copy (IBC) in video coding are described. In general, methods for integrating IBC with existing motion compensation algorithms for video encoding and decoding are described. In a representative aspect, a method for video encoding using IBC includes determining whether a current block of the current picture is to be encoded using a motion compensation algorithm, and encoding, based on the determining, the current block by selectively applying an intra-block copy to the current block. In a representative aspect, another method for video encoding using IBC includes determining whether a current block of the current picture is to be encoded using an intra-block copy, and encoding, based on the determining, the current block by selectively applying a motion compensation algorithm to the current block. | 2020-12-24 |
20200404256 | INTER PREDICTION METHOD AND APPARATUS - An inter prediction method is provided, including: obtaining a first reference frame index and a first motion vector of a to-be-processed block ( | 2020-12-24 |
20200404257 | METHOD AND APPARATUS FOR HARMONIZING MULTIPLE SIGN BIT HIDING AND RESIDUAL SIGN PREDICTION - The present invention relates to an improved apparatus and method for harmonizing both Sign Bit Hiding (SBH) and Residual Sign Prediction (RSP) techniques in video coding. In order to improve coding efficiency, a list of transform coefficients, to which RSP is to be applied is prepared in advance of selecting a coefficient to which SBH is applied. Thereby, the RSP list can be populated in such a manner that the highest coding efficiency may be expected. Subsequently, one or more coefficients for applying SBH are selected so as not to be included in the list. | 2020-12-24 |
20200404258 | ENCODER, DECODER AND CORRESPONDING METHODS OF BOUNDARY STRENGTH DERIVATION OF DEBLOCKING FILTER - Apparatuses and methods for encoding and decoding a video are provided. A method includes determining whether at least one of two blocks of an image in a video is predicted with a combined inter-intra prediction (CIIP), wherein the two blocks include a first block (block Q) and a second block (block P). There is a boundary between the two blocks. The method further includes setting a boundary strength (Bs) for the boundary to a first value based on determining that at least one of the two blocks is predicted with the CIIP, and performing deblocking for the boundary between the first block and the second block based on the Bs to generate a modified reconstructed block for each of the first block and the second block. | 2020-12-24 |
20200404259 | EARLY INTRA CODING DECISION (PPH) - The present invention uses large intra block coding for uniform regions of the video by making an early decision on intra block coding based on DCT and DC calculations. This has been shown to increase the visual quality of uniform areas significantly, and by exploiting the possibility of parallel calculation, the extra processing cost for the early decision is insignificant. | 2020-12-24 |
20200404260 | INTERACTION BETWEEN IBC AND ATMVP - Devices, systems and methods for applying intra-block copy (IBC) in video coding are described. In general, methods for integrating IBC with existing motion compensation algorithms for video encoding and decoding are described. In a representative aspect, a method for video encoding using IBC includes determining whether a current block of the current picture is to be encoded using a motion compensation algorithm, and encoding, based on the determining, the current block by selectively applying an intra-block copy to the current block. In a representative aspect, another method for video encoding using IBC includes determining whether a current block of the current picture is to be encoded using an intra-block copy, and encoding, based on the determining, the current block by selectively applying a motion compensation algorithm to the current block. | 2020-12-24 |
20200404261 | METHODS AND APPARATUS FOR MULTIPLE LINE INTRA PREDICTION IN VIDEO COMPRESSION - A method and apparatus are provided comprising computer code configured to cause one or more hardware processors to perform intra prediction among a plurality of reference lines, to set a plurality of intra prediction modes for a zero reference line nearest to a current block of the intra prediction among non-zero reference lines, and to set one or more most probable modes for one of the non-zero reference lines. | 2020-12-24 |
20200404262 | IMAGE CODING METHOD, IMAGE DECODING METHOD, IMAGE CODING APPARATUS, AND IMAGE DECODING APPARATUS - An image coding method includes: generating a first flag indicating whether or not a motion vector predictor is to be selected from among one or more motion vector predictor candidates; generating a second flag indicating whether or not a motion vector predictor is to be selected from among the one or more motion vector predictor candidates in coding a current block to be coded in a predetermined coding mode, when the first flag indicates that a motion vector predictor is to be selected; and generating a coded signal in which the first flag and the second flag are included in header information, when the first flag indicates that a motion vector predictor is to be selected. | 2020-12-24 |
20200404263 | ADAPTIVE LOOP FILTER SIGNALLING - Example techniques are described for coding video data by obtaining a block of video data, obtaining an adaptive parameter set, determining a set of adaptive loop filter parameters for a plurality of filters for the block of video data based on the adaptive parameter set, wherein a plurality of adaptive loop parameters of the set of adaptive loop filter parameters are signaled using the same signaling parameter for each of the plurality of filters of the adaptive parameter set, and coding the block of video data using the set of adaptive loop filter parameters. The example techniques can be performed as part of an encoding or decoding process and/or by an encoder or a decoder. | 2020-12-24 |
20200404264 | METHOD AND IMAGE PROCESSING APPARATUS FOR VIDEO CODING - A method and an image processing apparatus for video coding are provided. The method is adapted to the image processing apparatus and includes following steps. A current coding unit is received, and filter selection is performed according to a size of the current coding unit. At least one selected filter is used to perform a filtering operation on a plurality of reference boundaries of the current coding unit to generate a plurality of filtering reference values. An interpolation operation is performed on the current coding unit according to the filtering reference values to generate a plurality of interpolated prediction values. | 2020-12-24 |
20200404265 | OFFSET DECODING DEVICE, OFFSET CODING DEVICE, IMAGE FILTERING DEVICE - An adaptive offset filter ( | 2020-12-24 |
20200404266 | SCALABLE VIDEO CODING USING DERIVATION OF SUBBLOCK SUBDIVISION FOR PREDICTION FROM BASE LAYER - Scalable video coding is rendered more efficient by deriving/selecting a subblock subdivision to be used for enhancement layer prediction, among a set of possible subblock subdivisions of an enhancement layer block by evaluating the spatial variation of the base layer coding parameters over the base layer signal. By this measure, less of the signalization overhead has to be spent on signaling this subblock subdivision within the enhancement layer data stream, if any. The subblock subdivision thus selected may be used in predictively coding/decoding the enhancement layer signal. | 2020-12-24 |
20200404267 | MOTION AND APPARATUS FOR MOTION FIELD STORAGE IN VIDEO CODING - The present disclosure provides method and apparatus for motion field storage in video coding. An exemplary method includes: determining whether a first uni-prediction motion vector for a first partition of a block and a second uni-prediction motion vector for a second partition of the block are from a same reference picture list; and in response to the first uni-prediction motion vector and the second uni-prediction motion vector being determined to be from the same reference picture list, storing, in a motion field of the block, one of the first uni-prediction motion vector and the second uni-prediction motion vector for a subblock located in a bi-predicted area of the block. | 2020-12-24 |
20200404268 | METHOD FOR SLICE, TILE AND BRICK SIGNALING - Methods and systems for decoding a picture. A method includes receiving a coded video stream including a picture partitioned into first sub-picture units, the first sub-picture units including one first sub-picture unit, and an additional first sub-picture unit including a first ordered second sub-picture unit, from among second sub-picture units of the additional first sub-picture unit, and a last ordered second sub-picture unit, from among the second sub-picture units of the additional first sub-picture unit. The method further includes decoding the picture, the decoding including obtaining index values of the first ordered second sub-picture unit and the last ordered second sub-picture unit of the additional first sub-picture unit, without the coded video stream explicitly signaling any of the index values and a difference value between the index values of the first ordered second sub-picture unit and the last ordered second sub-picture unit to the at least one processor. | 2020-12-24 |
20200404269 | METHOD FOR REGION-WISE SCALABILITY WITH ADAPTIVE RESOLUTION CHANGE - Systems and methods for coding and decoding are provided. A method includes receiving a coded video stream including a picture partitioned into a plurality of sub-pictures, and further including adaptive resolution change (ARC) information that is signaled directly within a header of a sub-picture from among the plurality of sub-pictures, or that is signaled directly within a parameter set without any of the ARC information within the parameter set being referenced in any header or other parameter set, or that is provided within the parameter set and referenced in the header; and adaptively changing resolution of the sub-picture based on the ARC information. | 2020-12-24 |
20200404270 | FLEXIBLE SLICE, TILE AND BRICK PARTITIONING - A method, computer program, and computer system is provided for partitioning encoded video data. Data corresponding to a video frame is received, and the video frame data may be divided into one or more subunits. These subunits may each have unique address values and be arranged in increasing order based on the unique address values. A left boundary and a top boundary associated with each of the subunits may include one or more of a picture boundary or a boundary of a previously decoded subunit. | 2020-12-24 |
20200404271 | ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD - An encoder that encodes a current block in a picture includes circuitry and memory. Using the memory, the circuitry: splits the current block into a first sub block, a second sub block, and a third sub block in a first direction, the second sub block being located between the first sub block and the third sub block; prohibits splitting the second sub block into two partitions in the first direction; and encodes the first sub block, the second sub block, and the third sub block. | 2020-12-24 |
20200404272 | ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD - An encoder includes circuitry and memory. Using the memory, the circuitry: obtains a prediction image of a current block to be encoded included in the video by one of intra prediction and inter prediction; generates a difference between an image of the current block and the prediction image as a prediction error signal of the current block; selects a transform basis to be used to transform the prediction error signal from among a plurality of transform bases; generates a transform coefficient signal of the current block by transforming the prediction error signal using the transform basis; encodes the transform coefficient signal; and encodes an index value associated with the transform basis among a plurality of index values associated with the plurality of transform bases in a common correspondence relationship between when the prediction image is obtained by intra prediction and when the prediction image is obtained by inter prediction. | 2020-12-24 |
20200404273 | TRANSFORMS FOR LARGE VIDEO AND IMAGE BLOCKS - Improved transforms are used to encode and decode large video and image blocks. During encoding, a prediction residual block having a large size (e.g., larger than 32×32) is generated. The pixel values of the prediction residual block are transformed to produce transform coefficients. After determining that the transform coefficients exceed a threshold cardinality representative of a maximum transform block size (e.g., 32×32), a number of the transform coefficients are discarded such that a remaining number of transform coefficients does not exceed the threshold cardinality. A transform block is then generated using the remaining number. During decoding, after determining that the transform coefficients exceed the threshold cardinality, a number of new coefficients are added to the transform coefficients such that a total number of transform coefficients exceeds the threshold cardinality. The transform coefficients are then inverse transformed into a prediction residual block having a large size. | 2020-12-24 |
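The discard-and-pad scheme above has a very simple shape: the encoder keeps only the first (lowest-frequency) coefficients up to the maximum transform block size, and the decoder appends zeros before the inverse transform. The sketch below uses 1-D lists and small sizes for clarity; `encode_truncate`/`decode_pad` are hypothetical names, not the actual codec functions:

```python
def encode_truncate(coeffs, max_count=32 * 32):
    """Encoder: drop trailing (highest-frequency) transform coefficients
    so the coded transform block never exceeds the maximum size."""
    return coeffs[:max_count]

def decode_pad(coeffs, full_count):
    """Decoder: re-append zero coefficients for the discarded ones
    before inverse-transforming the full-size prediction residual."""
    return coeffs + [0] * (full_count - len(coeffs))
```

Dropping the tail rather than arbitrary coefficients is the natural choice because, after a DCT-like transform, the energy of a smooth prediction residual concentrates in the low-frequency (leading) coefficients.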
20200404274 | INTRA-PREDICTION MODE FOR SCREEN CONTENT CODING OF VIDEO CODING - An example video decoder is configured to decode a first value for a first syntax element of a current block of video data, the first value indicating that the current block is encoded using an intra-prediction mode; after decoding the first value, decode a second value for a second syntax element of the current block, the second value indicating that the current block is encoded using intra mapping mode. In the intra mapping mode, the video decoder is configured to generate a prediction block for the current block using the intra-prediction mode and decode a residual block. To decode the residual block, the video decoder is configured to determine predictors for quantized residual values of the residual block and map decoded mapping values to the quantized residual values using the predictors. The video decoder combines the prediction block with the residual block to decode the current block. | 2020-12-24 |
20200404275 | Methods and Apparatuses of Video Encoding or Decoding with Adaptive Quantization of Video Data - Video encoding or decoding methods and apparatuses for processing video data with color components comprise receiving video data, performing inter prediction and intra prediction, determining a luma Quantization Parameter (QP), deriving a chroma QP bias, calculating a chroma QP from the chroma QP bias and the luma QP, performing transform or inverse transform, and performing quantization or inverse quantization for the luma component utilizing the luma QP and for the chroma component utilizing the chroma QP. The chroma QP bias is derived from an intermediate QP index, and the intermediate QP index is computed by clipping a sum of the luma QP and a chroma QP offset parameter to a specified range. The bits allocated to code the luma and chroma components may be adaptively controlled by restricting the chroma QP bias. | 2020-12-24 |
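The chroma-QP derivation above (clip the sum of the luma QP and a chroma QP offset to a specified range to get an intermediate QP index, then apply a bias) can be sketched as follows. The clipping range and the bias rule here are assumptions for illustration, not the patent's tables, and `chroma_qp` is a hypothetical name:

```python
def chroma_qp(luma_qp, chroma_qp_offset, qp_min=0, qp_max=63):
    """Derive a chroma QP: clip luma QP + offset to [qp_min, qp_max]
    (the intermediate QP index), then apply a QP-dependent bias."""
    qp_index = max(qp_min, min(qp_max, luma_qp + chroma_qp_offset))
    # Assumed bias rule: at high QPs, quantize chroma slightly more
    # finely than luma to shift bits toward the chroma components.
    bias = -1 if qp_index > 30 else 0
    return qp_index + bias
```

Because the bias is a function of the clipped index, restricting the bias (as the abstract notes) directly bounds how far the chroma bit allocation can drift from the luma allocation.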
20200404276 | CONTEXT MODELING FOR LOW-FREQUENCY NON-SEPARABLE TRANSFORMATION SIGNALING FOR VIDEO CODING - An example method includes determining a color component of a unit of video data; determining, based at least on the color component, a context for context-adaptive binary arithmetic coding (CABAC) a syntax element that specifies a value of a low-frequency non-separable transform (LFNST) index for the unit of video data; CABAC decoding, based on the determined context and via a syntax structure for the unit of video data, the syntax element that specifies the value of the LFNST index for the unit of video data; and inverse-transforming, based on a transform indicated by the value of the LFNST index, transform coefficients of the unit of video data. | 2020-12-24 |
20200404277 | Reducing Context Coded and Bypass Coded Bins to Improve Context Adaptive Binary Arithmetic Coding (CABAC) Throughput - Techniques for context-adaptive binary arithmetic coding (CABAC) coding with a reduced number of context coded and/or bypass coded bins are provided. Rather than using only truncated unary binarization for the syntax element representing the delta quantization parameter and context coding all of the resulting bins as in the prior art, a different binarization is used and only part of the resulting bins are context coded, thus reducing the worst case number of context coded bins for this syntax element. Further, binarization techniques for the syntax element representing the remaining actual value of a transform coefficient are provided that restrict the maximum codeword length of this syntax element to 32 bits or less, thus reducing the number of bypass coded bins for this syntax element over the prior art. | 2020-12-24 |
20200404278 | METHOD AND SYSTEM FOR PROCESSING LUMA AND CHROMA SIGNALS - The present disclosure provides systems and methods for processing video content. The method can include: receiving data representing a first block and a second block in a picture, the data comprising a plurality of chroma samples associated with the first block and a plurality of luma samples associated with the second block; determining an average value of the plurality of luma samples associated with the second block; determining a chroma scaling factor for the first block based on the average value; and processing the plurality of chroma samples associated with the first block using the chroma scaling factor. | 2020-12-24 |
20200404279 | SIGNALING FOR REFERENCE PICTURE RESAMPLING - A method, device, and non-transitory computer-readable medium for decoding an encoded video bitstream using at least one processor, including obtaining a coded picture from the encoded video bitstream; decoding the coded picture to generate a decoded picture; obtaining a first flag indicating whether reference picture resampling is enabled; obtaining a second flag indicating whether reference pictures have a constant reference picture size; obtaining a third flag indicating whether output pictures have a constant output picture size indicated in the encoded video bitstream; generating a reference picture by resampling the decoded picture to have the constant reference picture size, and storing the reference picture in a decoded picture buffer; and generating an output picture by resampling the decoded picture to have the constant output picture size, and outputting the output picture. | 2020-12-24 |
20200404280 | LAYERED RANDOM ACCESS WITH REFERENCE PICTURE RESAMPLING - A method of decoding an encoded video bitstream using at least one processor, including obtaining a coded base layer picture and a coded enhancement layer picture included in an LRA access unit; determining whether a random access occurs at the LRA access unit; based on the random access not occurring at the LRA access unit, generating a reconstructed base layer picture by reconstructing the coded base layer picture, and generating a reconstructed enhancement layer picture by reconstructing the coded enhancement layer picture using the reconstructed base layer picture and a previously reconstructed picture; based on the random access occurring at the LRA access unit, generating the reconstructed base layer picture by reconstructing the coded base layer picture, and generating the reconstructed enhancement layer picture by upsampling the reconstructed base layer picture; and outputting the reconstructed enhancement layer picture. | 2020-12-24 |
20200404281 | DC INTRA MODE PREDICTION IN VIDEO CODING - The disclosure describes examples for determining samples to use for DC intra mode prediction, such as where the samples are in a row or column that is not immediately above or immediately left of the current block. The samples may be aligned with the current block such that a last sample in the samples in a row above the current block is in the same column as the last column of the current block and such that a last sample in the samples in a column left of the current block is in the same row as the last row of the current block. | 2020-12-24 |
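DC intra mode predicts every sample of a block from a single average of selected reference samples; the novelty claimed above lies in which row and column those samples are drawn from. A toy sketch of the averaging step itself (the sample selection is simplified to two flat lists, and the rounded integer mean is a common convention rather than a detail from the application):

```python
def dc_predict(top_samples, left_samples):
    # DC intra prediction: every sample of the block is predicted as
    # the mean of the selected reference samples above and to the left.
    refs = list(top_samples) + list(left_samples)
    # Rounded integer mean, as integer-only codecs typically compute it.
    return (sum(refs) + len(refs) // 2) // len(refs)
```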
20200404282 | LIC SIGNALING METHODS - A method may include: receiving information regarding a current data block of an image; determining whether Local Illumination Compensation (LIC) is applicable for the current data block; based on determining that the LIC is applicable for the current data block, at least one of: inferring an LIC flag for the current data block to be 1 or true corresponding to the LIC being enabled, or inheriting the current block's LIC flag from an LIC flag of a neighboring block; and based on the LIC flag for the current data block corresponding to the LIC being enabled, generating a prediction of at least one sub-block with a derived motion vector by applying LIC to the current data block using the inherited LIC flag. | 2020-12-24 |
20200404283 | INCREASING DECODING THROUGHPUT OF INTRA-CODED BLOCKS - A method of decoding video data includes determining, by one or more processors implemented in circuitry, a picture size of a picture. The picture size applies a picture size restriction to set a width of the picture and a height of the picture to each be a respective multiple of a maximum of 8 and a minimum coding unit size for the picture. The method further includes determining, by the one or more processors, a partition of the picture into a plurality of blocks and generating, by the one or more processors, a prediction block for a block of the plurality of blocks. The method further includes decoding, by the one or more processors, a residual block for the block and combining, by the one or more processors, the prediction block and the residual block to decode the block. | 2020-12-24 |
20200404284 | METHOD AND APPARATUS FOR MOTION VECTOR REFINEMENT - The present disclosure provides a method and an apparatus for motion vector refinement. An exemplary method includes: determining a plurality of first blocks associated with a first motion vector and a plurality of second blocks associated with a second motion vector; determining a sum of absolute transformed difference (SATD) between one of the plurality of first blocks and one of the plurality of second blocks; and refining the first motion vector and the second motion vector based on the determined SATDs. | 2020-12-24 |
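SATD is commonly computed by applying a small orthogonal transform, often a Hadamard transform, to the difference between two blocks and summing the absolute transformed values. A 4x4 sketch under that assumption (the transform choice and block size are illustrative; the application's actual refinement loop over candidate block pairs is not shown):

```python
def satd_4x4(block_a, block_b):
    # Difference between the two candidate prediction blocks.
    d = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(block_a, block_b)]
    # 4x4 Hadamard matrix (symmetric, so no transpose is needed below).
    h = [[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]]

    def matmul(x, y):
        return [[sum(x[i][k] * y[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]

    # Transform the difference block: H * D * H.
    t = matmul(matmul(h, d), h)
    # SATD is the sum of absolute transformed differences.
    return sum(abs(v) for row in t for v in row)
```

Refinement would evaluate this cost for each (first block, second block) pair and keep the motion vectors of the pair with the lowest SATD.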
20200404285 | UPDATE OF LOOK UP TABLE: FIFO, CONSTRAINED FIFO - A method of video processing is provided to include maintaining one or more tables, wherein each table includes one or more motion candidates and each motion candidate is associated with corresponding motion information; performing a conversion between a current block and a bitstream representation of a video including the current block by using motion information in a table; and updating, after performing of the conversion, one or more tables based on M sets of additional motion information associated with the current block, M being an integer. | 2020-12-24 |
20200404286 | BINARIZATION IN TRANSFORM SKIP RESIDUAL CODING - A video decoder can be configured to determine that a block of video data is encoded without transforming residual data for the block; determine a quantization parameter for the block of video data; based on the determined quantization parameter, determine a range for levels of quantized residual values of the block; divide the range into k intervals, wherein k is an integer value; determine a level for a quantized residual value of the block based on the k intervals by receiving information indicating the level for the quantized residual value is within a particular interval of the k intervals, receiving information indicating a difference value that represents a difference between a reference level value for the particular interval and the level for the quantized residual value of the block, and based on the reference level value and the difference value, determining the level for the quantized residual value. | 2020-12-24 |
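The interval-based level decoding described above can be illustrated as follows, assuming k equal-width intervals whose lower bound serves as the reference level; the actual interval layout and choice of reference value are not specified in the abstract:

```python
def decode_level(interval_index, diff, k, level_range):
    # Divide the QP-derived level range into k equal intervals.
    interval_size = level_range // k
    # The reference level for an interval is taken as its lower bound
    # (an illustrative assumption).
    reference_level = interval_index * interval_size
    # The decoded level is the reference plus the signaled difference.
    return reference_level + diff
```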
20200404287 | CONVERSION OF DECODED BLOCK VECTOR FOR INTRA PICTURE BLOCK COMPENSATION - Methods, apparatuses, and non-transitory computer-readable mediums are provided. A coded video bitstream including a current picture is received. A determination is made as to whether a current block in a current coding tree unit (CTU) included in the current picture is coded in intra block copy (IBC) mode based on a flag included in the coded video bitstream. In response to the current block being determined as coded in IBC mode, a block vector that points to a first reference block of the current block is determined; an operation is performed on the block vector so that when the first reference block is not fully reconstructed or not within a valid search range of the current block, the block vector is modified to point to a second reference block that is in a fully reconstructed region and within the valid search range of the current block; and the current block is decoded based on the modified block vector. | 2020-12-24 |
20200404288 | PRESERVING IMAGE QUALITY IN TEMPORALLY COMPRESSED VIDEO STREAMS - When a temporally compressed video stream is decoded and subsequently re-encoded, quality is typically lost. The quality loss may be mitigated using information about how the source video stream was encoded during the re-encoding process. According to some aspects of the disclosure, this mitigation of quality loss can be facilitated by decoders that output such information and encoders that receive such information. These decoders and encoders may be separate devices. The functionality of these decoders and encoders may also be combined in a single device, such as a transcoding device. An example of the information that may be used during re-encoding is whether each portion of the original stream was intra-coded or non-intra-coded. | 2020-12-24 |
20200404289 | UNIFIED INTRA BLOCK COPY AND INTER PREDICTION MODES - Innovations in unified intra block copy (“BC”) and inter prediction modes are presented. In some example implementations, bitstream syntax, semantics of syntax elements and many coding/decoding processes for inter prediction mode are reused or slightly modified to enable intra BC prediction for blocks of a frame. For example, to provide intra BC prediction for a current block of a current picture, a motion compensation process applies a motion vector that indicates a displacement within the current picture, with the current picture being used as a reference picture for the motion compensation process. With this unification of syntax, semantics and coding/decoding processes, various coding/decoding tools designed for inter prediction mode, such as advanced motion vector prediction, merge mode and skip mode, can also be applied when intra BC prediction is used, which simplifies implementation of intra BC prediction. | 2020-12-24 |
20200404290 | VIDEO ENCODING APPARATUS AND VIDEO ENCODING METHOD - A video encoding apparatus and a video encoding method are provided. The video encoding apparatus comprises an encoding circuit and a region of interest (ROI) determination circuit. The encoding circuit performs a video encoding operation on an original video frame to generate an encoded video frame. The encoding information is generated by the video encoding operation during an encoding process. The ROI determination circuit reuses the encoding information generated by the video encoding operation to identify one or more ROI objects according to an initial ROI and generates one or more dynamic ROIs for tracking the one or more ROI objects within a current video frame for any one of a plurality of sequential video frames following the original video frame. | 2020-12-24 |
20200404291 | VIDEO ENCODING APPARATUS AND VIDEO ENCODING METHOD - A video encoding apparatus and a video encoding method are provided. The video encoding apparatus comprises an encoding circuit and a region of interest (ROI) determination circuit. The encoding circuit performs a video encoding operation on an original video frame to generate an encoded video frame. The encoding information is generated by the video encoding operation during an encoding process. The ROI determination circuit reuses the encoding information generated by the video encoding operation to identify one or more ROI objects according to an initial ROI and generates one or more dynamic ROIs for tracking the one or more ROI objects within a current video frame for any one of a plurality of sequential video frames following the original video frame. | 2020-12-24 |
20200404292 | PARAMETERIZATION FOR FADING COMPENSATION - Techniques and tools for performing fading compensation in video processing applications are described. For example, during encoding, a video encoder performs fading compensation using fading parameters comprising a scaling parameter and a shifting parameter on one or more reference images. During decoding, a video decoder performs corresponding fading compensation on the one or more reference images. | 2020-12-24 |
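Fading compensation of this scale-and-shift form can be sketched as below. The 6-bit fixed-point scaling is one common convention for weighted-prediction-style operations, assumed here rather than stated in the abstract, and the clipping to the sample range is likewise an illustrative detail:

```python
def fade_compensate(reference, scale, shift, bit_depth=8):
    # Apply the fading parameters (scale, shift) to every reference
    # sample, then clip to the valid sample range for the bit depth.
    # scale is assumed to be in 6-bit fixed point (64 == unity gain).
    max_val = (1 << bit_depth) - 1
    return [max(0, min(max_val, (scale * s >> 6) + shift)) for s in reference]
```

The encoder would estimate (scale, shift) per reference image so the compensated reference better matches the faded current frame; the decoder applies the same parameters before prediction.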
20200404293 | EFFECTIVE PREDICTION USING PARTITION CODING - The way of predicting a current block by assigning constant partition values to the partitions of a bi-partitioning of a block is quite effective, especially in case of coding sample arrays such as depth/disparity maps where the content of these sample arrays is mostly composed of plateaus or simple connected regions of similar value separated from each other by steep edges. The transmission of such constant partition values would, however, still need a considerable amount of side information which should be avoided. This side information rate may be further reduced if mean values of values of neighboring samples associated or adjoining the respective partitions are used as predictors for the constant partition values. | 2020-12-24 |
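The predictor described above, the mean of the neighboring samples adjoining each partition, can be sketched as follows; the flat-list representation of the neighbor samples and their partition assignments is a simplification for illustration:

```python
def predict_partition_values(neighbor_samples, partition_of):
    # Predict each partition's constant value as the integer mean of
    # the neighboring samples assigned to (adjoining) that partition.
    sums, counts = {}, {}
    for sample, part in zip(neighbor_samples, partition_of):
        sums[part] = sums.get(part, 0) + sample
        counts[part] = counts.get(part, 0) + 1
    return {p: sums[p] // counts[p] for p in sums}
```

Only the (typically small) residual between each true constant partition value and its predicted mean then needs to be transmitted, reducing the side information rate.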
20200404294 | ADAPTIVE PARTITION CODING - Although wedgelet-based partitioning seems to represent a better tradeoff between side information rate on the one hand and achievable variety in partitioning possibilities on the other hand, compared to contour partitioning, the ability to alleviate the constraints of the partitioning to the extent that the partitions have to be wedgelet partitions, enables applying relatively simple statistical analysis onto overlaid spatially sampled texture information in order to derive a good predictor for the bi-segmentation in a depth/disparity map. Thus, in accordance with a first aspect, it is exactly this increase in freedom which alleviates the signaling overhead, provided that co-located texture information in the form of a picture is present. Another aspect pertains to the possibility to save side information rate involved with signaling a respective coding mode supporting irregular partitioning. | 2020-12-24 |
20200404295 | Scalable Prediction Type Coding - A method for encoding a video sequence is provided that includes signaling in the compressed bit stream that a subset of a plurality of partitioning modes is used for inter-prediction of a portion of the video sequence, using only the subset of partitioning modes for prediction of the portion of the video sequence, and entropy encoding partitioning mode syntax elements corresponding to the portion of the video sequence, wherein at least one partitioning mode syntax element is binarized according to a pre-determined binarization corresponding to the subset of partitioning modes, wherein the pre-determined binarization differs from a pre-determined binarization for the at least one partitioning mode syntax element that would be used if the plurality of partitioning modes were used for inter-prediction. | 2020-12-24 |
20200404296 | METHOD AND A DEVICE FOR PICTURE ENCODING AND DECODING - A decoding method is disclosed. The decoding method comprises: | 2020-12-24 |
20200404297 | ADAPTIVE RESOLUTION CHANGE IN VIDEO PROCESSING - The present disclosure provides systems and methods for performing adaptive resolution change during video encoding and decoding. The methods include: comparing resolutions of a target picture and a first reference picture; in response to the target picture and the first reference picture having different resolutions, resampling the first reference picture to generate a second reference picture; and encoding or decoding the target picture using the second reference picture. | 2020-12-24 |
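The compare-then-resample step above can be sketched as follows, with nearest-neighbour interpolation standing in for whatever resampling filter the method actually uses (real codecs apply multi-tap interpolation filters; the picture-as-nested-lists representation is likewise illustrative):

```python
def maybe_resample(reference, target_w, target_h):
    # If the reference picture already matches the target picture's
    # resolution, use it directly for prediction.
    ref_h, ref_w = len(reference), len(reference[0])
    if (ref_w, ref_h) == (target_w, target_h):
        return reference
    # Otherwise resample it to the target resolution first
    # (nearest-neighbour here for brevity).
    return [[reference[y * ref_h // target_h][x * ref_w // target_w]
             for x in range(target_w)] for y in range(target_h)]
```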
20200404298 | SYSTEM AND METHOD FOR VIDEO CODING - An encoder includes circuitry and memory coupled to the circuitry. The circuitry determines whether a first virtual pipeline decoding unit (VPDU) is split into smaller blocks and whether a second VPDU is split into smaller blocks. In response to a determination the first VPDU is not split into smaller blocks and a determination the second VPDU is split into smaller blocks, prediction residuals of chroma samples of a block are not scaled based on prediction residuals of luma samples. In response to a determination the first VPDU is split into smaller blocks and the determination the second VPDU is split into smaller blocks, prediction residuals of chroma samples of the block are scaled based on prediction residuals of luma samples. In response to the determination the first VPDU is not split into smaller blocks and a determination the second VPDU is not split into smaller blocks, prediction residuals of chroma samples of the block are scaled based on prediction residuals of luma samples. The block is encoded based on the prediction residuals of chroma samples. | 2020-12-24 |
20200404299 | Method and Apparatus for Image and Video Coding Using Hierarchical Sample Adaptive Band Offset - A method and apparatus for image coding using hierarchical sample adaptive band offset. The method includes decoding a signal of a portion of an image, determining a band offset type and offset of a portion of the image, utilizing the band offset type and offset to determine a sub-band, and reconstructing a pixel value according to the determined offset value. | 2020-12-24 |
20200404300 | DECODED PICTURE BUFFER INDEXING - A video decoder is configured to remove pictures from a decoded picture buffer based on the value of an explicitly coded syntax element. A video decoder may be configured to decode a syntax element indicating a picture to remove from a decoded picture buffer, and remove the first picture from the DPB. The video decoder may then decode a current picture, and store the decoded current picture in the DPB. | 2020-12-24 |
20200404301 | RULES FOR INTRA-PICTURE PREDICTION MODES WHEN WAVEFRONT PARALLEL PROCESSING IS ENABLED - Various innovations facilitate the use of intra-picture prediction modes such as palette prediction mode, intra block copy mode, intra line copy mode and intra string copy mode by an encoder or decoder when wavefront parallel processing (“WPP”) is enabled. For example, for a palette coding/decoding mode, an encoder or decoder predicts a palette for an initial unit in a current WPP row of a picture using previous palette data from a previous unit in a previous WPP row of the picture. Or, as another example, for an intra copy mode (e.g., intra block copy mode, intra string copy mode, intra line copy mode), an encoder enforces one or more constraints attributable to the WPP, or a decoder receives and decodes encoded data that satisfies one or more constraints attributable to WPP. | 2020-12-24 |