5th week of 2021 patent application highlights, part 74
Patent application number | Title | Published
20210037178SYSTEMS AND METHODS FOR ADJUSTING FOCUS BASED ON FOCUS TARGET INFORMATION - A system, method, and computer program product are provided for generating a focus sweep to produce a focus stack. In use, an image is sampled as image data. Next, a first focus region is identified and a second focus region is identified. Next, first focus target information corresponding to the first focus region is determined and second focus target information corresponding to the second focus region is determined. Further, a focus is adjusted, based on the first focus target information and at least one first image is captured based on the first focus target information. Additionally, the focus is adjusted, based on the second focus target information and at least one second image is captured based on the second focus target information. Lastly, the at least one first image and the at least one second image are saved to an image stack. Additional systems, methods, and computer program products are also presented.2021-02-04
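As a rough illustration of the capture sequence described in this abstract, the Python sketch below steps through two focus regions, adjusts focus for each, and appends each capture to a stack. The camera interface (sample_image, focus_target_for, set_focus, capture) is hypothetical and stands in for whatever hardware API a real implementation would expose.

```python
# Minimal sketch of the focus-sweep sequence; the camera object is assumed.
from dataclasses import dataclass, field
from typing import List


@dataclass
class FocusRegion:
    x: int
    y: int
    w: int
    h: int


@dataclass
class ImageStack:
    images: List[bytes] = field(default_factory=list)

    def save(self, image: bytes) -> None:
        self.images.append(image)


def capture_focus_stack(camera, regions: List[FocusRegion]) -> ImageStack:
    """Adjust focus once per identified region and save each capture to the stack."""
    stack = ImageStack()
    preview = camera.sample_image()                        # image sampled as image data
    for region in regions:                                 # e.g. the first and second focus regions
        target = camera.focus_target_for(preview, region)  # focus target information per region
        camera.set_focus(target)                           # adjust focus based on that target
        stack.save(camera.capture())                       # capture and save to the image stack
    return stack
```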
20210037179SCALED PERSPECTIVE ZOOM ON RESOURCE CONSTRAINED DEVICES - A dolly zoom effect can be applied to one or more images captured via a resource-constrained device (e.g., a mobile smartphone) by manipulating the size of a target feature while the background in the one or more images changes due to physical movement of the resource-constrained device. The target feature can be detected using facial recognition or shape detection techniques. The target feature can be resized before the size is manipulated as the background changes (e.g., changes perspective).2021-02-04
20210037180PHOTOGRAPHY DEVICE STATUS FEEDBACK METHOD, PHOTOGRAPHY DEVICE, AND PORTABLE ELECTRONIC DEVICE - A status feedback method includes obtaining a feedback instruction for instructing to feed back a status of a photography device and outputting status information of the photography device by sound according to the feedback instruction.2021-02-04
20210037181IMAGE CAPTURE CONTROL APPARATUS AND IMAGE CAPTURE CONTROL METHOD - An image capture control apparatus includes a reception unit configured to receive an instruction for ending recording of a moving image; and a control unit configured to, when the instruction for ending is issued, perform control, in a case of a first shooting mode in which an image is displayed in a state being visible from a subject, to display an item for deleting a portion at a beginning or at an end of the moving image, in response to the instruction for ending, and to record the moving image to a recording medium in a state where the portion of the moving image has been deleted, and perform control, in a case of a second shooting mode, to not display the item, even when the instruction for ending has been given.2021-02-04
20210037182ELECTRONIC DEVICE FOR COMPRESSING IMAGE BY USING COMPRESSION ATTRIBUTE GENERATED IN IMAGE ACQUISITION PROCEDURE USING IMAGE SENSOR, AND OPERATING METHOD THEREOF - An electronic device according to various embodiments may comprise: a communication module; an image sensor; a control circuit electrically connected to the image sensor, acquiring a first image and a second image in order by using the image sensor, compressing the first image according to a first compression scheme by using attribute information generated in relation to an operation of compressing an image acquired before acquisition of the first image, compressing the second image according to the first compression scheme by using first attribute information generated in relation to an operation of compressing the first image, and generating second attribute information in relation to an operation of compressing the second image, and a processor electrically connected to the control circuit and the communication module, wherein the processor is configured to: acquire, from the control circuit, the first image compressed by the first compression scheme and the second image compressed by the first compression scheme, decompress the first image compressed by the first compression scheme and the second image compressed by the first compression scheme, compress the decompressed first image by a second compression scheme by using the first attribute information, compress the decompressed second image by the second compression scheme by using the second attribute information, and transmit the first image compressed by the second compression scheme and the second image compressed by the second compression scheme, to an external device by using the communication module. Various other embodiments are possible.2021-02-04
20210037183MONOCENTRIC MULTISCALE (MMS) CAMERA HAVING ENHANCED FIELD OF VIEW - Disclosed are various arrangements of monocentric multiscale imaging systems and cameras that advantageously exhibit an enhanced field of view (FoV). Illustrative examples of such systems include a 360° ring FoV MMS lens that advantageously captures an approximately 500-megapixel image from a circular ring area. Additionally, by varying microcamera imaging channel configurations, we disclose a multi-focal design that advantageously can range from 15 mm to 40 mm providing coverage of a scene with widely different imaging magnifications. Finally, additional illustrative configurations combine multiple MMS systems such that an arbitrary solid angle in 4π space is covered.2021-02-04
20210037184WIDE AREA IMAGING SYSTEM AND METHOD - The present invention provides a new and useful paradigm in wide area imaging, in which wide area imaging is provided by a step/dwell/image capture process and system to capture images and produce from the captured images a wide area image. The image capture is by a sensor that has a predetermined image field and provides image capture at a predetermined frame capture rate, and by a motorized step and dwell sequence of the sensor, where image capture is during a dwell, and the step and dwell sequence of the sensor is synchronized with the image capture rate of the sensor.2021-02-04
20210037185INFORMATION PROCESSING APPARATUS THAT PERFORMS ARITHMETIC PROCESSING OF NEURAL NETWORK, AND IMAGE PICKUP APPARATUS, CONTROL METHOD, AND STORAGE MEDIUM - An information processing apparatus includes a processor that performs arithmetic processing of a neural network, and a controller that is capable of setting, in the processor, a first inference parameter to be applied to processing for shooting control for shooting an image and a second inference parameter to be applied to processing for processing control of the image. The controller switches the first inference parameter having been set in the processor to the second inference parameter in response to settlement of a focus target.2021-02-04
20210037186IMAGING DEVICE, SOLID-STATE IMAGING ELEMENT, CAMERA MODULE, DRIVE CONTROL UNIT, AND IMAGING METHOD - The present disclosure relates to an imaging device, a solid-state imaging element, a camera module, a drive control unit, and an imaging method for enabling reliable correction of an influence of a motion on an image. A state determination unit determines a state of a motion of an imaging unit that performs imaging to acquire an image via an optical system that collects light, and an exposure control unit performs at least control for an exposure time of the imaging unit according to a determination result by the state determination unit. Then, relative driving for the optical system or the imaging unit is performed to optically correct a blur appearing in the image according to an exposure period of one frame by the exposure time, and driving for resetting a relative positional relationship between the optical system and the imaging unit, the relative positional relationship being caused during the exposure period, is performed according to a non-exposure period in which exposure is not performed between the frames. The present technology can be applied to, for example, a stacked CMOS image sensor.2021-02-04
20210037187IMAGE CAPTURE DEVICE WITH EXTENDED DEPTH OF FIELD - An image capture device having a first integrated sensor lens assembly (ISLA), a second ISLA, and an image processor is disclosed. The first and second ISLAs may each include a respective optical element that have different depths of field. The first and second ISLAs may each include a respective image sensor configured to capture respective images. The image processor may be electrically coupled to the first ISLA and the second ISLA. The image processor may be configured to obtain a focused image based on a first image and a second image. The focused image may have an extended depth of field. The extended depth of field may be based on the depth of field of each respective optical element.2021-02-04
20210037188DISPLAY CONTROL APPARATUS AND METHOD FOR CONTROLLING THE SAME - If an on/off setting of an on-screen display (OSD) on a liquid crystal display (LCD) panel (“LCD OSD” setting) is changed from on to off on an “LCD OSD” setting screen of a menu screen, a system control unit stores a flag A indicating that the “LCD OSD” setting is changed to off into a system memory. If a menu end operation is made and the flag A is stored, the system control unit changes the “LCD OSD” setting to off and ends the menu screen. The setting of the information display on the LCD panel can thereby be made with high operability.2021-02-04
20210037189ELECTRONIC DEVICE FOR RECEIVING LINE OF SIGHT INPUT, METHOD OF CONTROLLING ELECTRONIC DEVICE, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - An electronic device comprising: a control unit configured to perform control such that, in case a first option group is displayed in a state incapable of receiving option selection by a line of sight input, the first option group is moved, and a second option group that includes a plurality of options including one or more options included in the first option group and one or more options not included in the first option group is displayed, in accordance with an operation on an operation member for selecting any one option of the first option group, and in case a third option group is displayed in a state capable of receiving the option selection by the line of sight input, any one option of the third option group is selected without moving the third option group, in accordance with the line of sight input.2021-02-04
20210037190IMAGE CAPTURING APPARATUS, METHOD OF CONTROLLING THE SAME, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM - An apparatus having a focusing unit for focusing in relation to an area within an angle of view, detects, from a captured image, an object area in which a specific object is present, calculates a depth width for the object area based on depth information indicating a defocus distance to an object for each portion obtained by dividing the image, and controls a display so as to, if the depth width is greater than or equal to a predetermined value, superimposedly display, on the image, an object detection frame for indicating the object area and an AF frame for indicating an AF location that the focusing unit uses, and, if the depth width is less than a predetermined value, superimposedly display the object detection frame, and not superimposedly display the AF frame.2021-02-04
20210037191ELECTRONIC DEVICE FOR SELECTIVELY COMPRESSING IMAGE DATA ACCORDING TO READ OUT SPEED OF IMAGE SENSOR, AND METHOD FOR OPERATING SAME - Embodiments disclosed in the present document relate to a method and an apparatus for synthesizing images. An electronic device according to various embodiments of the present invention comprises: an image sensor; an image processing processor; and a control circuit which is electrically connected to the image sensor through a first designated interface and to the image processing processor through a second designated interface. The control circuit may be configured to: when the read out speed of the image sensor is set to a first designated speed, receive, through the first designated interface, first image data that has been obtained by using the image sensor and has not been compressed by the image sensor; transfer the first image data to the image processing processor through the second designated interface; when the read out speed of the image sensor is set to a second designated speed, receive, through the first designated interface, second image data being obtained by using the image sensor and being compressed by the image sensor; decompress the compressed second image data by means of the control circuit; and transfer the decompressed second image data to the image processing processor through the second designated interface.2021-02-04
20210037192VIDEO DISPLAY APPARATUS, VIDEO DISPLAY METHOD, AND VIDEO SIGNAL PROCESSING APPARATUS - A video display apparatus includes: an inputter that receives an input of divided video signals representing divided images obtained by dividing an output image and acquires signal information for each of the divided video signals; a video signal processor that applies processing to the divided video signals and generates an output video signal representing an image obtained by combining the divided images; a controller that acquires the signal information from the inputter and supplies a control signal relating to the processing to the video signal processor; and a display that displays the image represented by the output video signal, and the number of kinds of signal information acquired by the controller with respect to some divided video signals among the divided video signals is greater than the number of kinds of signal information acquired by the controller with respect to the other divided video signals.2021-02-04
20210037193METHOD AND APPARATUS FOR GENERATING THREE-DIMENSIONAL PARTICLE EFFECT, AND ELECTRONIC DEVICE - Disclosed are a method and apparatus for generating a three-dimensional particle effect, an electronic device and a computer readable storage medium. The method comprises: receiving a three-dimensional particle resource package; parsing a configuration file of the three-dimensional particle resource package; displaying parameter configuration items corresponding to the configuration file on the display apparatus, the parameter configuration items at least comprising a three-dimensional particle system parameter configuration item, a three-dimensional particle emitter parameter configuration item, and a three-dimensional particle affector parameter configuration item; receiving a parameter configuration command to perform a parameter configuration for the above parameter configuration items; generating the three-dimensional particle effect according to parameters configured in the parameter configuration items.2021-02-04
20210037194CONVERSION BETWEEN ASPECT RATIOS IN CAMERA - A camera system captures an image in a source aspect ratio and applies a transformation to the input image to scale and warp the input image to generate an output image having a target aspect ratio different than the source aspect ratio. The output image has the same field of view as the input image, maintains image resolution, and limits distortion to levels that do not substantially affect the viewing experience. In one embodiment, the output image is non-linearly warped relative to the input image such that a distortion in the output image relative to the input image is greater in a corner region of the output image than a center region of the output image.2021-02-04
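The non-linear warp can be pictured with a short NumPy sketch: the output spans the same field of view as the input, but a cubic remapping keeps the center at roughly unit magnification and pushes the extra stretch toward the sides. The cubic curve and nearest-neighbor resampling are illustrative choices, not the transformation used by any particular camera.

```python
# Center-preserving horizontal warp that converts aspect ratio without cropping.
import numpy as np


def warp_aspect_ratio(img: np.ndarray, out_width: int) -> np.ndarray:
    """Resample img (H x W x C) to out_width columns with a cubic, center-preserving warp."""
    h, w = img.shape[:2]
    r = out_width / w                        # overall horizontal stretch factor
    assert r < 1.5, "this cubic curve stays monotonic only for modest stretch factors"

    u = np.linspace(-1.0, 1.0, out_width)    # normalized output column positions
    s = r * u + (1.0 - r) * u ** 3           # source position: slope r at center => no center stretch
    src_cols = np.clip(((s + 1.0) * 0.5) * (w - 1), 0, w - 1).round().astype(int)
    return img[:, src_cols]                  # nearest-neighbor gather along columns


# Example: widen a 4:3 frame (640x480) toward 16:9 (852x480).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
wide = warp_aspect_ratio(frame, 852)
print(wide.shape)                            # (480, 852, 3)
```

With this curve the center of the frame is sampled one source pixel per output pixel, while the sides are stretched by roughly a factor of four, consistent with distortion being concentrated away from the center region.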
20210037195GENERATING CUSTOMIZED, PERSONALIZED REACTIONS TO SOCIAL MEDIA CONTENT - The present disclosure is directed toward systems, computer-readable media, and methods for generating a personalized selfie reaction-element in connection with social media content. For example, the systems and methods described herein generate a personalized selfie reaction-element including a multi-media item and one or more elements and/or enhancements. The systems and methods described herein can then provide the personalized selfie reaction-element for use in connection with various types of social media content including communication threads, ephemeral content, posts, and direct messages.2021-02-04
20210037196Weld Inspection System and Method - A weld inspection system is adapted to inspect a weld of a work product via thermographic technology. The weld inspection system includes a heat source assembly, a thermal imaging camera, and a controller. The heat source assembly is adapted to sequentially direct a plurality of heat pulses upon the work product from varying perspectives and within a pre-determined time period. The thermal imaging camera is configured to thermally image the weld over the pre-determined time period and collect thermal imaging data of heat dissipation from the work product. The thermal imaging data is associated with the weld and the plurality of heat pulses. The controller is configured to control the heat source assembly and the thermal imaging camera. The controller includes a processor configured to receive and transform the thermal imaging data into a binary image for evaluation of the weld.2021-02-04
20210037197MOBILE GAS AND CHEMICAL IMAGING CAMERA - In one embodiment, an infrared (IR) imaging system for determining a concentration of a target species in an object is disclosed. The imaging system can include an optical system including an optical focal plane array (FPA) unit. The optical system can have components defining at least two optical channels thereof, said at least two optical channels being spatially and spectrally different from one another. Each of the at least two optical channels can be positioned to transfer IR radiation incident on the optical system towards the optical FPA. The system can include a processing unit containing a processor that can be configured to acquire multispectral optical data representing said target species from the IR radiation received at the optical FPA. Said optical system and said processing unit can be contained together in a data acquisition and processing module configured to be worn or carried by a person.2021-02-04
20210037198IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE, AND INFORMATION SYSTEM - An image processing method includes: an image pickup step of picking up an RGB image of a target object to be picked up, and picking up a spectroscopic image of the target object in a predetermined wavelength range and thus acquiring spectroscopic information peculiar to the target object in the wavelength range; and a display step of displaying a complemented image complemented by superimposing the spectroscopic information on the RGB image.2021-02-04
20210037199METHOD FOR DESIGNING A FREEFORM SURFACE REFLECTIVE IMAGING SYSTEM - The present invention relates to a method for designing a freeform surface reflective imaging system, comprising: selecting an initial system, wherein an FOV of the initial system is X2021-02-04
20210037200SOLID-STATE IMAGING ELEMENT AND ELECTRONIC DEVICE - A solid-state imaging element of the present disclosure includes a pixel. The pixel includes a charge accumulation unit that accumulates a charge photoelectrically converted by a photoelectric conversion unit, a reset transistor that selectively applies a reset voltage to the charge accumulation unit, an amplification transistor having a gate electrode electrically connected to the charge accumulation unit, and a selection transistor connected in series to the amplification transistor. Additionally, the solid-state imaging element includes a first wiring electrically connecting the charge accumulation unit and the gate electrode of the amplification transistor, a second wiring electrically connected to a common connection node of the amplification transistor and the selection transistor and formed along the first wiring, and a third wiring electrically connecting the amplification transistor and the selection transistor.2021-02-04
20210037201SCALABLE READOUT INTEGRATED CIRCUIT ARCHITECTURE WITH PER-PIXEL AUTOMATIC PROGRAMMABLE GAIN FOR HIGH DYNAMIC RANGE IMAGING - An imager device includes a pixel sensor configured to receive and convert incident radiation into a pixel signal and a readout circuit configured to receive the pixel signal from the pixel sensor, generate a received signal strength indicator (RSSI) value based on the pixel signal, and generate a digital signal based on the RSSI value and the pixel signal.2021-02-04
20210037202PIXEL COLLECTION CIRCUIT AND OPTICAL FLOW SENSOR - The present disclosure provides a pixel collection circuit and an optical flow sensor including the pixel collection circuit. The pixel collection circuit at least includes a light intensity detector, a first state storage module, a second state storage module, a light intensity signal collection and storage module, and a time information storage module.2021-02-04
20210037203PHOTOELECTRIC CONVERSION APPARATUS, PHOTOELECTRIC CONVERSION SYSTEM, AND MOVING BODY - A photoelectric conversion apparatus includes one or more first avalanche diodes, a first processing circuit configured to be connected to the first avalanche diode(s), one or more second avalanche diodes, and a second processing circuit configured to be connected to the second avalanche diode(s), wherein the first avalanche diode(s) is/are configured to be connected to the second processing circuit by a selection circuit.2021-02-04
20210037204IMAGE SENSORS, IMAGE PROCESSING SYSTEMS, AND OPERATING METHODS THEREOF - An image sensor includes a pixel array including a plurality of pixels arranged in a matrix, the plurality of pixels configured to generate separate, respective pixel signals, a row driver configured to selectively read pixel signals generated by pixels of a plurality of rows of the pixel array, a binning circuitry configured to selectively sum or pass through the read pixel signals to generate binned pixel signals, a column array including a plurality of analog-to-digital converters (ADCs) configured to perform an analog-to-digital conversion on the binned pixel signals, and a mode selecting circuitry configured to control the row driver and the binning circuitry to change an operation mode of the image sensor based on a mode signal received at the image sensor.2021-02-04
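The "sum or pass through" behaviour of the binning circuitry can be mimicked in a few lines of NumPy; the 2x2 block size and the mode names below are illustrative only.

```python
# Toy readout: either pass pixel signals through unchanged or sum 2x2 blocks.
import numpy as np


def read_out(pixel_array: np.ndarray, mode: str) -> np.ndarray:
    """Return full-resolution samples ('pass') or 2x2 summed samples ('bin')."""
    if mode == "pass":
        return pixel_array
    if mode == "bin":
        h, w = pixel_array.shape
        assert h % 2 == 0 and w % 2 == 0, "sketch assumes even dimensions"
        blocks = pixel_array.reshape(h // 2, 2, w // 2, 2)
        return blocks.sum(axis=(1, 3))       # one binned value per 2x2 block
    raise ValueError(f"unknown mode: {mode}")


raw = np.arange(16, dtype=np.uint16).reshape(4, 4)
print(read_out(raw, "pass").shape)           # (4, 4)
print(read_out(raw, "bin"))                  # 2x2 array of 2x2 sums
```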
20210037205SOLID-STATE IMAGE SENSING DEVICE - A solid-state image sensing device is of a global-shutter type and includes a vertical driving unit and an analog-to-digital (AD) converter. The vertical driving unit performs a shutter operation during a time period from when the AD converter starts an AD conversion to when the AD converter ends the AD conversion. The AD converter does not output a digital signal during a time period from when the vertical driving unit starts the shutter operation to when the vertical driving unit ends the shutter operation.2021-02-04
20210037206METHOD AND APPARATUS FOR REDUCING INTERFERENCE FROM CONTENT PLAY IN MULTI-DEVICE ENVIRONMENT - Systems and methods for reducing interference between content play and video recording among multiple devices located proximate to each other. Microphones of devices recording video are muted, or content play is interrupted, according to the actions of a majority of nearby devices. For example, if most devices are recording video, content play may be interrupted to prevent the video from unintentionally recording unwanted sounds from play of the content. Conversely, if most devices are not recording video, only those devices which are may have their microphones muted. Actions to reduce interference may be taken according to the current behavior of a majority of proximate devices.2021-02-04
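The majority rule in this abstract reduces to a simple decision function; the sketch below assumes a minimal device representation and is not tied to any specific product.

```python
# Majority-based interference reduction among nearby devices.
from dataclasses import dataclass
from typing import List


@dataclass
class Device:
    name: str
    recording_video: bool
    microphone_muted: bool = False


def reduce_interference(devices: List[Device], content_playing: bool) -> bool:
    """Return whether content should keep playing; mute microphones as needed."""
    recording = [d for d in devices if d.recording_video]
    majority_recording = len(recording) > len(devices) / 2

    if majority_recording:
        return False                  # interrupt content play for everyone
    for d in recording:               # minority case: mute only the recorders
        d.microphone_muted = True
    return content_playing


devices = [Device("phone-a", True), Device("phone-b", False), Device("tv", False)]
print(reduce_interference(devices, content_playing=True))   # True; only phone-a is muted
print(devices[0].microphone_muted)                           # True
```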
20210037207Adaptive Resolution In Software Applications Based On Dynamic Eye Tracking - Methods and systems are described for determining an image resource allocation for displaying content within a display area. An image or data capture device associated with a display device may capture an image of a space associated with the user or capture data related to other objects in the space. The viewing distance between the user and the display area (e.g., the display device) may be monitored and processed to determine and/or adjust the image resource allocation for content displayed within the display area. User movement, including eye movement, may also be monitored and processed to determine and/or adjust the image resource allocation for content displayed within the display area.2021-02-04
20210037208COMPOSITING VIDEO SIGNALS AND STRIPPING COMPOSITE VIDEO SIGNAL - A method of compositing video signals includes: obtaining at least two video signals to be composited; determining a multiplier point mode corresponding to each of the at least two video signals; performing a byte size adjustment on the at least two video signals respectively according to the multiplier point mode corresponding to each of the at least two video signals; performing a composite modulation on bytes corresponding to the at least two video signals after the byte size adjustment, and outputting composite-modulated data through a target signal interface of an analog-to-digital conversion chip.2021-02-04
20210037209VIDEO CALL MEDIATING APPARATUS, METHOD AND COMPUTER READABLE RECORDING MEDIUM THEREOF - A video call mediating apparatus, method and computer readable recording medium thereof are disclosed. The video call mediating method according to an embodiment of the present disclosure includes establishing a video call session between a first terminal and a second terminal; obtaining user information about the first terminal and user information about the second terminal; displaying a video obtained by the first terminal, a video obtained by the second terminal, and a profile of a user of the second terminal, on first to third areas of a display area of the first terminal, respectively; receiving a selection input regarding the profile of the user of the second terminal from a user of the first terminal; and in response to receiving the selection input, displaying contents uploaded by the user of the second terminal on a fourth area on the display area of the first terminal.2021-02-04
20210037210Pairing Devices in Conference Using Ultrasonic Beacon and Subsequent Control Thereof - A videoconferencing system has a videoconferencing unit that uses portable devices as peripherals for the system. The portable devices obtain near-end audio and send the audio to the videoconferencing unit via a wireless connection. In turn, the videoconferencing unit sends the near-end audio from the loudest portable device along with near-end video to the far-end. The portable devices can control the videoconferencing unit and can initially establish the videoconference by connecting with the far-end and then transferring operations to the videoconferencing unit. To deal with acoustic coupling between the unit's loudspeaker and the portable device's microphone, the unit uses an echo canceller that is compensated for differences in the clocks used in the A/D and D/A converters of the loudspeaker and microphone.2021-02-04
20210037211VIDEO CONFERENCE COMMUNICATION - A method of video conference communication between N terminals of N users is described. The method can be implemented by one of the N terminals, and can include receiving, from a processing device, N audiovisual streams respectively transmitted by the N terminals, N items of voice activity information of the N users, respectively associated with N corresponding user identifiers, each of the N items of information assuming a first or a second value respectively representing the presence or absence of voice activity. The method can also include determining, for at least one of the N users, whether or not the information is at the same value from a certain time. The method can also include requesting, from the device upon determining that the information is at the first value from this time, a stream associated with the user as the main stream to be displayed, receiving, and displaying the main stream.2021-02-04
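The selection step can be sketched as follows: a user whose voice-activity information has held the "active" value for at least a chosen duration becomes the main stream to request. The data layout (a map from user identifier to the timestamp at which activity last became continuous) is an assumption for illustration.

```python
# Pick the main stream from per-user voice-activity timers.
import time
from typing import Dict, Optional


def pick_main_stream(activity_since: Dict[str, Optional[float]],
                     min_duration: float,
                     now: Optional[float] = None) -> Optional[str]:
    """activity_since maps user id -> timestamp when voice activity last became
    continuously active (None means currently inactive)."""
    now = time.time() if now is None else now
    for user_id, since in activity_since.items():
        if since is not None and now - since >= min_duration:
            return user_id            # request this user's stream as the main stream
    return None


states = {"alice": 100.0, "bob": None, "carol": 108.0}
print(pick_main_stream(states, min_duration=5.0, now=110.0))  # "alice"
```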
20210037212SPECIAL EFFECTS COMMUNICATION TECHNIQUES - A special effects communication system may include a camera system at a first location that generates a first dataset based on one or more images or impressions of a first user at the first location. The special effects communication system may also include a control system communicatively coupled to the camera system and a special effects system. The control system may receive a second dataset corresponding to a second user present at a second location or to one or more users at one or more locations while the first user is present at the first location. The control system may also provide one or more control signals to cause the special effects system at the first location to generate special effects to present a visual and/or auditory likeness of the second user using special effects material and/or mediums.2021-02-04
20210037213IMAGE-BASED DETERMINATION APPARATUS AND IMAGE-BASED DETERMINATION SYSTEM - An image-based determination apparatus includes circuitry configured to receive at least one of first image data, output from image capture apparatuses, and second image data, output from another device, the first image data and the second image data to be subjected to an image-based determination operation; play and display the at least one of the received first image data and the second image data, on a display; designate a detection area, to be subjected to the image-based determination operation, in a first display area being displayed on the display, the first display area displaying the at least one of the first image data and the second image data; and perform the image-based determination operation on an image at the detection area in a second display area being displayed on the display, the second display area displaying the at least one of the first image data and the second image data.2021-02-04
20210037214OUTPUT CONTROL APPARATUS, DISPLAY TERMINAL, REMOTE CONTROL SYSTEM, CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM - An output control apparatus is communicable with a communication apparatus through a communication network. The communication apparatus includes a first image capturing device configured to capture a subject at a remote site to acquire a first image and a second image capturing device configured to capture a part of the subject to acquire a second image. The output control apparatus includes circuitry to: receive the first image transmitted from the communication apparatus; output the received first image so as to be displayed on a display; receive, from the communication apparatus, the second image acquired by capturing a part of the subject corresponding to a display position of the first image displayed on the display; output the received second image so as to be displayed on the display; and control the display to display the first image and the second image that are output.2021-02-04
20210037215DOOR IMAGE DISPLAY SYSTEM AND MONITOR - A plurality of cameras are provided to take images of a plurality of doors of a train. A monitor displays a plurality of door camera images taken by the plurality of cameras and buttons configured to receive an operation to instruct the opening or closing of the door shown in the door camera images. The controller outputs a communication signal including an instruction to open or close the door that corresponds to the operated button on the monitor in response to the operation of the button.2021-02-04
20210037216Systems and Methods for Authenticating and Presenting Video Evidence - A method for automatically authenticating unknown video data based on known video data stored at a client server is provided, wherein, unknown and known video data each are made up of segments and include metadata, a hash message digest, and a serial code. The method involves selecting a first segment of the unknown video and locating the serial code within the first segment of the unknown video data. The serial code is used to locate a corresponding first segment in the known video data. The first segment may include a known hash message digest. A new hash message digest for the first segment of the unknown video data is generated and compared with the known hash message digest. If they match, the segment of unknown video data is authentic.2021-02-04
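The per-segment check lends itself to a compact sketch using Python's hashlib; SHA-256 and the segment layout are illustrative assumptions, since the abstract does not name a digest algorithm.

```python
# Authenticate an unknown segment against known video data keyed by serial code.
import hashlib
from dataclasses import dataclass


@dataclass
class Segment:
    serial_code: str
    payload: bytes
    digest: str = ""                  # known hash message digest (hex)


def authenticate(unknown: Segment, known_by_serial: dict) -> bool:
    """Locate the known segment by serial code and compare freshly computed digests."""
    known = known_by_serial.get(unknown.serial_code)
    if known is None:
        return False
    fresh = hashlib.sha256(unknown.payload).hexdigest()
    return fresh == known.digest


trusted = Segment("S-001", b"\x00\x01video-bytes",
                  hashlib.sha256(b"\x00\x01video-bytes").hexdigest())
print(authenticate(Segment("S-001", b"\x00\x01video-bytes"), {"S-001": trusted}))  # True
print(authenticate(Segment("S-001", b"tampered"), {"S-001": trusted}))             # False
```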
20210037217VEHICLE DISPLAY DEVICE, VEHICLE CONTROL SYSTEM, VEHICLE CONTROL METHOD, AND STORAGE MEDIUM - A vehicle display device is provided, the vehicle display device including a communication section configured to receive an image of a remote driver remotely operating a vehicle, an external vehicle display section provided to an outer shell of the vehicle and configured to display an image externally to the vehicle, and a display control section configured to cause the image of the remote driver received by the communication section to be displayed on the external vehicle display section.2021-02-04
20210037218IMAGE CAPTURING DEVICE, IMAGE PROCESSING DEVICE AND DISPLAY DEVICE FOR SETTING DIFFERENT EXPOSURE CONDITIONS - An image capturing device includes: an image capturing element having a first image capturing region that captures an image of a photographic subject and outputs a first signal, and a second image capturing region that captures an image of the photographic subject and outputs a second signal; a setting unit that sets an image capture condition for the first image capturing region to an image capture condition that is different from an image capture condition for the second image capturing region; a correction unit that performs correction upon the second signal, for employment in interpolation of the first signal; and a generation unit that generates an image of the photographic subject that has been captured by the first image capturing region by employing a signal generated by interpolating the first signal according to the second signal as corrected by the correction unit.2021-02-04
20210037219METASURFACES AND SYSTEMS FOR FULL-COLOR IMAGING AND METHODS OF IMAGING - Metasurfaces and systems including metasurfaces for imaging and methods of imaging are described. Such metasurfaces may be formed on a substrate from a plurality of posts. The metasurfaces are configured to be optically active over a wavelength range and in certain embodiments are configured to form lenses. In particular, the metasurfaces described herein may be configured to focus light passed through the metasurface in an extended depth of focus. Accordingly, the disclosed metasurfaces are generally suitable for generating color without or with minimal chromatic aberrations, for example, in conjunction with computational reconstruction.2021-02-04
20210037220PROJECTION DEVICE AND CONTROL METHOD THEREOF - A projection device adapted to be placed on a vehicle and a control method of the projection device are provided. The projection device includes a storage device storing image data of at least one projection image, a projection module, a distance detection device measuring a projection distance from the distance detection device to a projection plane, and a control device electrically connected to the storage device, the projection module, and the distance detection device. The control device enables the projection module to project the projection image and controls brightness or size of the projection image according to the corresponding projection distance. The projection device is activated by opening a door of the vehicle, and power of the projection device comes from the vehicle. The projection image can be adjusted according to the projection distance to maintain stable brightness and the size of the projection image.2021-02-04
20210037221CONTROL METHOD FOR IMAGE PROJECTION SYSTEM AND IMAGE PROJECTION SYSTEM - A control method for an image projection system including a plurality of projectors includes a determining step of determining, based on a stack number indicating the number of the projectors, projected images of which are superimposed on one another, colors of pattern images projected by the respective projectors equivalent to the stack number and a projecting step of causing the respective projectors equivalent to the stack number to project the pattern images having the colors determined in the determining step. In the determining step, when the pattern images equivalent to the stack number are superimposed, the colors of the pattern images projected by the respective projectors equivalent to the stack number are determined such that the superimposed pattern images equivalent to the stack number have a specific color.2021-02-04
20210037222ILLUMINATION SYSTEM AND PROJECTION APPARATUS - An illumination system includes an excitation light source module, a light splitting and combining module, a filter module, and a wavelength conversion module. The excitation light source module provides an excitation beam. The light splitting and combining module is disposed on a transmission path of the excitation beam. The excitation beam includes a first and a second excitation beam which are different from each other in polarization state or wavelength range. The filter module is disposed on the transmission path of the excitation beam. The filter module includes a light passing area configured to allow the excitation beam to pass there-through and a light filtering area. The wavelength conversion module is disposed on the transmission path of the excitation beam reflected by the light filtering area and configured to convert the reflected excitation beam into a conversion beam. A projection apparatus including the above illumination system is also provided.2021-02-04
20210037223COLOR NIGHT VISION CAMERAS, SYSTEMS, AND METHODS THEREOF - Disclosed are improved methods, systems and devices for color night vision that reduce the number of intensifiers and/or decrease noise. In some embodiments, color night vision is provided in a system in which multiple spectral bands are maintained, filtered separately, and then recombined in a unique three-lens-filtering setup. An illustrative four-camera night vision system is unique in that its first three cameras separately filter different bands using a subtractive Cyan, Magenta and Yellow (CMY) color filtering-process, while its fourth camera is used to sense either additional IR illuminators or a luminance channel to increase brightness. In some embodiments, the color night vision is implemented to distinguish details of an image in low light. The unique application of the three-lens subtractive CMY filtering allows for better photon scavenging and preservation of important color information.2021-02-04
20210037224AUGMENTED REALITY GUIDANCE FOR SPINAL PROCEDURES USING STEREOSCOPIC OPTICAL SEE-THROUGH HEAD MOUNTED DISPLAYS WITH REAL TIME VISUALIZATION OF TRACKED INSTRUMENTS - Embodiments disclose a real-time surgery method and apparatus for displaying a stereoscopic augmented view of a patient from a static or dynamic viewpoint of the surgeon, which employs real-time three-dimensional surface reconstruction for preoperative and intraoperative image registration. Stereoscopic cameras provide real-time images of the scene including the patient. A stereoscopic video display is used by the surgeon, who sees a graphical representation of the preoperative or intraoperative images blended with the video images in a stereoscopic manner through a see-through display.2021-02-04
20210037225METHOD OF MODIFYING AN IMAGE ON A COMPUTATIONAL DEVICE - A method of modifying an image on a computational device and a system for implementing the method is disclosed. The method comprising the steps of: providing image data representative of at least a portion of a three-dimensional scene, the scene being visible to a human observer from a viewing point when fixating on a visual fixation point within the scene; displaying an image by rendering the image data on a display device; capturing user input by user input capturing means; modifying the image by: computationally isolating a fixation region within the image, the fixation region being defined by a subset of image data representing an image object within the image, wherein the image object is associated with the visual fixation point; spatially reconstructing the subset of image data to computationally expand the fixation region; spatially reconstructing remaining image data relative to the subset of image data to computationally compress a peripheral region of the image relative to the fixation region in a progressive fashion as a function of a distance from the fixation region, wherein the modification of the image is modulated by the user input such that a modified image is produced which synthetically emulates how the human observer would perceive the three-dimensional scene.2021-02-04
20210037226IMAGE DATA ENCODING/DECODING METHOD AND APPARATUS - Disclosed are methods and apparatuses for decoding an image. A method includes receiving a bitstream obtained by encoding the image; dividing a first coding block into a plurality of second coding blocks; generating a prediction block of a second coding block based on syntax information obtained from the bitstream; and reconstructing the second coding block based on the prediction block and a residual block of the second coding block, the residual block being obtained by performing a dequantization and an inverse-transform on quantized transform coefficients from the bitstream. The first coding block has a recursive division structure. The first coding block is divided based on at least one of a quad tree division, a binary tree division or a triple tree division.2021-02-04
20210037227Trajectory-Based Viewport Prediction for 360-Degree Videos - In implementations of trajectory-based viewport prediction for 360-degree videos, a video system obtains trajectories of angles of users who have previously viewed a 360-degree video. The angles are used to determine viewports of the 360-degree video, and may include trajectories for a yaw angle, a pitch angle, and a roll angle of a user recorded as the user views the 360-degree video. The video system clusters the trajectories of angles into trajectory clusters, and for each trajectory cluster determines a trend trajectory. When a new user views the 360-degree video, the video system compares trajectories of angles of the new user to the trend trajectories, and selects trend trajectories for a yaw angle, a pitch angle, and a roll angle for the user. Using the selected trend trajectories, the video system predicts viewports of the 360-degree video for the user for future times.2021-02-04
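The prediction step can be sketched as matching a new viewer's partial yaw trajectory against precomputed trend trajectories (for example, cluster means) and reading off the matched trend's remaining samples. Clustering is assumed to have been done offline; the plain Euclidean prefix distance used here is an illustrative choice.

```python
# Match observed yaw angles to the nearest trend trajectory and predict the rest.
import numpy as np


def predict_yaw(trends: np.ndarray, observed: np.ndarray) -> np.ndarray:
    """trends: (K, T) trend trajectories; observed: (t,) angles seen so far (t < T).
    Returns the predicted future angles (T - t,) from the best-matching trend."""
    t = observed.shape[0]
    errors = np.linalg.norm(trends[:, :t] - observed, axis=1)   # distance to each trend prefix
    best = int(np.argmin(errors))
    return trends[best, t:]


trends = np.array([[0, 5, 10, 15, 20],        # slow pan to the right
                   [0, -20, -40, -60, -80]])  # fast pan to the left
observed = np.array([1, 4, 11])
print(predict_yaw(trends, observed))          # future samples of the first trend: [15 20]
```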
20210037228AUGMENTED OPTICAL IMAGING SYSTEM FOR USE IN MEDICAL PROCEDURES - An optical imaging system for imaging a target during a medical procedure is disclosed. The optical imaging system includes: a first camera for capturing a first image of the target; a second wide-field camera for capturing a second image of the target; at least one path folding mirror disposed in an optical path between the target and a lens of the second camera; and a processing unit for receiving the first image and the second image, the processor being configured to: apply an image transform to one of the first image and the second wide-field image; and combine the transformed image with the other one of the images to produce a stereoscopic image of the target.2021-02-04
20210037229HYBRID IMAGING SYSTEM FOR UNDERWATER ROBOTIC APPLICATIONS - Hybrid imaging system for 3D imaging of an underwater target, comprising: two optical image sensors for stereoscopic imaging; a switchable structured-light emitter having different wavelengths; a switchable spatially non-coherent light source; a data processor configured for alternating between operating modes which comprise: a first mode wherein the structured-light emitter is activated, the light source is deactivated and the image sensors are activated to capture reflected light from the structured-light emitter, and a second mode wherein the structured-light emitter is deactivated, the light source is activated and the image sensors are activated to capture reflected light from the light source; wherein the data processor is configured for delaying image sensor capture, on the activation of the structured-light emitter and on the activation of the light source, for a predetermined time such that light reflected from any point closer than the target is not captured.2021-02-04
20210037230MULTIVIEW INTERACTIVE DIGITAL MEDIA REPRESENTATION INVENTORY VERIFICATION - Inventory at a remote location may be verified by transmitting a security key associated with uniquely identifying object identification information from a verification server to a client machine at the remote location. The security key may then be used to generate a multi-view interactive digital media representation (MVIDMR) of the object that includes a plurality of images captured from different viewpoints. The MVIDMR may then be transmitted to the verification server.2021-02-04
20210037231IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - An image processing apparatus generates stereo pair full spherical images corresponding to a plurality of positions of a display device, on the basis of two or more viewpoint images for each position determined in accordance with the plurality of positions of the display device, from a plurality of viewpoint images captured at different locations.2021-02-04
20210037232DRIVING METHOD FOR CALCULATING INTERPUPILLARY DISTANCE AND RELATED HEAD-MOUNTED DEVICE - A driving method suitable for a head mounted device (HMD) is provided. The driving method includes the following operations: moving a first image capture unit and a second image capture unit of the HMD to respectively capture two left-eye images and two right-eye images; calculating a first eye relief according to at least one left-eye feature in the two left-eye images; calculating a second eye relief according to at least one right-eye feature in the two right-eye images; calculating an interpupillary distance (IPD) according to the first eye relief and the second eye relief; and adjusting, according to the IPD, a distance between a first lens and a second lens of the HMD.2021-02-04
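Under a simple pinhole model, the two captures per eye allow the pupil's depth (eye relief) and lateral position to be triangulated, and the IPD falls out as the horizontal gap between the two pupils. The numbers and the linear model below are assumptions for illustration, not the HMD's actual calibration.

```python
# Triangulate each pupil from two known camera positions, then take the IPD.
def triangulate_pupil(f_px: float, cam_x1: float, cam_x2: float,
                      px1: float, px2: float) -> tuple:
    """Return (X, Z): lateral position and eye relief of one pupil.
    px1/px2 are pixel offsets from the principal point at camera positions cam_x1/cam_x2."""
    z = f_px * (cam_x2 - cam_x1) / (px1 - px2)     # depth (eye relief) from disparity
    x = cam_x1 + px1 * z / f_px                    # lateral pupil position
    return x, z


f_px = 800.0                                       # focal length in pixels (assumed)
# Left eye seen from camera positions -40 mm and -20 mm; right eye from +20 mm and +40 mm.
xl, zl = triangulate_pupil(f_px, -40.0, -20.0, px1=160.0, px2=-160.0)
xr, zr = triangulate_pupil(f_px, 20.0, 40.0, px1=160.0, px2=-160.0)
print(round(zl, 1), round(zr, 1))                  # eye reliefs: 50.0 50.0
print(round(xr - xl, 1))                           # interpupillary distance: 60.0
```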
20210037233PET TREAT - A composition and process for making pet food treats is described herein. Auxiliary ingredients are combined to form a meat mixture. The meat mixture is formed into portions. The portions of meat mixture are positioned on a chew stick that comprises rawhide. The pet treat gives the appearance of a grilled shish kabob, where the meat portions are meant for initial taste, while the chew stick will provide the dog with a longer-lasting chewing portion.2021-02-04
20210037234VIDEO CONFERENCING SYSTEM AND TRANSMITTER THEREOF - A video conferencing system is disclosed. The video conferencing system includes a transmitter and a receiver. When the transmitter is coupled to a port of an information processing device, the transmitter communicates with the information processing device to determine whether the port of the information processing device has a video output function. When the above determination result is no, the transmitter emits a wireless signal. The receiver is coupled to a display device and used to receive the wireless signal and provide a default warning message to the display device.2021-02-04
20210037235APPARATUS FOR INSPECTING ALIGNMENT OF OPTICAL DEVICE - The present disclosure relates to an apparatus for inspecting alignment of an optical device, including: an optical device comprising a housing having an optical path of the shape of a cylinder, a light source disposed inside the housing to provide an illumination light for inspection to a subject surface, and a collimating lens for converting the illumination light irradiated from the light source into a parallel light beam; and an alignment evaluator that is provided with a condensing lens for converting the parallel light beam into a focused light beam, and that is disposed in front of the optical device.2021-02-04
20210037236CAMERA IMAGE AND SENSOR FUSION FOR MIRROR REPLACEMENT SYSTEM - A measurement system for a vehicle including a first camera defining a field of view having a corresponding optical axis, and a motion detection sensor mechanically fixed to the first camera such that the motion detection sensor is configured to detect motion of the optical axis.2021-02-04
20210037237Video Processing Methods and Apparatuses for Processing Video Data Coded in Large Size Coding Units - Video data processing methods and apparatuses in a video encoding or decoding system for processing a slice partitioned into Coding Units (CUs). The video encoding or decoding system receives input data of a current CU and checks a size of the current CU. The residual of the current CU is set to zero or the current CU is coded in Skip mode if the size of the current CU is greater than a maximum size threshold value. A CU with zero residual implies coefficient level values in one or more corresponding transform blocks are all zero, complicated transform processing for large size blocks is therefore avoided.2021-02-04
20210037238IMAGE DECODING METHOD AND APPARATUS BASED ON INTER PREDICTION IN IMAGE CODING SYSTEM - An image decoding method that is performed by a decoding apparatus according to the present disclosure comprises the steps of: forming a merge candidate list based on neighbouring blocks of a current block; deriving costs with respect to merge candidates included in the merge candidate list; deriving a revised merge candidate list based on the costs with respect to the merge candidates; deriving movement information of the current block based on the revised merge candidate list; and performing prediction on the current block based on the movement information, wherein the neighbouring blocks include spatial neighbouring blocks and temporal neighbouring blocks.2021-02-04
20210037239METHOD AND APPARATUS FOR SETTING REFERENCE PICTURE INDEX OF TEMPORAL MERGING CANDIDATE - The present invention relates to a method and apparatus for setting a reference picture index of a temporal merging candidate. An inter-picture prediction method using a temporal merging candidate can include the steps of: determining a reference picture index for a current block; and inducing a temporal merging candidate block of the current block and calculating a temporal merging candidate from the temporal merging candidate block, wherein the reference picture index of the temporal merging candidate can be calculated regardless of whether a block other than the current block is decoded. Accordingly, a video processing speed can be increased and video processing complexity can be reduced.2021-02-04
20210037240EXTENDED MERGE PREDICTION - A video coding or decoding method includes using history-based motion vector prediction (HMVP) for conversion between multiple video blocks including a current block of video and a bitstream representation of the multiple video blocks such that for a uni-predicted block that for which a single reference picture is used for motion compensation, refraining from updating a look-up table for HMVP candidates for the uni-predicted block. The video coding or decoding method further includes performing the conversion using look-up tables for the multiple video blocks.2021-02-04
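The update rule in this abstract, namely skipping table updates for uni-predicted blocks, can be sketched with a small history table; the table size, pruning rule, and data layout are illustrative rather than normative.

```python
# History-based MV prediction table that is not updated by uni-predicted blocks.
from collections import deque
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass(frozen=True)
class MotionInfo:
    mv_l0: Optional[Tuple[int, int]]   # motion vector for reference list 0 (or None)
    mv_l1: Optional[Tuple[int, int]]   # motion vector for reference list 1 (or None)

    @property
    def is_uni_predicted(self) -> bool:
        return (self.mv_l0 is None) != (self.mv_l1 is None)


class HmvpTable:
    def __init__(self, max_size: int = 5):
        self.candidates = deque(maxlen=max_size)

    def update(self, motion: MotionInfo) -> None:
        if motion.is_uni_predicted:
            return                      # refrain from updating for uni-prediction
        if motion in self.candidates:   # simple pruning of duplicates
            self.candidates.remove(motion)
        self.candidates.append(motion)


table = HmvpTable()
table.update(MotionInfo((3, -1), (0, 2)))   # bi-predicted: stored
table.update(MotionInfo((5, 5), None))      # uni-predicted: table left unchanged
print(len(table.candidates))                # 1
```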
20210037241INTRA PREDICTION MODE MAPPING METHOD AND DEVICE USING THE METHOD - The present invention relates to an intra prediction mode mapping method and a device using the method. The intra prediction mode mapping method includes: decoding flag information providing information regarding whether an intra prediction mode of a plurality of candidate intra prediction modes for the current block is the same as the intra prediction mode for the current block, and decoding a syntax component including information regarding the intra prediction mode for the current block in order to induce the intra prediction mode for the current block if the intra prediction mode from among the plurality of candidate intra prediction modes for the current block is not the same as the intra prediction mode for the current block. Thus, it is possible to increase the efficiency with which images are decoded.2021-02-04
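The two decoding paths resemble the familiar most-probable-mode (MPM) scheme, sketched below: when the flag signals a candidate mode, an index selects it directly; otherwise a remaining-mode code is mapped to a full mode number by stepping over the candidates. The candidate list size and mode numbering are illustrative, not those of a specific standard.

```python
# Map decoded syntax to a full intra prediction mode number.
from typing import List


def decode_intra_mode(mpm_flag: int, value: int, candidates: List[int]) -> int:
    """value is either an index into candidates (mpm_flag=1) or a remaining-mode code."""
    if mpm_flag:
        return candidates[value]
    mode = value
    for cand in sorted(candidates):     # each candidate <= mode shifts the mapping up by one
        if mode >= cand:
            mode += 1
    return mode


cands = [0, 1, 26]                       # e.g. planar, DC, vertical as candidate modes
print(decode_intra_mode(1, 2, cands))    # 26: taken directly from the candidate list
print(decode_intra_mode(0, 1, cands))    # 3: remaining code 1 steps past candidates 0 and 1
```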
20210037242Luma and Chroma Block Partitioning - A video coding mechanism is disclosed. The mechanism includes obtaining a coding tree unit including luma samples and chroma samples. The mechanism partitions the luma samples and the chroma samples according to a common coding tree when a size of a first coding tree node exceeds a threshold. The mechanism also partitions the luma samples with a luma coding sub-tree when a size of a second coding tree node is equal to or less than the threshold. The mechanism also partitions the chroma samples with a chroma coding sub-tree when a size of a third coding tree node is equal to or less than the threshold. The luma coding sub-tree contains a different set of split modes than the chroma coding sub-tree.2021-02-04
20210037243ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD - An encoder includes circuitry and memory. Using the memory, the circuitry: performs a transform process of (i) applying a first transform to a prediction residual signal indicating a difference between a current block to be encoded and a prediction image of the current block and (ii) further applying a second transform to a transform result of the first transform; and in the second transform, selects one transform basis (i) from a first group of candidates when a size of the current block is a first block size and (ii) from a second group of candidates when the size of the current block is a second block size different from the first block size, the first group including one or more candidates for a transform basis, the second group being different from the first group.2021-02-04
20210037244APPARATUSES AND METHODS FOR IMPROVED ENCODING OF IMAGES FOR BETTER HANDLING BY DISPLAYS - To allow better quality rendering of video on any display, a method is proposed of encoding, in addition to video data (VID), additional data (DD) comprising at least one change time instant (TMA_2021-02-04
20210037245METHOD AND APPARATUS FOR IMAGE ENCODING AND IMAGE DECODING USING TEMPORAL MOTION INFORMATION - Disclosed herein are a decoding method and apparatus and an encoding method and apparatus that perform inter-prediction using a motion vector predictor. For a candidate block in a col picture, a scaled motion vector is generated based on a motion vector of the candidate block. When the scaled motion vector indicates a target block, a motion vector predictor of the target block is generated based on the motion vector of the candidate block. The motion vector predictor is used to derive the motion vector of the target block in a specific inter-prediction mode such as a merge mode and an AMVP mode.2021-02-04
20210037246Adaptive Transfer Function for Video Encoding and Decoding - A video encoding and decoding system that implements an adaptive transfer function method internally within the codec for signal representation. A focus dynamic range representing an effective dynamic range of the human visual system may be dynamically determined for each scene, sequence, frame, or region of input video. The video data may be cropped and quantized into the bit depth of the codec according to a transfer function for encoding within the codec. The transfer function may be the same as the transfer function of the input video data or may be a transfer function internal to the codec. The encoded video data may be decoded and expanded into the dynamic range of display(s). The adaptive transfer function method enables the codec to use fewer bits for the internal representation of the signal while still representing the entire dynamic range of the signal in output.2021-02-04
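The crop-and-quantize idea can be shown with a linear toy transfer function: values are clipped to the scene's focus dynamic range, mapped into the codec's internal bit depth, and expanded again on output. The linear curve and the 10-bit depth are illustrative choices.

```python
# Crop luminance to a focus range and quantize into a fixed internal bit depth.
import numpy as np


def encode_focus_range(lum: np.ndarray, lo: float, hi: float, bits: int = 10) -> np.ndarray:
    """Crop luminance to [lo, hi] and quantize into an unsigned integer code of `bits` bits."""
    levels = (1 << bits) - 1
    normalized = (np.clip(lum, lo, hi) - lo) / (hi - lo)
    return np.round(normalized * levels).astype(np.uint16)


def decode_focus_range(codes: np.ndarray, lo: float, hi: float, bits: int = 10) -> np.ndarray:
    """Expand integer codes back to the display's dynamic range."""
    levels = (1 << bits) - 1
    return codes.astype(np.float64) / levels * (hi - lo) + lo


scene = np.array([0.02, 0.5, 120.0, 900.0, 4000.0])   # nits; 4000 lies outside the focus range
codes = encode_focus_range(scene, lo=0.0, hi=1000.0)
print(codes)                                           # 10-bit codes, out-of-range value clipped
print(decode_focus_range(codes, lo=0.0, hi=1000.0))    # approximate reconstruction up to 1000 nits
```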
20210037247ENCODING AND DECODING WITH REFINEMENT OF THE RECONSTRUCTED PICTURE - An encoding method for a picture part encoded in a bitstream and reconstructed can involve refinement data encoded in the bitstream and determined based on at least one comparison of a version of a rate distortion cost computed using a data coding cost, a distortion between an original version of the picture part and a reconstructed picture part, and at least one other version involving one or more combinations of with or without a refinement by the refinement data, or a refinement either in or out of the decoding loop, or with or without a mapping or an inverse mapping.2021-02-04
20210037248IMAGE ENCODING DEVICE, IMAGE ENCODING METHOD, IMAGE DECODING DEVICE, AND IMAGE DECODING METHOD - There is provided an image encoding device, an image encoding method, an image decoding device, and an image decoding method by which the processing amount of an inter-prediction process using sub-blocks can be reduced. In the encoding device, identification information for identifying a sub-block size which represents the size of a sub-block to be used in an inter-prediction process is set, switching to a sub-block having the size is performed, and the inter-prediction process is performed to encode an image, whereby a bitstream including the identification information is generated. In the image decoding device, the identification information is parsed from the bitstream, switching to a sub-block having a size according to the identification information is performed, and an inter-prediction process is performed to decode the bitstream, whereby an image is generated. The present technique is applicable to an image encoding device for encoding images or to an image decoding device for decoding images, etc., for example.2021-02-04
20210037249METHOD AND DEVICE FOR SHARING A CANDIDATE LIST - The present invention relates to a method and device for sharing a candidate list. A method of generating a merging candidate list for a predictive block may include: producing, on the basis of a coding block including a predictive block on which a parallel merging process is performed, at least one of a spatial merging candidate and a temporal merging candidate of the predictive block; and generating a single merging candidate list for the coding block on the basis of the produced merging candidate. Thus, it is possible to increase processing speeds for coding and decoding by performing inter-picture prediction in parallel on a plurality of predictive blocks.2021-02-04
20210037250DYNAMIC VIDEO INSERTION BASED ON FEEDBACK INFORMATION - Techniques are provided for adaptively controlling an encoding device to allow dynamic insertion of intra-coded video content based on feedback information. For example, at least a portion of a video slice of a video frame in a video bitstream can be determined to be missing or corrupted. Feedback information indicating that at least the portion of the video slice is missing or corrupted can be sent to an encoding device. An updated video bitstream can be received from the encoding device in response to the feedback information. The updated video bitstream can include at least one intra-coded video slice having a size that is larger than the missing or corrupted video slice. The size of the at least one intra-coded video slice can be determined to cover the missing or corrupted slice and propagated error in the video frame caused by the missing or corrupted slice.2021-02-04
20210037251VIDEO ENCODING METHOD AND APPARATUS, VIDEO DECODING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM - This application relates to a video encoding method and apparatus, a video decoding method and apparatus, a computer device, and a storage medium. The computer device obtains a to-be-encoded current encoded block in a current video frame and then obtains a first reference block corresponding to the current encoded block in a reference video frame. Next, the computer device obtains, within the reference video frame, one or more second reference blocks matching the first reference block, the one or more second reference blocks and the first reference block being similar reference blocks. Finally, the computer device encodes the current encoded block according to the first reference block and the one or more second reference blocks, to obtain encoded data.2021-02-04
20210037252Rate Control in Video Coding - A method of rate control in coding of a video sequence to generate a compressed bit stream is provided that includes computing a sequence base quantization step size for a sequence of pictures in the video sequence, computing a picture base quantization step size for a picture in the sequence of pictures based on the sequence base quantization step size, a type of the picture, and a level of the picture in a rate control hierarchy, and coding the picture using the picture base quantization step size to generate a portion of the compressed bit stream.2021-02-04
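One plausible reading of the picture-level step is that the sequence base quantization step size is scaled by factors depending on the picture type and its level in the rate control hierarchy; the factors and names below are invented for illustration only.

    # Hypothetical picture-level rate control: scale the sequence base
    # quantization step size by assumed per-type and per-level factors.
    TYPE_FACTOR = {"I": 0.7, "P": 1.0, "B": 1.3}        # assumed values
    LEVEL_FACTOR = {0: 1.0, 1: 1.15, 2: 1.3, 3: 1.45}   # assumed values

    def picture_base_qstep(seq_base_qstep, pic_type, hierarchy_level):
        return seq_base_qstep * TYPE_FACTOR[pic_type] * LEVEL_FACTOR[hierarchy_level]

    print(picture_base_qstep(16.0, "B", 2))  # roughly 27.04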
20210037253VIDEO ENCODING METHOD AND APPARATUS, VIDEO DECODING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM - This application relates to a video encoding method performed at a computer device. The computer device obtains a current encoding block to be encoded in a current video frame, the current encoding block having a width and a height different from the width. The computer device determines, within the current video frame, target reference pixels corresponding to the current encoding block, a target quantity corresponding to the target reference pixels being the e-th power of one of the width and the height under a target numeral system, e being a positive integer, the target numeral system being a numeral system used for calculating a predicted value of the current encoding block. After obtaining a predicted value corresponding to the current encoding block according to the target reference pixels, the computer device performs video encoding on the current encoding block according to the predicted value, to obtain encoded data.2021-02-04
20210037254COMPOUND PREDICTION FOR VIDEO CODING - Generating a compound predictor block of a current block of video can include generating, for the current block, predictor blocks comprising a first predictor block including first predictor pixels and a second predictor block including second predictor pixels; using at least a subset of the first predictor pixels to determine a first weight for a first predictor pixel of the first predictor pixels; obtaining a second weight for a second predictor pixel of the second predictor pixels, where the second predictor pixel is co-located with the first predictor pixel; and generating the compound predictor block by combining the first predictor block and the second predictor block, where the compound predictor block includes a weighted pixel that is determined using a weighted sum of the first predictor pixel and the second predictor pixel using the first weight and the second weight, respectively.2021-02-04
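The per-pixel combination itself is a weighted sum; the generic sketch below combines two co-located predictor pixel rows with normalization by the weight sum (the weight derivation from neighboring predictor pixels, which the abstract describes, is not modeled here).

    # Generic weighted combination of two predictor blocks (here, 1-D rows).
    def combine(pred1, pred2, w1, w2):
        assert len(pred1) == len(pred2) and w1 + w2 > 0
        return [round((w1 * a + w2 * b) / (w1 + w2)) for a, b in zip(pred1, pred2)]

    # Example: weight the first predictor three times as heavily as the second
    print(combine([100, 104, 108], [120, 120, 120], w1=3, w2=1))  # [105, 108, 111]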
20210037255SYSTEMS AND METHODS FOR PERFORMING MOTION VECTOR PREDICTION FOR VIDEO CODING USING MOTION VECTOR PREDICTOR ORIGINS - Systems and methods for performing motion vector prediction for video coding are disclosed. A motion vector predictor is determined based at least in part on motion information associated with a selected motion vector predictor origin and offset values corresponding to a selected sampling point. The sampling point is specified by a direction and a distance on a sampling map for the motion vector predictor origin.2021-02-04
20210037256BI-PREDICTION WITH WEIGHTS IN VIDEO CODING AND DECODING - A video coding or decoding method includes using history-based motion vector prediction (HMVP) for conversion between multiple video blocks, including a current block of video, and a bitstream representation of the multiple video blocks, such that, for a uni-predicted block for which a single reference picture is used for motion compensation, updating of a look-up table of HMVP candidates is skipped. The video coding or decoding method further includes performing the conversion using look-up tables for the multiple video blocks.2021-02-04
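A minimal sketch of the table-update behavior described above, where uni-predicted blocks leave the HMVP look-up table untouched; the table size, pruning rule, and candidate representation are assumptions.

    from collections import deque

    # Hypothetical HMVP table: a bounded FIFO of recent motion candidates.
    HMVP_TABLE = deque(maxlen=5)  # table size is an assumption

    def maybe_update_hmvp(candidate, num_reference_pictures):
        if num_reference_pictures == 1:
            return  # uni-predicted: refrain from updating the look-up table
        if candidate in HMVP_TABLE:
            HMVP_TABLE.remove(candidate)  # simple duplicate pruning
        HMVP_TABLE.append(candidate)

    maybe_update_hmvp(((2, 3), (0, -1)), num_reference_pictures=2)  # stored
    maybe_update_hmvp(((5, 5), None), num_reference_pictures=1)     # ignored
    print(list(HMVP_TABLE))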
20210037257METHOD AND APPARATUS FOR INTER PREDICTION IN VIDEO CODING SYSTEM - The present disclosure relates to a method by which a decoding apparatus performs video coding, comprising the steps of: generating a motion information candidate list for a current block; selecting one candidate from among those included in the motion information candidate list; deriving control point motion vectors (CPMVs) of the current block based on the selected candidate; deriving sub-block-unit or sample-unit motion vectors of the current block based on the CPMVs; deriving a predicted block based on the motion vectors; and reconstructing a current picture based on the predicted block, wherein the motion information candidate list includes an inherited affine candidate, the inherited affine candidate is derived based on candidate blocks coded by affine prediction, from among spatial neighboring blocks of the current block, and the inherited affine candidate is generated up to a pre-defined maximum number.2021-02-04
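Deriving sub-block motion vectors from control point motion vectors (CPMVs) typically follows an affine motion model; the sketch below shows the common 6-parameter form, with block dimensions, sample positions, and names chosen for illustration rather than taken from the application.

    # Generic 6-parameter affine model: motion vector at position (x, y) inside
    # a w x h block, given CPMVs at the top-left, top-right and bottom-left corners.
    def affine_mv(cpmv0, cpmv1, cpmv2, x, y, w, h):
        dh = ((cpmv1[0] - cpmv0[0]) / w, (cpmv1[1] - cpmv0[1]) / w)  # horizontal gradient
        dv = ((cpmv2[0] - cpmv0[0]) / h, (cpmv2[1] - cpmv0[1]) / h)  # vertical gradient
        return (cpmv0[0] + dh[0] * x + dv[0] * y,
                cpmv0[1] + dh[1] * x + dv[1] * y)

    # Example: MV sampled inside a 16x16 block at position (10, 6)
    print(affine_mv((0, 0), (4, 0), (0, 4), x=10, y=6, w=16, h=16))  # (2.5, 1.5)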
20210037258GENERALIZED BI-PREDICTION FOR VIDEO CODING WITH REDUCED CODING COMPLEXITY - Exemplary embodiments include systems and methods for coding a video comprising a plurality of pictures including a current picture, a first reference picture, and a second reference picture, where each picture includes a plurality of blocks. In one method, for at least a current block in the current picture, a number of available bi-prediction weights is determined based at least in part on a temporal layer and/or a quantization parameter of the current picture. From among available bi-prediction weights a pair of weights are identified. Using the identified weights, the current block is then predicted as a weighted sum of a first reference block in the first reference picture and a second reference block in the second reference picture. Encoding techniques are also described for efficient searching and selection of a pair of bi-prediction weights to use for prediction of a block.2021-02-04
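The weighted bi-prediction itself is a per-pixel weighted sum of the two reference blocks; the integer-arithmetic sketch below uses a 1/8 normalization and a reduced weight list as assumptions, and omits the temporal-layer and quantization-parameter logic that would restrict the available weights.

    # Generic weighted bi-prediction: P = ((8 - w) * P0 + w * P1 + 4) >> 3.
    WEIGHTS = [3, 4, 5]  # assumed reduced set of available bi-prediction weights

    def bi_predict(ref0, ref1, w):
        assert w in WEIGHTS
        return [((8 - w) * a + w * b + 4) >> 3 for a, b in zip(ref0, ref1)]

    print(bi_predict([100, 102, 104], [110, 108, 106], w=5))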
20210037259VIDEO SIGNAL PROCESSING METHOD AND APPARATUS - Disclosed are a video signal processing method and apparatus for encoding or decoding a video signal. In more detail, the video signal processing method includes receiving intra prediction mode information for a current block, wherein the intra prediction mode information indicates one of a plurality of intra prediction modes configuring an intra prediction mode set; and decoding the current block based on the received intra prediction mode information, wherein the intra prediction mode set comprises a plurality of angle modes, and the plurality of angle modes comprises a basic angle mode and an extended angle mode, wherein the extended angle mode is signaled based on the basic angle mode.2021-02-04
20210037260VIDEO CODING USING MAPPED TRANSFORMS AND SCANNING MODES - A video encoder may transform residual data by using a transform selected from a group of transforms. The transform is applied to the residual data to create a two-dimensional array of transform coefficients. A scanning mode is selected to scan the transform coefficients in the two-dimensional array into a one-dimensional array of transform coefficients. The combination of transform and scanning mode may be selected from a subset of combinations that is based on an intra-prediction mode. The scanning mode may also be selected based on the transform used to create the two-dimensional array. The transforms and/or scanning modes used may be signaled to a video decoder.2021-02-04
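A scanning mode is a fixed mapping from the two-dimensional coefficient array to a one-dimensional order; the classic zig-zag scan below illustrates one such mapping (it is not claimed to be the application's specific mapping).

    # Classic zig-zag scan: flatten an N x N coefficient block by anti-diagonals,
    # alternating the traversal direction on each diagonal.
    def zigzag_scan(block):
        n = len(block)
        order = sorted(((r, c) for r in range(n) for c in range(n)),
                       key=lambda rc: (rc[0] + rc[1],
                                       rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
        return [block[r][c] for r, c in order]

    block = [[9, 5, 1, 0],
             [6, 3, 0, 0],
             [2, 0, 0, 0],
             [0, 0, 0, 0]]
    print(zigzag_scan(block))  # [9, 5, 6, 2, 3, 1, 0, ...]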
20210037261Methods and Apparatus for Simplification of Coding Residual Blocks - Methods and apparatus for encoding and decoding prediction residues in a video coding system are disclosed. At the decoder side, a Rice parameter for a target transform coefficient is determined based on a local sum of absolute levels of neighboring transform coefficients of the target transform coefficient. A dependent quantization state is determined and a zero-position variable is determined based on the dependent quantization state and the Rice parameter. One or more coded bits associated with a first syntax element for the target transform coefficient in a transform block are parsed and decoded using one or more codes including a Golomb-Rice code with the Rice parameter, where the first syntax element corresponds to a modified absolute level value of the target transform coefficient. An absolute level value of the target transform coefficient is derived according to the zero-position variable and the first syntax element.2021-02-04
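A Golomb-Rice code with Rice parameter k writes a value as a unary-coded quotient followed by k remainder bits; the generic encoder/decoder pair below illustrates the code itself, independently of the syntax elements and state machine described in the abstract.

    # Generic Golomb-Rice code: unary quotient (value >> k), then k remainder bits.
    def golomb_rice_encode(value, k):
        q, r = value >> k, value & ((1 << k) - 1)
        return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

    def golomb_rice_decode(bits, k):
        q = bits.index("0")                     # length of the unary prefix
        r = int(bits[q + 1:q + 1 + k] or "0", 2)
        return (q << k) + r

    code = golomb_rice_encode(11, k=2)          # quotient 2, remainder 3 -> "11011"
    print(code, golomb_rice_decode(code, k=2))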
20210037262SCALABLE TRANSFORM PROCESSING UNIT FOR HETEROGENEOUS DATA - There is provided a processing unit device comprising: at least one control unit for controlling operations of the processing unit device; and a transform logic unit comprising at least one transform block associated with a transform to be executed by the at least one control unit, the transform comprising an effect to be applied to an output site contained in an output universe, each one of the at least one transform block comprising an effect block and an outsite block, the effect block comprising at least one first storing unit for storing thereon information relative to the effect and the outsite block comprising at least one second storing unit for storing thereon information relative to the output site.2021-02-04
20210037263BLOCK-BASED PARALLEL DEBLOCKING FILTER IN VIDEO CODING - Deblocking filtering is provided in which an 8×8 filtering block covering eight sample vertical and horizontal boundary segments is divided into filtering sub-blocks that can be independently processed. To process the vertical boundary segment, the filtering block is divided into top and bottom 8×4 filtering sub-blocks, each covering a respective top and bottom half of the vertical boundary segment. To process the horizontal boundary segment, the filtering block is divided into left and right 4×8 filtering sub-blocks, each covering a respective left and right half of the horizontal boundary segment. The computation of the deviation d for a boundary segment in a filtering sub-block is performed using only samples from rows or columns in the filtering sub-block. Consequently, the filter on/off decisions and the weak/strong filtering decisions of the deblocking filtering are performed using samples contained within individual filtering blocks, thus allowing full parallel processing of the filtering blocks.2021-02-04
20210037264ENTROPY DECODING METHOD, AND DECODING APPARATUS USING SAME - The present invention relates to an entropy encoding method, an entropy decoding method, and an apparatus using the same. The entropy decoding method includes: a step of decoding a bin of a syntax element; and a step of acquiring information on the syntax element based on the decoded bin. In the step of decoding the bin, context-based decoding or bypass decoding is performed for each bin of the syntax element.2021-02-04
20210037265INHERITANCE IN SAMPLE ARRAY MULTITREE SUBDIVISION - A better compromise between encoding complexity and achievable rate-distortion ratio, and/or a better rate-distortion ratio, is achieved by using multitree subdivision not only to subdivide a continuous area, namely the sample array, into leaf regions, but also by using the intermediate regions to share coding parameters among the corresponding collocated leaf blocks. By this measure, coding procedures performed locally in tiles (the leaf regions) may be associated with coding parameters individually without having to explicitly transmit the whole set of coding parameters for each leaf region separately. Rather, similarities may be effectively exploited by using the multitree subdivision.2021-02-04
20210037266METHOD FOR PROCESSING IMAGE AND DEVICE THEREFOR - Disclosed are a method for decoding a video signal and an apparatus therefor. Specifically, a method for decoding an image may include: partitioning a current coding tree block into a plurality of coding blocks so that coding blocks partitioned from the current coding tree block are included in a current picture when the current coding tree block is out of a boundary of the current picture; parsing a first syntax element indicating whether a current coding block is partitioned into a plurality of subblocks when the current coding block satisfies a predetermined condition; and determining a split mode of the current coding block based on the syntax element.2021-02-04
20210037267System for Streaming - The present invention relates to systems, methods, software applications and devices for the broadcasting of short-lived personal internet radio stations. Embodiments include software applications for live broadcasting over an internet network comprising a host mode and a recipient mode, wherein the software application may be executed on a connected mobile device to broadcast to many other recipients in a substantially synchronous manner. Embodiments provide for the live broadcasting of music and video content from a personal device to many listener or viewer devices, where content may be sourced from any number of locally stored or cloud-based content repositories.2021-02-04
20210037268AUDIO-BASED AUTOMATIC VIDEO FEED SELECTION FOR A DIGITAL VIDEO PRODUCTION SYSTEM - A video production device is deployed to produce a video production stream of an event occurring within an environment that includes a plurality of different video capture devices capturing respective video input streams of the event. The video production device is programmed and operated to: receive video input streams from the video capture devices; determine, for each video capture device, an average root mean square (RMS) audio energy value over a period of time, to obtain device-specific average RMS values for the video capture devices; compare each device-specific average RMS value against a respective device-specific energy threshold value; identify which input stream is associated with an active speaker, based on the comparing step; select one of the identified streams as a current video output stream; and provide the selected stream as the current video output stream.2021-02-04
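The core selection step, averaging RMS audio energy per feed and comparing it against a device-specific threshold, can be sketched as follows; the window of samples, the threshold values, and the tie-breaking rule are assumptions.

    import math

    # Hypothetical feed selection: pick the feed whose average RMS audio energy
    # exceeds its own device-specific threshold by the largest margin.
    def rms(samples):
        return math.sqrt(sum(s * s for s in samples) / len(samples))

    def select_feed(feeds, thresholds):
        margins = {name: rms(samples) - thresholds[name]
                   for name, samples in feeds.items()
                   if rms(samples) > thresholds[name]}
        return max(margins, key=margins.get) if margins else None

    feeds = {"cam_a": [0.02, -0.01, 0.03], "cam_b": [0.4, -0.35, 0.5]}
    thresholds = {"cam_a": 0.05, "cam_b": 0.1}
    print(select_feed(feeds, thresholds))  # cam_b holds the active speaker here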
20210037269TRANSMISSION DEVICE, TRANSMISSION METHOD, RECEPTION DEVICE, AND RECEPTION METHOD - To facilitate component selection at the reception side, a transmission stream is generated in which a first transmission packet including a predetermined component and a second transmission packet including signaling information related to the predetermined component are multiplexed in a time division manner.2021-02-04
20210037270SYSTEM AND TECHNIQUES FOR DIGITAL DATA LINEAGE VERIFICATION - Disclosed are examples for providing functions to receive a media file to be stored in a media repository. In the examples, a location in the media repository may be assigned to the media file. A media file address in a blockchain platform may be assigned to the media file. Metadata including the assigned location in the media repository and the assigned media file address in the blockchain platform may be added to the media file. A media file hash value may be generated by applying a hash function to the media file including the metadata. The media file hash value may be included in a message and uploaded to the assigned media file address in the blockchain platform as a transaction in the blockchain. An indication that the media file is uploaded to the media repository may be delivered to a subscriber device from which the media file was received.2021-02-04
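The hashing step can be illustrated with a standard cryptographic hash over the media bytes plus the added metadata; the metadata fields, their encoding, and how the digest is packaged into a blockchain transaction are all assumptions here, not details from the application.

    import hashlib
    import json

    # Hypothetical sketch: append metadata (repository location and blockchain
    # address) to the media bytes, then hash the combination with SHA-256.
    def media_file_hash(media_bytes, repo_location, chain_address):
        metadata = json.dumps({"repo_location": repo_location,
                               "chain_address": chain_address},
                              sort_keys=True).encode("utf-8")
        return hashlib.sha256(media_bytes + metadata).hexdigest()

    digest = media_file_hash(b"\x00\x01fake-media-bytes",
                             repo_location="/repo/2021/clip42.mp4",
                             chain_address="0xABC123")
    print(digest)  # this value would be carried in the blockchain transaction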
20210037271CROWD RATING MEDIA CONTENT BASED ON MICRO-EXPRESSIONS OF VIEWERS - In some examples, a computing device initiates playback of media content on a display device. The computing device receives one or more images from a camera having a field of view that includes one or more viewers of the display device. The computing device may analyze at least one of the images and determine, based on the analysis, a micro-expression being expressed by at least one of the viewers. The computing device may determine a sentiment based on the micro-expression. A timestamp derived from the one or more images may be associated with the sentiment and sent to a server to create a sentiment map of the media content. If the sentiment matches a pre-specified sentiment then the computing device may skip playback of a remainder of a current portion of the media content that is being displayed and initiate playback of a next portion of the media content.2021-02-04
20210037272CONTENT DISTRIBUTION AND MOBILE WIRELESS MESH NETWORKS - According to one configuration, a wireless base station has access to a cache (repository) that stores a stream of content including multiple segments of content. The cache stores (caches) a first segment of content from the received stream of content. The first segment of content is cached in the repository for a window of time during which the first segment of content is temporarily available from the wireless base station. In response to receiving a respective request from each mobile communication device in a group of multiple mobile communication devices requesting the first segment of content during the window of time, a wireless base station communicates the first segment of content from the cache to each mobile communication device in the group.2021-02-04
20210037273COLLABORATIVE VIDEO STITCHING - Methods and systems of creating a collaborative video for a recipient are disclosed. A video stitching server may receive a collaborative video request from a collaborative video application running on an initiator's mobile device. The video stitching server may store the collaborative video request in a folder that has an identifier. The identifier may be provided to potential contributors. The video stitching server may receive contributor video information, including contributor videos and the identifier, from contributors' collaborative video applications and may store the contributor videos in the folder based on the identifier. The video stitching server may then stitch together at least two videos stored in the folder to create a collaborative video and may deliver the collaborative video to the collaborative video application running on a recipient's mobile device.2021-02-04
20210037274SYSTEMS AND METHODS FOR CORRECTING ERRORS IN CAPTION TEXT - Systems and methods are described to address shortcomings in conventional systems by correcting an erroneous term in on-screen caption text for a media asset. In some aspects, the systems and methods identify the erroneous term in a text segment of the on-screen caption text, and identify one or more video frames of the media asset corresponding to the text segment. The systems and methods further identify a contextual term related to the erroneous term from the one or more video frames. By accessing a knowledge graph, the systems and methods identify a candidate correction based on the contextual term and a portion of the text segment. Lastly, the systems and methods replace the erroneous term with the candidate correction.2021-02-04
20210037275SYSTEM AND METHOD FOR TRANSFERRING LARGE VIDEO FILES WITH REDUCED TURNAROUND TIME - Embodiments of the present disclosure provide methods, systems and computer program products for transfer of video signals at a destination with reduced turnaround time. According to one embodiment, a method includes performing transfer of a series of video chunks of a video signal, each video chunk of the series of video chunks comprising a sequence of video frames, wherein for each video chunk, one or more processors perform a processing cycle comprising: receiving the sequence of video frames from a source; processing the received sequence of video frames to generate a processed sequence of video frames, wherein receiving of a consecutive video chunk of the series of video chunks comprising a consecutive sequence of video frames is initiated simultaneously while initiating said processing of the received sequence of video frames; and transmitting the processed sequence of video frames for consumption at a destination.2021-02-04
20210037276METHODS AND SYSTEMS FOR LOW LATENCY STREAMING - Methods and systems are described for low latency streaming. A computing device may receive a chunk of content. The computing device may determine whether a transmission duration of the chunk of the content satisfies a threshold. The computing device may determine a bitrate based on the transmission duration satisfying a threshold.2021-02-04
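One way to read the bitrate decision above: if a chunk's transmission duration exceeds a threshold tied to its playback duration, step the bitrate down, otherwise step it up. The ladder, the safety factor, and the one-rung step rule below are illustrative assumptions only.

    # Hypothetical low-latency rate adaptation driven by chunk transmission time.
    LADDER_KBPS = [500, 1500, 3000, 6000]  # assumed bitrate ladder

    def next_bitrate(current_kbps, transmission_s, playback_s, safety=0.8):
        i = LADDER_KBPS.index(current_kbps)
        if transmission_s > safety * playback_s:               # threshold not met
            return LADDER_KBPS[max(i - 1, 0)]                  # step down
        return LADDER_KBPS[min(i + 1, len(LADDER_KBPS) - 1)]   # step up

    print(next_bitrate(3000, transmission_s=1.1, playback_s=1.0))  # 1500
    print(next_bitrate(1500, transmission_s=0.3, playback_s=1.0))  # 3000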
20210037277METHOD AND DEVICE FOR TRANSMITTING AND RECEIVING BROADCAST SERVICE IN HYBRID BROADCAST SYSTEM ON BASIS OF CONNECTION OF TERRESTRIAL BROADCAST NETWORK AND INTERNET PROTOCOL NETWORK - The present invention relates to a device for receiving a hybrid broadcast service and a method for transmitting the same. The device for receiving a hybrid broadcast service, according to one embodiment of the present invention, comprises: a first reception unit for receiving a first broadcast signal transmitted through a first network; a second reception unit for receiving a second broadcast signal transmitted through a second network, wherein the broadcast signal includes a service information table; a signaling information processing unit for extracting the service information table from the broadcast signal, wherein the service information table includes a component identifier descriptor for signaling information for identifying each of a plurality of components constituting one broadcast service, the component identifier descriptor including identification information; and an audio/video processing unit for acquiring the broadcast service including the plurality of components by using the component identification information, and decoding and reproducing the acquired broadcast service.2021-02-04