10th week of 2022 patent application highlights part 54 |
Patent application number | Title | Published |
20220076392 | METHOD FOR X-RAY DENTAL IMAGE ENHANCEMENT - Various examples are directed to apparatus and methods for enhancing an X-ray medical image. In one example, a method for X-ray dental image enhancement is provided. The method includes receiving an input image and applying noise reduction through Recursive Locally-Adaptive Wiener Filtration (RLA-WF) based on a sliding window calculated following a recursive two-dimensional scheme, and applying contrast enhancement to the filtered image using recursive Sliding Window Contrast Limited Adaptive Histogram Equalization (SW-CLAHE), which utilizes a Truncation Threshold Surface (TTS) to calculate the truncation threshold for the local histogram in each sliding window. The method also includes applying sharpness enhancement to the input image using Fast Recursive Adaptive Unsharp Masking (FRA-UM) with a sliding window and one- or two-dimensional recursion to obtain a sharpened image. The contrast-enhanced image is linearly mixed with the sharpened image using a control coefficient to obtain an output image, and the output image is provided to a display of a user. | 2022-03-10 |
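The final mixing step above is concrete enough to sketch. The following is a minimal illustration, not the patented RLA-WF/SW-CLAHE/FRA-UM pipeline: a plain box-blur unsharp mask stands in for the sharpening stage, and the linear mix with a single control coefficient follows the abstract; the kernel size, amount, and `alpha` values are assumptions.

```python
import numpy as np

def unsharp_mask(image, kernel_size=3, amount=1.0):
    """Simple unsharp masking with a box-blur low-pass (an illustrative
    stand-in for the patent's Fast Recursive Adaptive Unsharp Masking)."""
    pad = kernel_size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    blurred = np.zeros(image.shape, dtype=float)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= kernel_size ** 2
    # Add back the high-frequency residual, scaled by `amount`.
    return image + amount * (image - blurred)

def mix_images(contrast_img, sharp_img, alpha=0.5):
    """Linear mix of a contrast-enhanced image and a sharpened image,
    controlled by a single coefficient as in the abstract."""
    return alpha * contrast_img + (1.0 - alpha) * sharp_img
```

On a flat image the unsharp mask is a no-op, which is a quick sanity check for the residual computation.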
20220076393 | SYSTEMS AND METHODS FOR IMAGE PROCESSING - Systems and methods for image processing are provided in the present disclosure. The systems may generate a preliminary image by filtering image data generated by an image acquisition device. The system may generate an intermediate image by performing, based on a first objective function, a first iterative operation on the preliminary image. The first objective function may include a first term associated with a first difference between the intermediate image and the preliminary image, a second term associated with continuity of the intermediate image and a third term associated with sparsity of the intermediate image. The systems may also generate a target image by performing, based on a second objective function, a second iterative operation on the intermediate image. The second objective function may be associated with a system matrix of the image acquisition device and a second difference between the intermediate image and the target image. | 2022-03-10 |
20220076394 | SYSTEMS AND METHODS FOR IMAGE PROCESSING - The present disclosure relates to systems and methods for image processing. The system may obtain low-frequency component of a first image. For each element of the first image, the system may adjust a luminance of the element in response to determining that the luminance of the element is less than a predetermined luminance threshold. The system may determine a first luminance weight map corresponding to the first image based on the adjusted luminance of each element of the first image. The system may obtain low-frequency component of a second image and determine a second luminance weight map corresponding to the second image based on a luminance of each element of the second image. The system may further determine a fused image based on the low-frequency component of the first image, the first luminance weight map, the low-frequency component of the second image, and the second luminance weight map. | 2022-03-10 |
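The per-pixel weighted fusion described above can be sketched as follows. The dark-element adjustment rule, the threshold, and the boost factor are assumptions: the abstract only says that elements below a luminance threshold have their luminance adjusted before the first weight map is determined.

```python
import numpy as np

def adjust_dark_luminance(lum, threshold=0.3, boost=1.5):
    """Raise the luminance of elements below the threshold (the abstract
    only says dark elements are adjusted; this boost rule is an assumption)."""
    return np.where(lum < threshold, np.minimum(lum * boost, threshold), lum)

def fuse_low_frequency(low1, low2, w1, w2):
    """Per-pixel weighted fusion of the two images' low-frequency components,
    with the weight maps normalized so they sum to one at each pixel."""
    total = w1 + w2 + 1e-12  # avoid division by zero where both weights vanish
    return (low1 * w1 + low2 * w2) / total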
20220076395 | Microscopy System and Method for Generating an HDR Image - A microscopy system for generating an HDR image comprises a microscope for capturing a plurality of raw images that show a scene with different image brightnesses and a computing device configured to generate the HDR image by combining at least two of the raw images. The computing device is also configured to set coefficients for different regions in the raw images depending on objects depicted in those regions, wherein it is defined via the coefficients whether and to what extent pixels of the raw images are incorporated in the HDR image. The computing device is further configured to generate the HDR image by combining the raw images based on the set coefficients. A method generates an HDR image by combining raw images based on coefficients that are set depending on objects depicted in the raw images. | 2022-03-10 |
20220076396 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM - An image processing device includes at least one processor configured to acquire target data as an image processing target and a designated parameter designated by a user to receive a request for image processing from the user, extract, from a storage unit in which subjects of processed image data previously subjected to the image processing and applied parameters applied to the processed image data are accumulated in association with each other, the applied parameter applied to the processed image data having a subject similar to the target data, derive a use frequency of the designated parameter among the extracted applied parameters based on parameter values of the extracted applied parameters, and output the use frequency of the designated parameter. | 2022-03-10 |
20220076397 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus comprises: an image generating unit configured to generate, by using a radiation energy spectrum, a material characteristic image of a material contained in a plurality of radiation images captured by different radiation energies; and a noise reduction processing unit configured to reduce a noise component of the material characteristic image. The image generating unit uses a composite image obtained from the plurality of radiation images, a first material characteristic image in which the noise component has been reduced, and a composite spectrum obtained from spectra of the different radiation energies to generate a second material characteristic image with reduced noise. | 2022-03-10 |
20220076398 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - An information processing apparatus has an acquisitor configured to acquire an entire area image obtained by capturing an entire area of a processing surface of a wafer including at least one defect, a training image selector configured to select, as a training image, a partial image including at least one defect from the entire area image, a model constructor configured to construct a calculation model of generating a label image obtained by extracting and binarizing the defect included in the partial image, and a learner configured to update a parameter of the calculation model based on a difference between the label image generated by inputting the training image to the calculation model and a reference label image obtained by extracting and binarizing the defect of the training image. | 2022-03-10 |
20220076399 | PHOTOGRAPHING GUIDE DEVICE - A photographing guide device includes a detection means and an output means. The detection means compares a candidate image of a structure captured from a capturing candidate position in a capturing candidate direction with a registered image of the structure captured from a given capturing position in a given capturing direction and stored in advance, thereby detecting a positional difference between the capturing candidate position and the given capturing position. The output means outputs information indicating the detected positional difference. | 2022-03-10 |
20220076400 | ENDOSCOPE SYSTEM, METHOD FOR ACTIVATING ENDOSCOPE SYSTEM, AND IMAGE PROCESSING APPARATUS - An actual measurement value calculation unit calculates a first actual measurement value of oxygen saturation of a tissue to be observed. A reference value calculation unit calculates a first reference value of the oxygen saturation of the tissue to be observed. A relative value calculation unit calculates a relative value of the first actual measurement value with reference to the first reference value. An image generation unit generates an image of the relative value of the first actual measurement value on the basis of an evaluation color table to generate an evaluation oxygen-saturation image. A display unit displays the evaluation oxygen-saturation image. | 2022-03-10 |
20220076401 | SYSTEMS AND METHODS FOR QUANTIFYING LIGHT FLARES IN IMAGES - Systems, methods, and computer-readable media are disclosed for identifying light flares in images. An example method may involve receiving an image from an imaging device, the image including data indicative of a flare artifact originating from a region of the image. The example method may also involve determining, based on the image data, a first array of pixels extending radially outwards from the region and a second array of pixels extending radially outwards from the region. The example method may also involve creating, based on the image data, a flare array, the flare array including the first array of pixels and the second array of pixels. The example method may also involve determining, based on the flare array, a peak flare artifact value indicative of a size of the flare artifact. The example method may also involve determining, based on the peak flare artifact value, a flare artifact score for the imaging device. | 2022-03-10 |
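The flare-array construction can be sketched with nearest-neighbour radial sampling. Using only two opposite horizontal rays and a fixed array length are simplifying assumptions; the abstract does not fix the directions or the number of pixel arrays.

```python
import numpy as np

def radial_array(image, center, direction, length):
    """Sample pixels extending radially outwards from the flare region's
    center along a direction vector (nearest-neighbour sampling)."""
    cy, cx = center
    dy, dx = direction
    ys = np.clip(np.round(cy + dy * np.arange(length)).astype(int), 0, image.shape[0] - 1)
    xs = np.clip(np.round(cx + dx * np.arange(length)).astype(int), 0, image.shape[1] - 1)
    return image[ys, xs]

def peak_flare_value(image, center, length=10):
    """Build a flare array from two radial pixel arrays (here: left- and
    right-going rays) and return its peak, loosely following the abstract."""
    right = radial_array(image, center, (0, 1), length)
    left = radial_array(image, center, (0, -1), length)
    flare = np.concatenate([right, left])
    return flare.max()
```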
20220076402 | HIGH BANDWIDTH CAMERA DATA TRANSMISSION - A method includes obtaining multiple images captured by pixel sensors of an image sensor, analyzing, using neural network circuitry integrated in the image sensor, the multiple images for object detection, generating, for each of the multiple images using the neural network circuitry integrated in the image sensor, neural network output data related to results of the analysis of the multiple images for object detection, and transmitting, from the image sensor, the neural network output data for each of the multiple images and image data for a subset of the multiple images instead of image data of each of the multiple images. | 2022-03-10 |
20220076403 | IMAGE-BASED SENSOR FOR MEASURING ROTATIONAL POSITION OF A ROTATING SHAFT - Non-contact sensors include an image sensor configured to capture image data of a portion of a surface of a rotatable shaft and an electronic control unit communicatively coupled to the image sensor. The electronic control unit is configured to receive image data having a plurality of frames from the image sensor and store the image data in a memory component of the electronic control unit, determine a transformation in image space between one or more surface features that appear in a first frame of the image data and the same one or more surface features that appear in a second frame of the image data, determine a rotational position of the rotatable shaft at a time of capture of the second frame of the image data based on the transformation and a quantitatively characterized relationship between image space and object space, and store the rotational position of the rotatable shaft. | 2022-03-10 |
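The abstract does not say how the image-space transformation between frames is computed. One common closed-form choice for recovering a planar rotation from matched surface features is the 2-D Kabsch/Procrustes estimate, sketched here as an illustration only:

```python
import numpy as np

def rotation_angle(points_a, points_b):
    """Estimate the in-plane rotation between two frames from matched
    feature coordinates (rows of (x, y)) using the 2-D Kabsch closed form."""
    a = points_a - points_a.mean(axis=0)  # center both point sets
    b = points_b - points_b.mean(axis=0)
    h = a.T @ b                           # 2x2 cross-covariance
    # Angle of the optimal rotation aligning a to b.
    return np.arctan2(h[0, 1] - h[1, 0], h[0, 0] + h[1, 1])
```

Mapping this image-space angle to shaft position in object space would still require the quantitatively characterized relationship the abstract mentions.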
20220076404 | DEFECT MANAGEMENT APPARATUS, METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM - According to one embodiment, a defect management apparatus includes a processor. The processor acquires first information and second information, the first information including first defect positions relating to defects detected with a first device for an inspection target and corresponding first labels indicating classifications of the defects, the second information including second defect positions relating to defects detected with a second device for the inspection target. The processor determines a first defect position corresponding to a second defect position as a corresponding defect position. The processor diverts the first label corresponding to the corresponding defect position as a second label of the second defect position. | 2022-03-10 |
20220076405 | GENERATING TRAINING DATA FOR ESTIMATING MATERIAL PROPERTY PARAMETER OF FABRIC AND ESTIMATING MATERIAL PROPERTY PARAMETER OF FABRIC - Estimating a material property parameter of fabric involves receiving information including a three-dimensional (3D) contour shape of fabric placed over a 3D geometric object, estimating a material property parameter of the fabric used for representing drape shapes of 3D clothes made by the fabric by applying the information to a trained artificial neural network, and providing the material property parameter of the fabric. | 2022-03-10 |
20220076406 | UNSUPERVISED PATTERN SYNONYM DETECTION USING IMAGE HASHING - Images of semiconductor wafers can be hashed to determine a fixed length hash string for each of the images. Pattern synonyms can be determined from the hash strings. The pattern synonyms can be grouped. A degree of similarity between images in the groups is adjustable via a hamming distance. This can be used for various applications, including determination of latent defects. | 2022-03-10 |
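The abstract leaves the hashing method unspecified; a common fixed-length choice is an average (perceptual) hash, with pattern synonyms grouped under an adjustable hamming-distance tolerance. A minimal sketch under that assumption:

```python
import numpy as np

def average_hash(image, hash_size=8):
    """Fixed-length hash string: downsample by block averaging, then
    threshold each cell against the mean (one common hashing scheme; the
    abstract does not specify which is used)."""
    h, w = image.shape
    by, bx = h // hash_size, w // hash_size
    small = image[:by * hash_size, :bx * hash_size].reshape(
        hash_size, by, hash_size, bx).mean(axis=(1, 3))
    bits = (small > small.mean()).astype(int).ravel()
    return "".join(map(str, bits))

def hamming_distance(h1, h2):
    """Number of differing bit positions between two hash strings."""
    return sum(c1 != c2 for c1, c2 in zip(h1, h2))

def group_synonyms(hashes, max_distance=4):
    """Greedy grouping: a hash joins the first group whose representative is
    within the hamming tolerance, otherwise it starts a new group."""
    groups = []
    for h in hashes:
        for g in groups:
            if hamming_distance(h, g[0]) <= max_distance:
                g.append(h)
                break
        else:
            groups.append([h])
    return groups
```

Tightening `max_distance` splits groups apart; loosening it merges near-duplicates, which is the adjustable degree of similarity the abstract describes.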
20220076407 | SYSTEMS AND METHODS FOR GENERATING A SINGLE OBSERVATION IMAGE TO ANALYZE COATING DEFECTS - Systems and methods for automatic detection of defects in a coating of a component are provided. In one aspect, a coating inspection system is provided. The coating inspection system includes a heating element operable to impart heat to the component as it traverses relative thereto. An imaging device of the system captures images of the component as the heating element traverses relative to the component and applies heat thereto. The images indicate the transient thermal response of the component. The system can generate a single observation image using the captured images. The system can detect and analyze defects using the generated single observation image. | 2022-03-10 |
20220076408 | Systems and Methods for Building a Muscle-to-Skin Transformation in Computer Animation - An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing “smile,” the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface “smile.” | 2022-03-10 |
20220076409 | Systems and Methods for Building a Skin-to-Muscle Transformation in Computer Animation - An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing “smile,” the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface “smile.” | 2022-03-10 |
20220076410 | COMPUTER SUPPORTED REVIEW OF TUMORS IN HISTOLOGY IMAGES AND POST OPERATIVE TUMOR MARGIN ASSESSMENT - A computer apparatus and method for identifying and visualizing tumors in a histological image and measuring a tumor margin are provided. A CNN is used to classify pixels in the image according to whether they are determined to relate to non-tumorous tissue, or one or more classes for tumorous tissue. Segmentation is carried out based on the CNN results to generate a mask that marks areas occupied by individual tumors. Summary statistics for each tumor are computed and supplied to a filter which edits the segmentation mask by filtering out tumors deemed to be insignificant. Optionally, the tumors that pass the filter may be ranked according to the summary statistics, for example in order of clinical relevance or by a sensible order of review for a pathologist. A visualization application can then display the histological image having regard to the segmentation mask, summary statistics and/or ranking. Tumor masses extracted by resection are painted with an ink to highlight their surface regions. The CNN is trained to distinguish ink and no-ink tissue as well as tumor and no-tumor tissue. The CNN is applied to the histological image to generate an output image whose pixels are assigned to the tissue classes. Tumor margin status of the tissue section is determined by the presence or absence of tumor-and-ink classified pixels. Tumor margin involvement and tumor margin distance are determined by computing additional parameters based on classification-specified inter-pixel distance parameters. | 2022-03-10 |
20220076411 | NEURAL NETWORK BASED IDENTIFICATION OF AREAS OF INTEREST IN DIGITAL PATHOLOGY IMAGES - A CNN is applied to a histological image to identify areas of interest. The CNN classifies pixels according to relevance classes including one or more classes indicating levels of interest and at least one class indicating lack of interest. The CNN is trained on a training data set including data which has recorded how pathologists have interacted with visualizations of histological images. In the trained CNN, the interest-based pixel classification is used to generate a segmentation mask that defines areas of interest. The mask can be used to indicate where in an image clinically relevant features may be located. Further, it can be used to guide variable data compression of the histological image. Moreover, it can be used to control loading of image data in either a client-server model or within a memory cache policy. Furthermore, a histological image of a tissue sample of a tissue type that has been treated with a test compound is image processed in order to detect areas where toxic reactions to the test compound may have occurred. An autoencoder is trained with a training data set comprising histological images of tissue samples which are of the given tissue type, but which have not been treated with the test compound. The trained autoencoder is applied to detect tissue areas by their deviation from the normal variation seen in that tissue type as learnt by the training process, and so build up a toxicity map of the image. The toxicity map can then be used to direct a toxicological pathologist to examine the areas identified by the autoencoder as lying outside the normal range of heterogeneity for the tissue type. This makes the pathologist's review quicker and more reliable. The toxicity map can also be overlaid with the segmentation mask indicating areas of interest. When an area of interest overlaps with an area identified as lying outside the normal range of heterogeneity for the tissue type, an increased confidence score is applied to the overlapping area. | 2022-03-10 |
20220076412 | IMAGE PROCESSING METHOD, IMAGE DISPLAY METHOD, IMAGE PROCESSING DEVICE, IMAGE DISPLAY DEVICE, IMAGE PROCESSING PROGRAM, AND IMAGE DISPLAY PROGRAM - A non-perfusion area is detected from a fundus image. A fundus image is acquired, a first non-perfusion area in a first region of the fundus is extracted from the fundus image, and a second non-perfusion area in a second region of the fundus is extracted from the fundus image. | 2022-03-10 |
20220076413 | MRI Post-Processing Systems and Methods - In some embodiments, spinal disc degeneracy is diagnosed according to a decay variance map generated by determining a variance in T2 decay over time within each pixel or pixel subset of an MRI image. A region of interest may be defined as including nucleus pulposus (NP) and annulus fibrosus (AF) areas, and excluding cartilaginous endplate (EP) areas of a disc. A decay variance for a pixel or groups of pixels is calculated by determining ratios between consecutive intensity values of the T2 signal, determining differences between consecutive ratios, and summing the absolute values of the determined differences. Decay variance mapping may be used to diagnose degeneracy in other tissues, such as in joints. | 2022-03-10 |
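The decay-variance formula above is stated concretely enough to implement directly: ratios of consecutive T2 intensities, differences of consecutive ratios, then the sum of absolute values. A pure exponential decay has a constant ratio and therefore scores zero:

```python
import numpy as np

def decay_variance(intensities):
    """Decay variance for one pixel's T2 signal, following the abstract:
    ratios between consecutive intensity values, differences between
    consecutive ratios, and the sum of the absolute differences."""
    s = np.asarray(intensities, dtype=float)
    ratios = s[1:] / s[:-1]   # consecutive-sample decay ratios
    diffs = np.diff(ratios)   # deviation from a constant decay rate
    return np.abs(diffs).sum()
```

Applying this per pixel (or per pixel group) over the region of interest would yield the decay variance map the abstract describes.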
20220076414 | METHOD TO READ CHEST IMAGE - According to an embodiment of the present disclosure, disclosed is a method to read a chest image. The method includes: determining whether or not to identify presence of cardiomegaly for a chest image; detecting a lung region and a heart region respectively which are included in the chest image, by using a neural network model, when it is determined to identify presence of cardiomegaly of the chest image; and calculating a cardiothoracic ratio of the chest image using the detected lung region and the detected heart region. | 2022-03-10 |
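Once the lung and heart regions are detected, the cardiothoracic ratio itself is a simple width ratio. A sketch assuming axis-aligned (x_min, x_max) extents from the detector, with the conventional radiological 0.5 cut-off as an assumed threshold (the abstract does not state one):

```python
def cardiothoracic_ratio(heart_box, lung_box):
    """Cardiothoracic ratio (CTR): maximal horizontal heart diameter over
    maximal thoracic diameter, from (x_min, x_max) extents of the detected
    heart and lung regions."""
    heart_width = heart_box[1] - heart_box[0]
    thoracic_width = lung_box[1] - lung_box[0]
    return heart_width / thoracic_width

def is_cardiomegaly(ratio, threshold=0.5):
    """A CTR above about 0.5 is the conventional cut-off for cardiomegaly;
    the threshold value is an assumption, not taken from the abstract."""
    return ratio > threshold
```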
20220076415 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM - An image processing apparatus comprises a planar distribution obtaining unit configured to, using a plurality of radiation images captured using different radiation energies as input images, obtain an output image representing a planar distribution for a substance contained in the input images, based on a likelihood and a prior probability using one of a pixel value of a pixel of interest or a pixel value of a peripheral pixel as an input. The planar distribution obtaining unit decides the output image such that a probability that an output is a value of the planar distribution when an input is the input image is maximized. | 2022-03-10 |
20220076416 | SYSTEMS AND METHODS FOR PROCESSING ELECTRONIC IMAGES - An image processing method including identifying, using a machine learning system, an area of interest of a target image by analyzing microscopic features extracted from multiple image regions in the target image, the machine learning system being generated by processing a plurality of training images each comprising an image of human tissue and a diagnostic label characterizing at least one of a slide morphology, a diagnostic value, a pathologist review outcome, and an analytic difficulty; determining, using the machine learning system, a probability of a target feature being present in the area of interest of the target image based on an average probability; and determining, using the machine learning system, a prioritization value, of a plurality of prioritization values, of the target image based on the probability of the target feature being present in the target image. | 2022-03-10 |
20220076417 | VISION SCREENING SYSTEMS AND METHODS - A system includes a system housing, an eccentric radiation source, and a radiation sensor. The radiation produced by the eccentric radiation source can be collected by the radiation sensor to generate images of a patient's retinas. The system also includes a vision screening device, connected with the eccentric radiation source and the radiation sensor via the system housing, that can control and synchronize actions of the eccentric radiation source and the radiation sensor. The vision screening device further analyzes the images generated by the radiation sensor via neural network algorithms to determine spherical error slopes, refractive errors, and recommendations for the patient. | 2022-03-10 |
20220076418 | ANALYZING IMAGE DATA TO DETERMINE BOWEL PREPARATION QUALITY - An analyzing platform may obtain a first image of a liquid in a receptacle. The analyzing platform may analyze the first image to determine a first set of visual characteristics concerning the liquid. The analyzing platform may obtain a second image of a rectal sample in the liquid in the receptacle, wherein the rectal sample originated from a bowel of a subject. The analyzing platform may analyze the second image to determine a second set of visual characteristics concerning the rectal sample. The analyzing platform may determine, based on the first set of visual characteristics and the second set of visual characteristics, rectal sample information. The analyzing platform may cause one or more actions to be performed based on the rectal sample information. | 2022-03-10 |
20220076419 | ULTRASONIC DIAGNOSTIC DEVICE AND STORAGE MEDIUM - An ultrasonic diagnostic device of an embodiment includes processing circuitry. The processing circuitry acquires an ultrasonic image of a subject, executes processing for each step for each of one or more examination procedures registered in advance and including one or more steps including processing of acquiring the ultrasonic image, in which set images are associated, and registered, and determines a recommended step recommended to be executed subsequent to a current step or instead of the current step on the basis of a degree of consistency between a set image registered for each step and an ultrasonic image acquired in the current step that is being executed, and executes the recommended step subsequent to the current step or instead of the current step when the recommended step has been determined during execution of the current step. | 2022-03-10 |
20220076420 | RETINOPATHY RECOGNITION SYSTEM - Some embodiments of the disclosure provide a diabetic retinopathy recognition system (S) based on fundus image. According to an embodiment, the system includes an image acquisition apparatus ( | 2022-03-10 |
20220076421 | METHOD FOR IDENTIFYING BONE IMAGES - The objective of the present invention is a procedure for assisting a forensic expert in making decisions in order to identify subjects by comparing images of rigid anatomical structures. This procedure includes a decision-making stage based on a hierarchical analysis model that, in particular embodiments, is complemented by a prior stage of segmentation of osseous images and their superimposition. | 2022-03-10 |
20220076422 | SYSTEMS AND METHODS FOR GENERATING CANCER PREDICTION MAPS FROM MULTIPARAMETRIC MAGNETIC RESONANCE IMAGES USING DEEP - Various example embodiments are described in which an anisotropic encoder-decoder convolutional neural network architecture is employed to process multiparametric magnetic resonance images for the generation of cancer predication maps. In some example embodiments, a simplified anisotropic encoder-decoder convolutional neural network architecture may include an encoder portion that is deeper than a decoder portion. In some example embodiments, simplified network architectures may be combined with test-time-augmentation in order to facilitate training and testing with a minimal number of test subjects. | 2022-03-10 |
20220076423 | METHOD FOR PROPERTY FEATURE SEGMENTATION - The method for determining property feature segmentation includes: receiving a region image for a region; determining parcel data for the region; determining a final segmentation output based on the region image and parcel data using a trained segmentation module; optionally generating training data; and training a segmentation module using the training data. | 2022-03-10 |
20220076424 | VIDEO SEGMENTATION BASED ON DETECTED VIDEO FEATURES USING A GRAPHICAL MODEL - Embodiments are directed to video segmentation based on detected video features. More specifically, a segmentation of a video is computed by determining candidate boundaries from detected feature boundaries from one or more feature tracks; modeling different segmentation options by constructing a graph with nodes that represent candidate boundaries, edges that represent candidate segments, and edge weights that represent cut costs; and computing the video segmentation by solving a shortest path problem to find the path through the edges (segmentation) that minimizes the sum of edge weights along the path (cut costs). A representation of the video segmentation is presented, for example, using interactive tiles or a video timeline that represent(s) the video segments in the segmentation. | 2022-03-10 |
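Once candidate boundaries are ordered in time, the graph formulation above reduces to a minimum-cost chain problem solvable by dynamic programming. A minimal sketch, not the patent's actual graph construction: `cut_cost` is a placeholder for the abstract's edge weights over candidate segments.

```python
def segment_video(boundaries, cut_cost):
    """Pick the segmentation as a shortest path through candidate boundaries.
    `boundaries` is a sorted list of candidate cut times and `cut_cost(a, b)`
    scores the candidate segment between two boundaries. Returns the
    minimum-cost chain from the first boundary to the last, and its cost."""
    n = len(boundaries)
    best = [float("inf")] * n
    prev = [None] * n
    best[0] = 0.0
    for j in range(1, n):
        for i in range(j):
            cost = best[i] + cut_cost(boundaries[i], boundaries[j])
            if cost < best[j]:
                best[j], prev[j] = cost, i
    # Backtrack the optimal sequence of boundaries.
    path, j = [], n - 1
    while j is not None:
        path.append(boundaries[j])
        j = prev[j]
    return path[::-1], best[-1]
```

With a cost that penalizes deviation from a preferred segment length, the solver keeps exactly the boundaries that produce near-preferred segments.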
20220076425 | ROBOTIC SYSTEM WITH AUTOMATED PACKAGE REGISTRATION MECHANISM AND AUTO-DETECTION PIPELINE - The present disclosure relates to detecting and registering unrecognized or unregistered objects. A minimum viable range (MVR) may be derived based on inspecting image data that represents objects. The MVR may be determined to be a certain MVR or an uncertain MVR according to one or more features represented in the image data. The MVR may be used to register corresponding objects according to the certain or uncertain determination. | 2022-03-10 |
20220076426 | METHOD FOR SEGMENTING AN IMAGE - The invention relates to a computer-implemented segmentation method of an image comprising a segmentation by watershed applied to an evaluation of a curvature at pixels of the image. | 2022-03-10 |
20220076427 | APPARATUS AND METHOD FOR IMAGE REGION DETECTION OF OBJECT BASED ON SEED REGIONS AND REGION GROWING - An image processing apparatus includes a memory configured to store first region detection information of a first frame; and a processor. The processor is configured to: identify a first pixel region corresponding to the first region detection information from a second frame, perform region growing processing on the first pixel region based on an adjacent pixel region that is adjacent to the first pixel region, obtain second region detection information of the second frame, based on the region growing processing, and perform image processing on the second frame based on the second region detection information. | 2022-03-10 |
20220076428 | PRODUCT POSITIONING METHOD - A product positioning method includes: collecting a product picture; performing integral image calculation on the product picture; and acquiring, according to the calculated integral image, coordinates of each vertex in the product picture by means of differential calculation. According to the present application, an integral image algorithm is applied to product positioning, such that when the product picture quality is not high, for example, the picture is blurry, and it is thus not convenient to position a product by using a picture edge algorithm or a template matching algorithm, the product picture and a background region can be quickly divided by using the integral image algorithm, thereby positioning the product and not being limited by poor picture quality. | 2022-03-10 |
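The integral image is what gives this approach constant-time region sums even on blurry pictures where edge detection struggles. A minimal sketch of the summed-area table and the four-lookup region query (the vertex-finding differential step is not shown):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: each entry holds the sum of all pixels above and
    to the left of it, inclusive."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=float), axis=0), axis=1)

def region_sum(ii, top, left, bottom, right):
    """Sum over img[top:bottom+1, left:right+1] using four integral-image
    lookups instead of touching every pixel in the region."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total
```

Comparing such region sums between the product and the background is one cheap way to divide the picture, in line with the abstract's claim of robustness to poor picture quality.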
20220076429 | ENHANCED OPTICAL FLOW ESTIMATION USING A VARIED SCAN ORDER - In various examples, optical flow estimate (OFE) quality is improved when employing a hint-based algorithm in multi-level hierarchical motion estimation by using different scan orders at different resolution levels. A scan of an image performed with a scan order may initially leverage OFEs from a previous scan of the image, where the previous scan was performed using a different scan order. The OFEs leveraged from the previous scan are more likely to be of high accuracy until sufficient spatial hints are available to the hint-based algorithm for the scan to reduce the impact of potentially lower quality OFEs resulting from the different scan order of the previous scan. | 2022-03-10 |
20220076430 | HEATMAP AND ATLAS - A dynamic anatomic atlas is disclosed, comprising static atlas data describing atlas segments and dynamic atlas data comprising information on a dynamic property which information is respectively linked to the atlas segments. | 2022-03-10 |
20220076431 | SYSTEM AND METHOD FOR FORECASTING LOCATION OF TARGET IN MONOCULAR FIRST PERSON VIEW - This disclosure relates generally to a system and method for forecasting the location of a target in monocular first person view. Conventional systems for location forecasting utilize complex neural networks and hence are computationally intensive and require high compute power. The disclosed system includes an efficient and light-weight RNN based network model for predicting the motion of targets in first person monocular videos. The network model includes an auto-encoder in the encoding phase and a regularizing layer at the end, which helps achieve better accuracy. The disclosed method relies entirely on detection bounding boxes for both prediction and training of the network model and is still capable of transferring zero-shot to a different dataset. | 2022-03-10 |
20220076432 | NEURAL NETWORK FOR OBJECT DETECTION AND TRACKING - A method for multi-object tracking includes receiving a sequence of images generated at respective times by one or more sensors configured to sense an environment through which objects are moving relative to the one or more sensors, and constructing a message passing graph in which each of a multiplicity of layers corresponds to a respective one in the sequence of images. The method also includes tracking multiple features through the sequence of images, including passing messages in a forward direction and a backward direction through the message passing graph to share information across time. | 2022-03-10 |
20220076433 | Scalable Real-Time Hand Tracking - Example aspects of the present disclosure are directed to computing systems and methods for hand tracking using a machine-learned system for palm detection and key-point localization of hand landmarks. In particular, example aspects of the present disclosure are directed to a multi-model hand tracking system that performs both palm detection and hand landmark detection. Given a sequence of image frames, for example, the hand tracking system can detect one or more palms depicted in each image frame. For each palm detected within an image frame, the machine-learned system can determine a plurality of hand landmark positions of a hand associated with the palm. The system can perform key-point localization to determine precise three-dimensional coordinates for the hand landmark positions. In this manner, the machine-learned system can accurately track a hand depicted in the sequence of images using the precise three-dimensional coordinates for the hand landmark positions. | 2022-03-10 |
20220076434 | SYSTEM AND METHOD FOR QUANTIFYING NOZZLE OCCLUSION IN 3D PRINTING - One embodiment can provide a system for detecting occlusion at an orifice of a three-dimensional (3D) printer nozzle while the printer nozzle is jetting liquid droplets. During operation, the system uses one or more cameras to capture an image of the orifice of the printer nozzle while the 3D printer nozzle is jetting liquid droplets. The system performs an image-analysis operation on the captured image to identify occluded regions within the orifice of the 3D printer nozzle, compute an occlusion fraction based on the determined occluded regions, and generate an output based on the computed occlusion fraction, thereby facilitating effective maintenance of the 3D printer. | 2022-03-10 |
20220076435 | Three-Dimensional Shape Measuring Method And Three-Dimensional Shape Measuring Device - A three-dimensional shape measuring method includes: projecting a first grid pattern based on a first light and a second grid pattern based on a second light onto a target object in such a way that the first grid pattern and the second grid pattern intersect each other, the first light and the second light being lights of two colors included in three primary colors of light; picking up, by a three-color camera, an image of the first grid pattern and the second grid pattern projected on the target object, and acquiring a first picked-up image based on the first light and a second picked-up image based on the second light; and performing a phase analysis of a grid image with respect to at least one of the first picked-up image and the second picked-up image and calculating height information of the target object. | 2022-03-10 |
20220076436 | VISUAL, DEPTH AND MICRO-VIBRATION DATA EXTRACTION USING A UNIFIED IMAGING DEVICE - A unified imaging device used for detecting and classifying objects in a scene including motion and micro-vibrations by receiving a plurality of images of the scene captured by an imaging sensor of the unified imaging device comprising a light source adapted to project on the scene a predefined structured light pattern constructed of a plurality of diffused light elements, classifying object(s) present in the scene by visually analyzing the image(s), extracting depth data of the object(s) by analyzing position of diffused light element(s) reflected from the object(s), identifying micro-vibration(s) of the object(s) by analyzing a change in a speckle pattern of the reflected diffused light element(s) in at least some consecutive images and outputting the classification, the depth data and data of the one or more micro-vibrations which are derived from the analyses of images captured by the imaging sensor and are hence inherently registered in a common coordinate system. | 2022-03-10 |
20220076437 | Method for Emulating Defocus of Sharp Rendered Images - Methods and systems for defocusing a rendered computer-generated image are presented. Pixel values for a pixel array are determined from a scene description. A blur amount for each pixel is determined based on a lens function representing a lens shape and/or effect. A blur amount and blur transparency value are determined for the pixel based on the lens function and pixel depth. A convolution range comprising pixels adjacent to the pixel is determined based on the blur amount. A blend color value is determined for the pixel based on the color value of the pixel, color values of pixels in the convolution range, and the blur transparency value. The blend color value is scaled based on the blend color value and a modified pixel color value is determined from scaled blend color values. | 2022-03-10 |
20220076438 | A Method for predicting a three-dimensional (3D) representation, apparatus, system and computer program therefor - A method for predicting a three-dimensional representation in a three-dimensional scene, using one or more two-dimensional representations obtained from at least one image generated by a camera. The method includes obtaining a forest of regression trees, and for the at least one image: extracting a 2D representation associated with one or more positions within the image; predicting a 3D representation, corresponding to the extracted 2D representation, using the regression tree forest, which includes a set of possible associations between at least one 2D representation and a 3D representation, each possible association resulting from a predictive model; evaluating one or more of the possible associations of the regression tree forest according to a predetermined confidence criterion; and updating the regression tree forest, including a deactivation of one or more possible associations, depending on the evaluation of the possible associations. | 2022-03-10 |
20220076439 | OPTICAL MARKERS FOR CALIBRATION/ALIGNMENT OF MEDICAL DIAGNOSTIC DEVICES - Optical sensors and optical markers are placed on components in a medical system to provide calibration and alignment, such as on a patient transportation mechanism and spatially separated medical diagnostic devices. Image processing circuitry uses the data captured by these optical devices to coordinate their movements and/or position. This enables scans that were captured in multiple medical diagnostic devices to be accurately aligned. | 2022-03-10 |
20220076440 | APPARATUS FOR ACQUIRING SURROUNDING INFORMATION OF VEHICLE AND METHOD FOR CONTROLLING THEREOF - An apparatus for acquiring surrounding information of a vehicle includes: a camera configured to acquire an entire image of at least one surrounding vehicle; and a controller configured to derive at least one of coordinates of a wheel image area or coordinates of a front-rear image area included in an entire image area, and determine distance information from the vehicle to the at least one surrounding vehicle based on a relative positional relationship between the entire image area and the at least one of the wheel image area coordinates or the front-rear image area coordinates. | 2022-03-10 |
20220076441 | OBJECT DETECTION - A computing device is programmed to generate a plurality of raw 3D point clouds from respective sensors having non-overlapping fields of view, scale each of the raw point clouds including scaling real-world dimensions of one or more features included in the respective raw 3D point cloud, determine a first transformation matrix that transforms a first coordinate system of a first scaled 3D point cloud of a first sensor to a second coordinate system of a second scaled 3D point cloud of a second sensor, and determine a second transformation matrix that transforms a third coordinate system of a third scaled 3D point cloud of a third sensor to the second coordinate system of the second scaled 3D point cloud of the second sensor. The computing device is further programmed to, based on the first and second transformation matrices and upon detecting an object in the first or third camera field of view, determine location coordinates of the object relative to a coordinate system that is defined based on the second coordinate system, and to output the determined location coordinates of the object. | 2022-03-10 |
20220076442 | DATA PROCESSING APPARATUS, IMAGE ANALYSIS METHOD, AND RECORDING MEDIUM - A data processing apparatus includes at least one memory, and at least one processor. The at least one processor is configured to obtain, in a first coordinate system, two-dimensional data relating to a position of a predetermined part of an object in an image, calculate three-dimensional data relating to the position of the predetermined part in a second coordinate system based on the two-dimensional data relating to the position of the predetermined part in the first coordinate system, and obtain information indicating an orientation of the object based on the calculated three-dimensional data relating to the position of the predetermined part in the second coordinate system. | 2022-03-10 |
20220076443 | Use Of Image Sensors To Query Real World for Geo-Reference Information - The present disclosure provides systems and methods that make use of one or more image sensors of a device to provide users with information relating to nearby points of interest. The image sensors may be used to detect features and/or objects in the field of view of the image sensors. Pose data, including a location and orientation of the device, is then determined based on the one or more detected features and/or objects. A plurality of points of interest that are within a geographical area that is dependent on the pose data are then determined. The determination may, for instance, be made by querying a mapping database for points of interest that are known to be located within a particular distance of the location of the user. The device then provides information to the user indicating one or more of the plurality of points of interest. | 2022-03-10 |
20220076444 | METHODS AND APPARATUSES FOR OBJECT DETECTION, AND DEVICES - A method for object detection includes: obtaining a plurality of to-be-determined targets in a to-be-detected image; determining confidences of the plurality of to-be-determined targets separately belonging to at least one category, determining categories of the plurality of to-be-determined targets according to the confidences, and determining position offset values corresponding to the respective categories of the plurality of to-be-determined targets; using the position offset values corresponding to the respective categories of the plurality of to-be-determined targets as position offset values of the plurality of to-be-determined targets; and determining position information and a category of at least one to-be-determined target in the to-be-detected image according to the categories of the plurality of to-be-determined targets, the position offset values of the plurality of to-be-determined targets, and the confidences of the plurality of to-be-determined targets belonging to the categories thereof. | 2022-03-10 |
20220076445 | INFORMATION MANAGEMENT SYSTEM, AND IN-VEHICLE DEVICE, PORTABLE DEVICE, AND IMAGE MANAGEMENT SERVER USED THEREIN - The in-vehicle device transfers the selected image, selected by the user from the photographed images stored in the storage device, and the selected vehicle state, which is the vehicle state when the selected image was captured, to the portable device. The portable device transmits the selected image and the selected vehicle state to the image management server only when the transmission is permitted by the user. The image management server determines whether or not the selected image is a rare image using the selected image and the selected vehicle state, and stores the selected image in the image storage device when the selected image is determined to be a rare image. | 2022-03-10 |
20220076446 | CAMERA ORIENTATION ESTIMATION - Techniques are described to estimate orientation of one or more cameras located on a vehicle. The orientation estimation technique can include obtaining an image from a camera located on a vehicle while the vehicle is being driven on a road, determining, from a terrain map, a location of a landmark located at a distance from a location of the vehicle on the road, determining, in the image, pixel locations of the landmark, selecting one pixel location from the determined pixel locations; and calculating values that describe an orientation of the camera using at least an intrinsic matrix and a previously known extrinsic matrix of the camera, where the intrinsic matrix is characterized based on at least the one pixel location and the location of the landmark. | 2022-03-10 |
20220076447 | METHOD AND APPARATUS FOR DETERMINING A FRONTAL BODY ORIENTATION - Embodiments are generally directed to methods and apparatuses for determining a frontal body orientation. An embodiment of a method for determining a three-dimensional (3D) orientation of frontal body of a player comprises: detecting each of a plurality of players in each of a plurality of frames captured by a plurality of cameras; for each of the plurality of cameras, tracking each of the plurality of players between continuous frames captured by the camera; and associating the plurality of frames captured by the plurality of cameras to generate the 3D orientation of each of the plurality of players. | 2022-03-10 |
20220076448 | METHOD AND APPARATUS FOR POSE IDENTIFICATION - Disclosed is a pose identification method including obtaining a depth image of a target, obtaining feature information of the depth image and position information corresponding to the feature information, and obtaining a pose identification result of the target based on the feature information and the position information. | 2022-03-10 |
20220076449 | SYSTEMS AND METHODS FOR CHARACTERIZING OBJECT POSE DETECTION AND MEASUREMENT SYSTEMS - A method for characterizing a pose estimation system includes: receiving, from a pose estimation system, first poses of an arrangement of objects in a first scene; receiving, from the pose estimation system, second poses of the arrangement of objects in a second scene, the second scene being a rigid transformation of the arrangement of objects of the first scene with respect to the pose estimation system; computing a coarse scene transformation between the first scene and the second scene; matching corresponding poses between the first poses and the second poses; computing a refined scene transformation between the first scene and the second scene based on the coarse scene transformation, the first poses, and the second poses; transforming the first poses based on the refined scene transformation to compute transformed first poses; and computing an average rotation error and an average translation error of the pose estimation system based on differences between the transformed first poses and the second poses. | 2022-03-10 |
20220076450 | MOTION CAPTURE CALIBRATION - Embodiments facilitate the calibration of cameras in a live action scene. In some embodiments, a system receives images of the live action scene from a plurality of cameras. The system further receives reference point data generated from a performance capture system, where the reference point data is based on at least three reference points, where the at least three reference points are positioned within the live action scene, and where distances between the at least three reference points are predetermined. The system further determines a location and orientation of each camera based on the reference point data. | 2022-03-10 |
20220076451 | MOTION CAPTURE CALIBRATION USING A THREE-DIMENSIONAL ASSEMBLY - Embodiments facilitate the calibration of cameras in a live action scene. In some embodiments, a system receives images of the live action scene from a plurality of cameras. The system further receives reference point data generated from a performance capture system, where the reference point data is based on a plurality of reference points coupled to a plurality of extensions coupled to a base, where the plurality of reference points are in a non-linear arrangement, where distances between reference points are predetermined. The system further computes reference point data generated from a performance capture system and based on the distances. The system further computes a location and orientation of each camera in the live action scene based on the reference point data. | 2022-03-10 |
20220076452 | MOTION CAPTURE CALIBRATION USING A WAND - Embodiments facilitate the calibration of cameras in a live action scene. In some embodiments, a system receives images of the live action scene from a plurality of cameras. The system further receives reference point data generated from a performance capture system, where the reference point data is based on at least three reference points, where the at least three reference points are attached to a linear form, and where distances between the at least three reference points are predetermined. The system further locates the at least three reference points in one or more images of the images. The system further computes one or more ratios of the distances between each adjacent pair of reference points of the at least three reference points in the one or more images. The system further determines a location and orientation of each camera based on the reference point data. | 2022-03-10 |
20220076453 | CALIBRATION APPARATUS AND CALIBRATION METHOD - Calibration with high accuracy can be realized even when the calibration is performed while driving on an actual road. Specifically, the calibration apparatus is mounted in a vehicle and includes: an image acquisition unit configured to acquire captured images obtained by a camera, which is mounted in the vehicle, capturing images of surroundings of the vehicle; a feature point extraction unit configured to extract a plurality of feature points from the captured images; a tracking unit configured to track the same feature point from a plurality of the captured images captured at different times with respect to each of the plurality of feature points, which are extracted by the feature point extraction unit, and record the tracked feature point as a feature point trajectory; a lane recognition unit configured to recognize the own vehicle's lane, which is the driving lane on which the vehicle is running, from the captured images; a sorting unit configured to sort out the feature point trajectory, which is in the same plane as a plane included in the own vehicle's lane recognized by the lane recognition unit, among feature point trajectories tracked and recorded by the tracking unit; and an external parameter estimation unit configured to estimate external parameters for the camera by using the feature point trajectory sorted out by the sorting unit. | 2022-03-10 |
20220076454 | HYPERSPECTRAL TESTING - Methods and systems for determining a recommended harvest date of a | 2022-03-10 |
20220076455 | AUGMENTED REALITY HEAD MOUNTED DISPLAYS AND METHODS OF OPERATING THE SAME FOR INSTALLING AND TROUBLESHOOTING SYSTEMS - Augmented reality (AR) using head mounted displays (HMDs) and methods of operating the same for installing and troubleshooting systems are disclosed. A disclosed example HMD configured to be worn by an installer includes: a display; a communication interface; a processor; and a non-transitory computer-readable medium storing computer-readable instructions. The instructions, when executed by the processor, cause the HMD to access, via the communication interface, installation status information from a component of a system at an installation site, and present, on the display, an augmented reality environment including the installation status information and real world content. | 2022-03-10 |
20220076456 | SYSTEM AND METHOD FOR TRANSFERRING STYLE FOR RECOGNITION AREA - A system, a method, and a computer program for transferring a style for a recognition area are provided. The range of an application object including a specific style is expanded from an image to a style of a real object or a style of a specific area included in a photo. In addition, the recognition area, previously limited to a confined photo space, is expanded to a real object and a background by using a projector beam. In addition, a wider variety of styles can be mixed and applied to a painting-style image, which is output, or to an original image. | 2022-03-10 |
20220076457 | SYNTHESIZING CLOUD STICKERS - Disclosed are systems, methods, and computer-readable storage media to modify image content. One aspect includes identifying, by one or more electronic hardware processors, an image and content within the image, determining, by the one or more electronic hardware processors, a sky region of the image, determining, by the one or more electronic hardware processors, whether the content within the image is located within the sky region of the image, and in response to the content being within the sky region of the image, modifying, by the one or more electronic hardware processors, the content based on fractal Brownian motion. | 2022-03-10 |
20220076458 | IMAGE PROCESSING APPARATUS - A processor device that is an image processing apparatus functions as an image obtaining unit that obtains an endoscopic image obtained by capturing an image of a photographic subject, a biological information calculation unit that calculates biological information concerning the photographic subject by using the endoscopic image or a display image generated by using the endoscopic image, and a texture processing unit that superimposes a plurality of textures representing the biological information on the endoscopic image or the display image and shows a boundary between adjacent textures. | 2022-03-10 |
20220076459 | IMAGE OPTIMIZATION METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM - The disclosure provides an image optimization method, system, and storage medium. The image optimization method includes extracting texture quality information from an input image. The texture quality information indicates a spatial distribution of texture quality in the input image. The image optimization method also includes performing, according to the texture quality information, texture restoration on a set region in the input image to generate a texture restored image. | 2022-03-10 |
20220076460 | SYSTEM, METHOD AND COMPUTER-ACCESSIBLE MEDIUM FOR IMAGE RECONSTRUCTION OF NON-CARTESIAN MAGNETIC RESONANCE IMAGING INFORMATION USING DEEP LEARNING - An exemplary system, method, and computer-accessible medium for generating a Cartesian equivalent image(s) of a portion(s) of a patient(s) can include, for example, receiving non-Cartesian sample information based on a magnetic resonance imaging (MRI) procedure of the portion(s) of the patient(s), and automatically generating the Cartesian equivalent image(s) from the non-Cartesian sample information using a deep learning procedure(s). The non-Cartesian sample information can be Fourier domain information. The non-Cartesian sample information can be undersampled non-Cartesian sample information. The MRI procedure can include an ultra-short echo time (UTE) pulse sequence. The UTE pulse sequence can include a delay(s) and a spoiling gradient. The Cartesian equivalent image(s) can be generated by reconstructing the Cartesian equivalent image(s). The Cartesian equivalent image(s) can be reconstructed using a sampling density compensation with a tapering over a particular percentage of a radius of a k-space, where the particular percentage can be about 50%. | 2022-03-10 |
20220076461 | SYSTEM FOR RECONSTRUCTING AN IMAGE OF AN OBJECT - The invention relates to a system for reconstructing an image of an object. The system ( | 2022-03-10 |
20220076462 | CORRECTION INSTRUCTION REGION DISPLAY APPARATUS, CORRECTION INSTRUCTION REGION DISPLAY METHOD, AND CORRECTION INSTRUCTION REGION DISPLAY PROGRAM - A first display control unit displays a first tomographic image of a three-dimensional image consisting of a plurality of tomographic images on a display unit, and superimposes and displays, as a first cross-section correction instruction region, a cross-section of a three-dimensional correction instruction region on the first tomographic image, which is used to correct a boundary of a region of interest extracted from the three-dimensional image, on the first tomographic image. A second display control unit displays at least one second tomographic image adjacent to the first tomographic image, which includes the three-dimensional correction instruction region, on the display unit, and superimposes and displays, as a second cross-section correction instruction region, a cross-section of the three-dimensional correction instruction region on the second tomographic image, on the second tomographic image. | 2022-03-10 |
20220076463 | COHESIVE MANIPULATION OF BEZIER HANDLES - The technology described herein is directed to a Bezier manipulation tool that facilitates a handle-movement paradigm for cohesive manipulation of a selected group of Bezier handles. In some implementations, the Bezier manipulation tool manipulates a selected group of Bezier handles by collectively selecting and synchronously (or concurrently) manipulating multiple handles. For example, when the Bezier manipulation tool detects a user-initiated manipulation of a reference handle of a selected group of Bezier handles, angular and radial length movements of the reference handle occurring as a result of the user-initiated manipulation are calculated relative to an anchor point associated with the reference handle. The Bezier manipulation tool cohesively manipulates other Bezier handles of the selected group of Bezier handles in accordance with the angular and radial length movements of the reference handle, e.g. {delta-theta, delta-r}, concurrent with the user-initiated manipulation of the reference handle. | 2022-03-10 |
20220076464 | UNIFIED MULTI-VIEW DATA VISUALIZATION - Systems, methods, and computer media for visualizing data are provided herein. The described examples allow multiple data visualizations generated using multiple visualization tools to be displayed in response to a single data visualization request generated using a single visualization tool. A data visualization request can specify data for inclusion in a data visualization and properties for the visualization. Features can be extracted from the request and converted to corresponding features for other visualization tools. Both the visualization tool through which the request was generated and the other visualization tools can generate data visualizations for display. | 2022-03-10 |
20220076465 | ELECTRONIC DEVICE AND METHOD FOR EDITING CONTENT OF EXTERNAL DEVICE - An electronic device according to various embodiments may include a camera circuit, a communication circuit, a display, a memory storing instructions, and a processor, configured to identify, in response to a user input, an object from content being displayed on the display, display, through the display, the object superimposed on an image being obtained through the camera circuit, wherein the image includes at least part of different content being displayed through a different electronic device, receive, while the object is being displayed, information on the different content being displayed by the different electronic device from the different electronic device through the communication circuit, determine a location of the object to be included in the different content, based on the image being obtained and the information on the different content, and transmit, in response to receiving of a specified input, information on the object and information on the location so that the object is included at the location in the different content. | 2022-03-10 |
20220076466 | PROJECTION DEVICE AND PROJECTION METHOD - A projection device and a projection method are provided. The projection device includes a picture memory, a processor, and a projection module. The picture memory stores at least one mask picture. Each of the at least one mask picture has at least one hollow portion. The processor selects a selected mask picture from the at least one mask picture, and overlays an image with the selected mask picture to generate a masked image. The masked image presents a portion of the image in an area corresponding to the at least one hollow portion. The projection module generates a projection beam corresponding to the masked image. | 2022-03-10 |
20220076467 | LOW POWER FOVEATED RENDERING TO SAVE POWER ON GPU AND/OR DISPLAY - Methods and apparatus relating to techniques for provision of low power foveated rendering to save power on GPU (Graphics Processing Unit) and/or display are described. In various embodiment, brightness/contrast, color intensity, and/or compression ratio applied to pixels in a fovea region are different than those applied in regions surrounding the fovea region. Other embodiments are also disclosed and claimed. | 2022-03-10 |
20220076468 | LANGUAGE ELEMENT VISION AUGMENTATION METHODS AND DEVICES - Near-to-eye displays support a range of applications from helping users with low vision through augmenting a real world view to displaying virtual environments. The images displayed may contain text to be read by the user. It would be beneficial to provide users with text enhancements to improve its readability and legibility, as measured through improved reading speed and/or comprehension. Such enhancements can provide benefits to both visually impaired and non-visually impaired users where legibility may be reduced by external factors as well as by visual dysfunction(s) of the user. Methodologies and system enhancements that augment text to be viewed by an individual, whatever the source of the image, are provided in order to aid the individual in poor viewing conditions and/or to overcome physiological or psychological visual defects affecting the individual or to simply improve the quality of the reading experience for the user. | 2022-03-10 |
20220076469 | INFORMATION DISPLAY DEVICE AND INFORMATION DISPLAY PROGRAM - An information display device acquires information about the current location of the device and information about the orientation of the device when the device is directed to a targeted ground object. A map information storage unit stores map information, and a target ground object identification execution unit identifies, on a map, the targeted ground object by use of the current location and orientation information. A specific information acquisition unit acquires specific information relating to the targeted ground object, and a related information acquisition unit acquires related information relating to the targeted ground object by using a search process based on the specific information. The target ground object identification execution unit identifies, from among the ground objects on the map that are located in the direction in which the information display device is directed, the ground object closest to the current location of the information display device on the map. | 2022-03-10 |
20220076470 | METHODS AND APPARATUSES FOR GENERATING MODEL AND GENERATING 3D ANIMATION, DEVICES AND STORAGE MEDIUMS - Methods and apparatuses for generating a model and generating a 3D animation, devices, and storage mediums are provided. The method for generating a model may include: acquiring a preset sample set; acquiring pre-established generative adversarial nets, the generative adversarial nets including a generator and a discriminator; and performing training steps as follows: selecting a sample from the sample set; extracting a sample audio feature from the sample audio of the sample; inputting the sample audio feature into the generator to obtain a pseudo 3D mesh vertex sequence of the sample; inputting the pseudo 3D mesh vertex sequence and the real 3D mesh vertex sequence of the sample into the discriminator to discriminate authenticity of 3D mesh vertices; and in response to determining that the generative adversarial nets meet a training completion condition, obtaining a trained generator as a model for generating a 3D animation. | 2022-03-10 |
20220076471 | Systems and Methods for Data Bundles in Computer Animation - An animation system wherein a machine learning model is adopted to learn a transformation relationship between facial muscle movements and skin surface movements. For example, for the skin surface representing “smile,” the transformation model derives movement vectors relating to what facial muscles are activated, what are the muscle strains, what is the joint movement, and/or the like. Such derived movement vectors may be used to simulate the skin surface “smile.” | 2022-03-10 |
20220076472 | SYSTEM AND METHOD FOR GENERATING CHARACTER POSES USING DEEP LEARNING - A method of generating or modifying poses in an animation of a character is disclosed. Variable numbers and types of supplied inputs are combined into a single input. The variable numbers and types of supplied inputs correspond to one or more effector constraints for one or more joints of the character. The single input is transformed into a pose embedding. The pose embedding includes a machine-learned representation of the single input. The pose embedding is expanded into a pose representation output. The pose representation output includes local rotation data and global position data for the one or more joints of the character. | 2022-03-10 |
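The pack-embed-expand pipeline this abstract describes can be illustrated with a toy sketch. The fixed-size slot packing, the sinusoidal "embedding", and the decoder below are all invented stand-ins for the learned networks, not the patent's actual method:

```python
import math

def combine_effectors(effectors, max_effectors=4, slot_size=4):
    """Pack a variable number of effector constraints into one fixed-size
    input vector, padding unused slots with zeros (one hypothetical way
    to realize the abstract's 'single input'). Each effector is a tuple
    (kind_id, x, y, z), e.g. kind 1.0 = position, 2.0 = look-at."""
    vec = []
    for i in range(max_effectors):
        if i < len(effectors):
            kind, x, y, z = effectors[i]
            vec.extend([float(kind), x, y, z])
        else:
            vec.extend([0.0] * slot_size)
    return vec

def embed(vec, dim=3):
    """Toy 'pose embedding': a fixed projection standing in for the
    learned encoder that produces the machine-learned representation."""
    return [sum(v * math.sin(i + j) for j, v in enumerate(vec))
            for i in range(dim)]

def expand(embedding, num_joints=2):
    """Toy decoder: expand the embedding into per-joint local rotation
    data (3 values, bounded by tanh) and global position data (3 values)."""
    pose = []
    for j in range(num_joints):
        rot = [math.tanh(e + j) for e in embedding]
        pos = [e * 0.1 * (j + 1) for e in embedding]
        pose.append({"rotation": rot, "position": pos})
    return pose
```

The point of the sketch is the data flow: any number of constraints collapses to one fixed-width input, and one embedding fans back out to every joint.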
20220076473 | METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR GENERATING ANIMATION SEQUENCE - A method for generating an animation sequence is provided. The method includes the steps of: determining attribute information on at least one of a motion and an effect of a target object on the basis of change information on at least one of a position and a posture of a camera; and generating an animation sequence of the target object with reference to the determined attribute information. | 2022-03-10 |
20220076474 | COMPUTER GENERATED HAIR GROOM TRANSFER TOOL - A computer generated (CG) hair groom for a virtual character can include strand-based (also referred to as instanced) hair in which many thousands of digital strands represent real human hair strands. Embodiments of systems and methods for transferring CG hair groom data from a first (or source) virtual character to a second (or target) virtual character are provided. Some embodiments can factor in a difference between a hairline of the first virtual character and a hairline of the second virtual character to improve the overall appearance or fit of the hair groom on the second virtual character. | 2022-03-10 |
20220076475 | PHOTOREAL CHARACTER CONFIGURATIONS FOR SPATIAL COMPUTING - Systems and methods for displaying a virtual character in a mixed reality environment are disclosed. In some embodiments, a view of the virtual character is based on an animation rig comprising primary joints and helper joints. The animation rig may be in a pose defined by spatial relationships between the primary joints and helper joints. The virtual character may be moving in the mixed reality environment. In some instances, the virtual character may be moving based on a comparison of interestingness values associated with elements in the mixed reality environment. The spatial relationship transformation associated with the movement may be indicated by movement information. In some embodiments, the movement information is received from a neural network. | 2022-03-10 |
20220076476 | METHOD FOR GENERATING USER AVATAR, RELATED APPARATUS AND COMPUTER PROGRAM PRODUCT - A method and apparatus for generating a user avatar, an electronic device, and a computer readable storage medium are provided. The method may include: receiving expression-driven information and a target avatar model, which are sent when the rate at which an original rendering device renders the corresponding dynamic avatar falls below a preset rate; driving the target avatar model based on the expression-driven information to generate a dynamic avatar of the user; and pushing the dynamic avatar as a substitute avatar of the user to another user. | 2022-03-10 |
20220076477 | METHOD FOR GENERATING SIMULATIONS OF THIN FILM INTERFACES FOR IMPROVED ANIMATION - A method for generating one or more visual representations of an object colliding with an interface between a simulated fluid and a material. The method includes obtaining shape and movement data of a bulk fluid and an object, identifying an interface where the bulk fluid covers a portion of the object, generating an emitted fluid at the interface, and generating shape and movement data of the emitted fluid interacting with the object. | 2022-03-10 |
20220076478 | COMMUNICATION SYSTEM AND METHOD FOR PROVIDING A BIONIC VIRTUAL MEETING ROOM - The invention relates to a communication system and a method for providing a virtual meeting of a first user (U | 2022-03-10 |
20220076479 | WRITE OUT STAGE GENERATED BOUNDING VOLUMES - Systems, apparatuses and methods may provide for technology that optimizes tiled rendering for workloads in a graphics pipeline including tessellation and use of a geometry shader. More particularly, systems, apparatuses and methods may provide a way to generate, by a write out fixed-function stage, one or more bounding volumes based on geometry data, as inputs to one or more stages of the graphics pipeline. The systems, apparatuses and methods may compute multiple bounding volumes in parallel, improve the gamer experience, and enable photorealistic renderings at full speed (e.g., of human skin and facial expressions) that render three-dimensional (3D) action more realistically. | 2022-03-10 |
20220076480 | CLOUD BASED DISTRIBUTED SINGLE GAME CALCULATION OF SHARED COMPUTATIONAL WORK FOR MULTIPLE CLOUD GAMING CLIENT DEVICES - Systems, apparatuses, and methods may provide for technology to process graphics data in a virtual gaming environment. The technology may identify, from graphics data in a graphics application, redundant graphics calculations relating to common frame characteristics of one or more graphical scenes to be shared between client game devices of a plurality of users and calculate, in response to the identified redundant graphics calculations, frame characteristics relating to the one or more graphical scenes. Additionally, the technology may send, over a computer network, the calculation of the frame characteristics to the client game devices. | 2022-03-10 |
20220076481 | SPATIOTEMPORAL SELF-GUIDED SHADOW DENOISING IN RAY-TRACING APPLICATIONS - In examples, a filter used to denoise shadows for a pixel(s) may be adapted based at least on variance in temporally accumulated ray-traced samples. A range of filter values for a spatiotemporal filter may be defined based on the variance and used to exclude temporal ray-traced samples that are outside of the range. Data used to compute a first moment of a distribution used to compute variance may be used to compute a second moment of the distribution. For binary signals, such as visibility, the first moment (e.g., accumulated mean) may be equivalent to a second moment (e.g., the mean squared). In further respects, spatial filtering of a pixel(s) may be skipped based on comparing the mean of variance of the pixel(s) to one or more thresholds and based on the accumulated number of values for the pixel. | 2022-03-10 |
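The abstract's point about binary signals is easy to verify concretely: for a visibility sample x in {0, 1}, x squared equals x, so a single accumulator yields both the first and second moments, and the variance reduces to m1 - m1^2. A minimal plain-Python sketch (not the patented filter itself):

```python
def accumulate_visibility(samples):
    """Accumulate moments of a binary visibility signal
    (0 = shadowed, 1 = lit).

    Because x**2 == x for x in {0, 1}, the first moment (accumulated
    mean) doubles as the second moment, so the variance needed to drive
    the spatiotemporal filter comes from one accumulator: m1 - m1**2.
    """
    n = len(samples)
    m1 = sum(samples) / n                   # first moment (mean)
    m2 = sum(s * s for s in samples) / n    # second moment; equals m1 here
    variance = m2 - m1 * m1
    return m1, m2, variance
```

In a real denoiser these would be running (temporally accumulated) averages per pixel rather than list sums, and the resulting variance would set the range of admissible filter values.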
20220076482 | RAY-TRACING FOR AUTO EXPOSURE - In various examples, a virtual light meter may be implemented along with ray tracing techniques in order to determine incident light values—e.g., incoming irradiance, incident radiance, etc.—for adjusting auto exposure values of rendered frames. For example, one or more rays may be used to sample incident light over a sampling pattern—such as a hemispherical sampling pattern—for any position in a virtual game environment. As a result, the incident light values may be sampled near a subject of interest in a scene or frame such that exposure values are consistent or stable regardless of the composition of the rendered frames. | 2022-03-10 |
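A hedged sketch of the hemispherical incident-light sampling described here, using uniform hemisphere sampling around the +Z axis (a full renderer would build a tangent frame around the actual surface normal). `radiance_fn` is a hypothetical scene query standing in for a ray-traced lookup into the game environment:

```python
import math
import random

def sample_incident_light(radiance_fn, num_rays=1024, rng=None):
    """Monte Carlo estimate of irradiance at a point: average incident
    radiance over uniformly sampled hemisphere directions, weighted by
    cos(theta) and divided by the uniform-hemisphere pdf 1/(2*pi)."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(num_rays):
        u1, u2 = rng.random(), rng.random()
        z = u1                                   # cos(theta); uniform on the hemisphere
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * u2
        d = (r * math.cos(phi), r * math.sin(phi), z)
        # Estimator term: radiance * cos(theta) / pdf.
        total += radiance_fn(d) * z * (2.0 * math.pi)
    return total / num_rays

def exposure_from_irradiance(irradiance, key=0.18):
    """Map the estimated irradiance to an exposure multiplier that pulls
    the metered value toward a middle-grey `key` (the key value is an
    illustrative assumption, not from the patent)."""
    return key / max(irradiance, 1e-6)
```

Because the meter samples light arriving near the subject rather than averaging the rendered frame, the resulting exposure stays stable as the frame's composition changes, which is the behavior the abstract claims.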
20220076483 | PATH GUIDING FOR PATH-TRACED RENDERING - A computer-implemented method for generating a mask for a light source in a virtual scene includes determining a bounding box for the scene based on a frustum of a virtual camera and generating a path-traced image of the scene within the bounding box. Light paths emitted by the camera and exiting at the light source are stored, and objects poorly sampled by the light source are removed from the scene. An initial mask for the light source is generated from the density of light paths exiting at that position on the light source. The initial mask is refined by averaging in the light path density at each point on the light source for subsequent images. | 2022-03-10 |
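The density-to-mask step in this abstract can be sketched as a simple 2D histogram over normalized light-surface coordinates, with refinement as a running average over subsequent images. The grid resolution and averaging weight below are illustrative assumptions:

```python
def light_source_mask(exit_points, grid_size=4):
    """Build an initial light-source mask from the density of stored
    light paths exiting at each position on the light. `exit_points`
    are (u, v) positions in normalized [0, 1) light-surface coordinates."""
    mask = [[0.0] * grid_size for _ in range(grid_size)]
    for u, v in exit_points:
        row = min(grid_size - 1, int(v * grid_size))
        col = min(grid_size - 1, int(u * grid_size))
        mask[row][col] += 1.0
    # Normalize by the peak density so the mask lies in [0, 1].
    peak = max(max(row) for row in mask) or 1.0
    return [[c / peak for c in row] for row in mask]

def refine_mask(mask, new_mask, weight=0.5):
    """Refine the mask by averaging in the path density measured from a
    subsequent image, as the abstract describes."""
    return [[(1.0 - weight) * a + weight * b for a, b in zip(ra, rb)]
            for ra, rb in zip(mask, new_mask)]
```

A path tracer would then importance-sample the light proportionally to this mask, concentrating rays on the regions of the light that actually contribute to the camera's view.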
20220076484 | Virtual-World Simulator - In one implementation, a virtual-world simulator includes a computing platform having a hardware processor and a memory storing a software code, a tracking system communicatively coupled to the computing platform, and a projection device communicatively coupled to the computing platform. The hardware processor is configured to execute the software code to obtain a map of a geometry of a real-world venue including the virtual-world simulator, to identify one or more virtual effects for display in the real-world venue, and to use the tracking system to track a moving perspective of one of a user in the real-world venue or a camera in the real-world venue. The hardware processor is further configured to execute the software code to control the projection device to simulate a virtual world by conforming the identified one or more virtual effects to the geometry of the real-world venue from a present vantage point of the tracked moving perspective. | 2022-03-10 |
20220076485 | INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - There is provided an information processing apparatus and an information processing method that enable use of a thumbnail for 3D object still image content. A 3D object is used as original data, and role information that is information indicating that thumbnail data generated from the original data is a thumbnail based on the original data is generated. Then, the role information and encoded data obtained by encoding one frame of the 3D object by a predetermined encoding method are stored in a file having a predetermined file structure. The present technology can be applied to, for example, a data generation apparatus that generates a file that stores encoded data of Point Cloud without time information and its thumbnail. | 2022-03-10 |
20220076486 | MAP DATA FILTERING FOR SIMULATED-DRIVING-ENVIRONMENT GENERATION - According to an aspect of an embodiment, a method may include obtaining map data of a physical area. The map data may include surface data included in the physical area. The surface data may include locations of one or more objects at the surface. The method may also include identifying a first relationship between the map data and sensor information used by autonomous-vehicle software of an autonomous vehicle for navigation. The method may also include generating a three-dimensional environmental representation that represents the physical area based on the map data and the first relationship between the map data and the sensor information. The three-dimensional environmental representation may be configurable as a simulation environment with respect to testing the autonomous-vehicle software. | 2022-03-10 |
20220076487 | METHOD AND APPARATUS FOR DISPLAYING HEAT MAP, COMPUTER DEVICE, AND READABLE STORAGE MEDIUM - This application discloses a method and apparatus for displaying a heat map, a computer device, and a readable storage medium, and relates to the field of interface display. The method includes: acquiring coordinate data of a heat point position, and transmitting the coordinate data to a graphics processing unit (GPU); converting a point primitive set corresponding to the coordinate data into a patch primitive set by the GPU; shading and rendering the patch primitive set by the GPU; and displaying a heat map corresponding to the heat point position. In a process of calculating and rendering the heat map, a central processing unit (CPU) only needs to confirm coordinate data of a heat point position before transmitting point primitives corresponding to the coordinate data to the GPU, and the heat map is calculated by the GPU for rendering. Through the foregoing method, in the process of calculating and rendering the heat map, most operations are transferred to the GPU, thereby reducing calculation pressure of the CPU, and releasing computing resources of the CPU for other program logic to use. | 2022-03-10 |
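The CPU/GPU split described in this abstract can be emulated in plain Python: the CPU-side work is just the list of heat-point coordinates, while the two functions below mimic, in software, the point-to-patch expansion and shading that the abstract assigns to the GPU. The quadratic falloff function is an illustrative assumption:

```python
def points_to_patches(points, radius):
    """Expand each heat point (the CPU sends only its coordinates) into a
    square patch primitive around it, mimicking the GPU's point-to-patch
    conversion. Returns (x0, y0, x1, y1, cx, cy) per patch."""
    return [(x - radius, y - radius, x + radius, y + radius, x, y)
            for x, y in points]

def shade_heat_map(points, width, height, radius=2.0):
    """Software stand-in for the GPU shading pass: every pixel inside a
    patch accumulates a weight that falls off quadratically with distance
    from the patch centre."""
    grid = [[0.0] * width for _ in range(height)]
    for x0, y0, x1, y1, cx, cy in points_to_patches(points, radius):
        for py in range(max(0, int(y0)), min(height, int(y1) + 1)):
            for px in range(max(0, int(x0)), min(width, int(x1) + 1)):
                d2 = (px - cx) ** 2 + (py - cy) ** 2
                grid[py][px] += max(0.0, 1.0 - d2 / (radius * radius))
    return grid
```

On actual hardware both loops run per-fragment in a shader, which is what lets the CPU hand off everything after the coordinate upload.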
20220076488 | TECHNIQUES FOR GENERATING STYLIZED QUAD-MESHES FROM TRI-MESHES - In various embodiments, a stylization subsystem automatically modifies a three-dimensional (3D) object design. In operation, the stylization subsystem generates a simplified quad mesh based on an input triangle mesh that represents the 3D object design, a preferred orientation associated with at least a portion of the input triangle mesh, and mesh complexity constraint(s). The stylization subsystem then converts the simplified quad mesh to a simplified T-spline. Subsequently, the stylization subsystem creases one or more of the edges included in the simplified T-spline to generate a stylized T-spline. Notably, the stylized T-spline represents a stylized design that is more convergent with the preferred orientation(s) than the 3D object design. Advantageously, relative to prior art approaches, the stylization subsystem can more efficiently modify the 3D object design to improve overall aesthetics and manufacturability. | 2022-03-10 |
20220076489 | SYSTEMS AND METHODS FOR GENERATING SUPPLEMENTAL CONTENT IN AN EXTENDED REALITY ENVIRONMENT - Systems and methods are disclosed herein for presenting supplemental content in an extended reality (XR) environment. The system may receive fields of view of an XR environment and generate a data stream representing the fields of view, the data stream comprising, for each field of view, a data structure including at least one object identifier corresponding to a visual item appearing on the respective field of view, and field coordinates representing the position of the field of view in the XR environment. The system may compute an importance score using an occurrence number of the object identifier and a view change rate using the field coordinates. In response to the importance score exceeding an importance threshold, the system generates instructions for displaying in the XR environment supplemental content, related to the visual item. | 2022-03-10 |
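One plausible reading of the scoring in this abstract, sketched with invented data shapes (an `object_ids` list and a `coords` tuple per field of view; the thresholds are illustrative assumptions):

```python
import math
from collections import Counter

def importance_scores(fields_of_view):
    """Count object-identifier occurrences across the fields of view; the
    occurrence number serves as the importance score."""
    counts = Counter()
    for fov in fields_of_view:
        counts.update(fov["object_ids"])
    return counts

def view_change_rate(fields_of_view):
    """Average distance between successive field coordinates: a proxy for
    how fast the user's view moves through the XR environment."""
    coords = [fov["coords"] for fov in fields_of_view]
    if len(coords) < 2:
        return 0.0
    total = sum(math.dist(a, b) for a, b in zip(coords, coords[1:]))
    return total / (len(coords) - 1)

def select_supplemental(fields_of_view, importance_threshold=2,
                        max_change_rate=1.0):
    """Return object ids eligible for supplemental content: frequently
    seen items, but only while the view changes slowly enough for the
    user to read the content."""
    if view_change_rate(fields_of_view) > max_change_rate:
        return []
    scores = importance_scores(fields_of_view)
    return [oid for oid, c in scores.items() if c > importance_threshold]
```

Gating on the change rate as well as the importance score matches the abstract's use of both quantities, though the exact combination rule in the patent may differ.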
20220076490 | AUGMENTED REALITY APPLICATION FOR INTERACTING WITH BUILDING MODELS - Systems and methods are disclosed for the generation and interactive display of three-dimensional (3D) building models, for the determination and simulation of risk associated with the underlying buildings, for the prediction of changes to the underlying buildings associated with the risk, for the generation and display of updated 3D building models factoring in the risk or predicted changes, and for the collection and display of relevant contextual information associated with the underlying building while presenting a 3D building model. The 3D building model can be displayed in an augmented reality (AR) view, and a user can interact with the 3D building model via controls present in the AR view and/or by moving within the real-world location in which the user is present. | 2022-03-10 |
20220076491 | FACILITATION OF AUGMENTED REALITY-BASED SPACE ASSESSMENT - A view can be presented with an augmented reality (AR) view of the space. The AR view can be augmented with imagery to indicate to the viewer environmental conditions that may not otherwise be known to the viewer. The viewer can also initiate alterations to the environment based on the information and recommendations presented in the AR view. Current conditions, past trends, and forecasted future trends can be included in the creation of the AR displays. | 2022-03-10 |