14th week of 2022 patent application highlights part 41 |
Patent application number | Title | Published |
20220108438 | SOMATIC MUTATION DETECTION APPARATUS AND METHOD WITH REDUCED SEQUENCING PLATFORM-SPECIFIC ERROR - A mutation detection apparatus includes a memory configured to store software for implementing a neural network and a processor configured to detect a mutation by executing the software, wherein the processor is configured to generate first genome data extracted from a target tissue and second genome data extracted from a normal tissue, extract image data by preprocessing the first genome data and the second genome data, and detect a mutation of the target tissue on the basis of the image data through the neural network trained to correct a sequencing platform-specific false positive. | 2022-04-07 |
20220108439 | IMAGE-BASED DISEASE DIAGNOSTICS USING A MOBILE DEVICE - A diagnostic system performs disease diagnostic tests using at least an optical property modifying device and a mobile device. A user provides a biological sample from a patient to the optical property modifying device that reacts with a reagent in one or more reaction chambers of the device. The user captures one or more images of the one or more reaction chambers using an optical sensor of the mobile device. The diagnostic system can determine a quality level of the images based on factors such as skew, scale, focusing, shadowing, or white-balancing. Based on an analysis of the captured image, the diagnostic system can determine a test result of a disease diagnostic test for the patient. The diagnostic system may communicate the test result, as well as instructions for the disease diagnostic test, to the user via the mobile device. | 2022-04-07 |
20220108440 | Ostomy condition classification with masking, devices and related methods - A method for classifying an ostomy condition, the method comprising: obtaining image data, the image data comprising stoma image data of a stomal area including a stoma and/or appliance image data of an adhesive surface of an ostomy appliance; determining one or more image representations based on the image data; determining one or more ostomy representations including a first ostomy parameter based on the one or more image representations; and outputting the first ostomy parameter. | 2022-04-07 |
20220108441 | PROCESS, DEVICE AND PROGRAM FOR MONITORING THE STATE OF PLANTS - An electronic device for monitoring the state of health of a plant is disclosed. The electronic device comprises a camera, a first near-infrared optical filter, a second red optical filter and a processing unit. The first filter is configured to receive and filter a first image representative of the at least one plant and to generate therefrom a first filtered image. The second filter is configured to receive and filter a second image representative of said at least one plant and to generate therefrom a second filtered image. The camera is configured to acquire the first and second filtered images of the at least one plant, generating a first and a second acquired digital image, respectively. The processing unit is configured to calculate information representative of the state of health of the plant as a function of the first and second acquired digital images. | 2022-04-07 |
20220108442 | Identifying Morphologic, Histopathologic, and Pathologic Features with a Neural Network - A system and method for use in a standardized laboratory for a specimen including a staining specific for a marker in the specimen. The method includes scanning an image, having an image magnification, of the specimen; and detecting morphologic, histopathologic and pathologic (MHP) features in the image, where the app includes a neural network (NN) trained by (a) importing into the NN, control images and associated annotations, where each of the associated annotations identifies one of the MHP features, (b) analyzing a test image with the NN to generate testing annotations for portions of the test image, (c) assessing whether the testing annotations are satisfactory, (d) enhancing the NN when the testing annotations made by the NN are unsatisfactory by repeating the importing, the analyzing and the assessing, and (e) creating the app including the NN when the testing annotations made by the NN are satisfactory. | 2022-04-07 |
20220108443 | RAPID AND AUTOMATIC VIRUS IMAGING AND ANALYSIS SYSTEM AS WELL AS METHODS THEREOF - A rapid and automatic virus imaging and analysis system includes (i) electron optical sub-systems (EOSs), each of which has a large field of view (FOV) and is capable of instant magnification switching for rapidly scanning a virus sample; (ii) sample management sub-systems (SMSs), each of which automatically loads virus samples into one of the EOSs for virus sample scanning and then unloads the virus samples from the EOS after the virus sample scanning is completed; (iii) virus detection and classification sub-systems (VDCSs), each of which automatically detects and classifies a virus based on images from the EOS virus sample scanning; and (iv) a cloud-based collaboration sub-system for analyzing the virus sample scanning images, storing images from the EOS virus sample scanning, and storing and analyzing machine data associated with the EOSs, the SMSs, and the VDCSs. | 2022-04-07 |
20220108444 | SYSTEMS AND METHODS FOR PROCESSING ELECTRONIC IMAGES TO PROVIDE LOCALIZED SEMANTIC ANALYSIS OF WHOLE SLIDE IMAGES - Systems and methods are disclosed for identifying formerly conjoined pieces of tissue in a specimen, comprising receiving one or more digital images associated with a pathology specimen, identifying a plurality of pieces of tissue by applying an instance segmentation system to the one or more digital images, the instance segmentation system having been generated by processing a plurality of training images, determining, using the instance segmentation system, a prediction of whether any of the plurality of pieces of tissue were formerly conjoined, and outputting at least one instance segmentation to a digital storage device and/or display, the instance segmentation comprising an indication of whether any of the plurality of pieces of tissue were formerly conjoined. | 2022-04-07 |
20220108445 | SYSTEMS AND METHODS FOR ACNE COUNTING, LOCALIZATION AND VISUALIZATION - Systems, methods and techniques provide for acne localization, counting and visualization. An image is processed using a trained model to identify objects. The model may be a deep learning (e.g. convolutional neural) network configured for object classification with a detection focus on small objects. The image may be a frontal or profile facial image, processed end to end. The model identifies and localizes different types of acne. Instances are counted and visualized such as by annotating the source image. An example annotation is an overlay identifying a type and location of each instance. Counts by acne type assist with scoring. A product and/or service may be recommended in response to the identification of the acne (e.g. the type, localization, counting and/or a score). | 2022-04-07 |
20220108446 | SYSTEMS AND METHODS TO PROCESS ELECTRONIC IMAGES TO PROVIDE LOCALIZED SEMANTIC ANALYSIS OF WHOLE SLIDE IMAGES - Systems and methods are disclosed for identifying formerly conjoined pieces of tissue in a specimen, comprising receiving one or more digital images associated with a pathology specimen, identifying a plurality of pieces of tissue by applying an instance segmentation system to the one or more digital images, the instance segmentation system having been generated by processing a plurality of training images, determining, using the instance segmentation system, a prediction of whether any of the plurality of pieces of tissue were formerly conjoined, and outputting at least one instance segmentation to a digital storage device and/or display, the instance segmentation comprising an indication of whether any of the plurality of pieces of tissue were formerly conjoined. | 2022-04-07 |
20220108447 | WOUND HEALING ANALYSIS AND TRACKING - This disclosure is directed towards a patient management system for analyzing images of wounds and tracking the progression of wounds over time. In some examples, a computing device of the patient management system receives an image, and determines that the image depicts a wound. The computing device inputs the image into a machine-learned model trained to classify wounds, and receives, from the machine-learned model, a classification of the wound. The computing device may then display the classification of the wound in a user interface. Additionally, the patient management system may train a machine-learned model to classify wounds. | 2022-04-07 |
20220108448 | IMAGE RECORDING APPARATUS, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM - An information processing apparatus includes a processor. The processor acquires a medical image, identifies the acquired medical image and acquires an identification result, analyzes a motion relating to interpretation of the medical image by a user to acquire a motion analysis result, and compares the identification result and the motion analysis result and acquires a comparison result relating to coincidence or noncoincidence of the identification result and the motion analysis result. | 2022-04-07 |
20220108449 | METHOD AND DEVICE FOR NEURAL NETWORK-BASED OPTICAL COHERENCE TOMOGRAPHY (OCT) IMAGE LESION DETECTION, AND MEDIUM - A method and device for neural network-based optical coherence tomography (OCT) image lesion detection, and a medium are provided. The method includes the following. An OCT image is obtained. The OCT image is inputted into a lesion-detection network model. A position, a category score, and a positive score of each lesion box in the OCT image are outputted through the lesion-detection network model. A lesion detection result of the OCT image is obtained according to the position, the category score, and the positive score of each lesion box. The lesion-detection network model includes a category detection branch configured to obtain, for each of the anchor boxes, a position and a category score of the anchor box, and a lesion positive score regression branch configured to obtain, for each of the anchor boxes, a positive score of whether the anchor box belongs to a lesion, to reflect severity of lesion positive. | 2022-04-07 |
20220108450 | SURGICAL SIMULATOR PROVIDING LABELED DATA - A surgical simulator for simulating a surgical scenario comprises a display system, a user interface, and a controller. The controller includes one or more processors coupled to memory that stores instructions that when executed cause the system to perform operations. The operations include generating simulated surgical videos, each representative of the surgical scenario. The operations further include associating simulated ground truth data from the simulation with the simulated surgical videos. The ground truth data corresponds to context information of at least one of a simulated surgical instrument, a simulated anatomical region, a simulated surgical task, or a simulated action. The operations further include annotating features of the simulated surgical videos based, at least in part, on the simulated ground truth data for training a machine learning model. | 2022-04-07 |
20220108451 | LEARNING DEVICE, METHOD, AND PROGRAM, MEDICAL IMAGE PROCESSING APPARATUS, METHOD, AND PROGRAM, AND DISCRIMINATOR - An information acquisition unit acquires a learning image including a disease region and a first teacher label that specifies the disease region included in the learning image. A teacher label generation unit generates at least one second teacher label of which a criterion for specifying the disease region is different from the first teacher label. A learning unit trains a discriminator that detects a disease region included in a target image on the basis of the learning image, the first teacher label, and the at least one second teacher label. | 2022-04-07 |
20220108452 | METHOD AND DEVICE FOR IMAGE PROCESSING, ELECTRONIC DEVICE AND STORAGE MEDIUM - A method and device for image processing, an electronic device and a storage medium are disclosed. The method includes: acquiring an image sequence to be processed; obtaining a target image sequence section by determining, in the image sequence to be processed, an image sequence section where a target image is located; and determining an image region corresponding to at least one image feature class in the target image sequence section by segmenting the target image in the target image sequence section. | 2022-04-07 |
20220108453 | SYSTEM AND METHOD FOR PERFORMING QUALITY CONTROL - Disclosed are example embodiments of methods and systems for identifying and quantifying manufacturing defects of a manufactured dental prosthesis. Certain embodiments of the system for performing quality control on manufactured dental prostheses include: a quality control module configured to determine whether the dental prosthesis is a good or a defective product based at least on a differences model generated by comparing a design model and a scanned model of the manufactured dental prosthesis. | 2022-04-07 |
20220108454 | SEGMENTATION FOR IMAGE EFFECTS - Systems, methods, and computer-readable media are provided for foreground image segmentation. In some examples, a method can include obtaining a first image of a target and a second image of the target, the first image having a first field-of-view (FOV) and the second image having a second FOV; determining, based on the first image, a first segmentation map that identifies a first estimated foreground region in the first image; determining, based on the second image, a second segmentation map that identifies a second estimated foreground region in the second image; generating a third segmentation map based on the first segmentation map and the second segmentation map; and generating, using the second segmentation map and the third segmentation map, a refined segmentation mask that identifies the target as a foreground region of the first and/or second image. | 2022-04-07 |
20220108455 | RGBD VIDEO SEMANTIC SEGMENTATION WITH TEMPORAL AND GEOMETRIC CONSISTENCY - A method, machine readable medium and system for RGBD semantic segmentation of video data includes determining semantic segmentation data and depth segmentation data for less than all classes for images of each frame of a first video, determining semantic segmentation data and depth segmentation data for images of each key frame of a second video including a synchronous combination of respective frames of the RGB video and the depth-aware video in parallel to the determination of the semantic segmentation data and the depth segmentation data for each frame of the first video, temporally and geometrically aligning respective frames of the first video and the second video, and predicting semantic segmentation data and depth segmentation data for images of a subsequent frame of the first video based on the determination of the semantic segmentation data and depth segmentation data for images of a key frame of the second video. | 2022-04-07 |
20220108456 | Methods and Systems for Filtering Portions of an Image - A computer implemented method for filtering portions of an image comprises the following steps carried out by computer hardware components: dividing the image into a plurality of segments, each segments comprising a plurality of pixels; for each of the segments, determining at least one of an expected value, a standard deviation, and a kurtosis of the plurality of pixels of the respective segment; clustering the plurality of segments into a plurality of clusters based on the at least one of the expected value, the standard deviation, and the kurtosis of the plurality of pixels of the respective segment; for each of the clusters, determining the respective cluster as belonging to a background based on a size of the respective cluster; and determining a filtered image based on the background. | 2022-04-07 |
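The filtering pipeline in 20220108456 (segment statistics, clustering, largest-cluster-as-background) is concrete enough to sketch. The following is an illustrative reconstruction only — the tile size, the two-cluster k-means, the deterministic initialization, and the rule that the largest cluster is background are all assumptions, not the patented method.

```python
import numpy as np

def segment_stats(image, tile=8):
    """Split a grayscale image into tile x tile segments and compute a
    (mean, std, kurtosis) feature vector per segment."""
    h, w = image.shape
    feats, coords = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            px = image[y:y + tile, x:x + tile].ravel().astype(float)
            mu, sd = px.mean(), px.std()
            # excess kurtosis, guarding against flat (zero-variance) tiles
            kurt = ((px - mu) ** 4).mean() / sd ** 4 - 3.0 if sd > 0 else 0.0
            feats.append((mu, sd, kurt))
            coords.append((y, x))
    return np.array(feats), coords

def filter_background(image, tile=8, iters=10):
    """Two-cluster k-means on the segment statistics; the larger cluster
    is treated as background and zeroed out."""
    feats, coords = segment_stats(image, tile)
    # deterministic init: the darkest-mean and brightest-mean segments
    centers = feats[[feats[:, 0].argmin(), feats[:, 0].argmax()]].copy()
    for _ in range(iters):
        labels = np.argmin(((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = feats[labels == k].mean(axis=0)
    background = int(np.argmax(np.bincount(labels, minlength=2)))
    out = image.copy()
    for (y, x), lab in zip(coords, labels):
        if lab == background:
            out[y:y + tile, x:x + tile] = 0
    return out

# synthetic frame: flat dark background with one bright textured patch
rng = np.random.default_rng(1)
img = np.full((32, 32), 10, dtype=np.uint8)
img[8:16, 8:16] = rng.integers(150, 255, (8, 8), dtype=np.uint8)
filtered = filter_background(img)
```

On the synthetic frame, the fifteen flat tiles cluster together and are zeroed, while the bright textured tile survives.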
20220108457 | IMAGING DEVICE, ELECTRONIC DEVICE, AND IMAGING METHOD - To provide an imaging device and an electronic device capable of capturing a motion of a subject at a moment desired by a user in the motions of the subject. A CIS ( | 2022-04-07 |
20220108458 | EYE POSITION TRACKING SENSOR AND METHOD - A method of eye tracking includes: irradiating a light pattern, output from at least one collimated light source, to a cornea surface; detecting at least a part of the light pattern reflected from the cornea surface, the at least the part of the light pattern being guided by a sensor waveguide; obtaining a mapping image corresponding to the at least the part of the light pattern; and determining a direction of a gaze based on the obtained mapping image. The sensor waveguide used to determine the direction of the gaze is different from a waveguide for displaying output information. | 2022-04-07 |
20220108459 | SYSTEMS AND METHODS FOR IMAGE PROCESSING - The present disclosure provides a system and method for image reconstruction. The method may include obtaining training samples, the training samples including at least one sample first image generated based on a first tracer, at least one reference first image each of which corresponds to one of the at least one sample first image and has a higher image quality than the corresponding sample first image, at least one sample second image generated based on a second tracer different from the first tracer, and at least one reference second image each of which corresponds to one of the at least one sample second images and has a higher image quality than the corresponding sample second image; and generating a trained image processing model by training a preliminary model using the training samples. | 2022-04-07 |
20220108460 | SYSTEM AND METHOD FOR USE IN GEO-SPATIAL REGISTRATION - A method for geo-spatial registration of one or more cameras. The method comprises: providing cameras located to collect image data of a region; and providing operators who carry positioning devices and move along selected paths in the region, the positioning devices generating position data sets for said selected paths. The operators carry selected visible markings, and marking data of the selected visible markings is used. Input data comprising the image data collected by the cameras and the position data sets is provided and processed. The processing comprises using the marking data to process the image data and identify appearances of the markings in the image data, generating image path data indicative of the operators' paths in the image data, and processing that path data against the position data sets to determine a correlation between the operators' paths and the position data sets. The correlation is then used to determine a registration mapping for the cameras in the region. | 2022-04-07 |
20220108461 | Multi-Modal System for Visualization and Analysis of Surgical Specimens - The present disclosure provides methods, systems, and devices for coregistering imaging data to form three-dimensional superimposed images of target such as a tumor or a surgical bed. A three-dimensional map can be generated by projecting infrared radiation at a target area, receiving reflected infrared radiation, and measuring depth of the target area. A three-dimensional white light image can be created from a captured two-dimensional white light image and the three-dimensional map. A three-dimensional fluorescence image can be created from a captured two-dimensional fluorescence image and the three-dimensional map. The three-dimensional white light image and the three-dimensional fluorescence image can be aligned using one or more fiducial markers to form a three-dimensional superimposed image. The superimposed image can be used to excise cancerous tissues, for example, breast tumors. Images can be in the form of videos. | 2022-04-07 |
20220108462 | FLUORESCENCE IMAGE REGISTRATION METHOD, GENE SEQUENCING INSTRUMENT, AND STORAGE MEDIUM - The present disclosure provides a fluorescence image registration method, the method includes: acquiring a fluorescence image of a biochip; selecting a preset local region of the fluorescence image; acquiring a position of a minimum value of a sum of brightness values of pixels in a first direction and a second direction, and obtaining pixel-level registration points; dividing the pixel-level registration points into non-defective pixels and defective pixels; if the fluorescence image meets the preset standard, correcting positions of the defective pixels according to positions of the non-defective pixels; acquiring a position of a center of gravity of image points of fluorescent molecules according to a center of gravity method; fitting straight lines in the first direction and the second direction respectively according to the position of the center of gravity; and acquiring boundary points of the fluorescence image and calculating the positions of the boundary points. | 2022-04-07 |
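One step in 20220108462 — locating pixel-level registration points at the minima of summed brightness along each of two directions — can be illustrated with plain arrays. The chip layout and image below are invented for demonstration; only the minimum-of-axis-sums idea comes from the abstract.

```python
import numpy as np

def registration_minima(image):
    """Sum pixel brightness along each axis and return the positions of
    the minima, analogous to the abstract's pixel-level registration points."""
    col_sums = image.sum(axis=0)   # brightness summed down each column
    row_sums = image.sum(axis=1)   # brightness summed across each row
    return int(np.argmin(col_sums)), int(np.argmin(row_sums))

# synthetic chip image: a bright field with one dark vertical gap and
# one dark horizontal gap between blocks of fluorescent spots
img = np.full((50, 50), 200, dtype=np.uint16)
img[:, 20] = 5   # dark column (vertical gap)
img[30, :] = 5   # dark row (horizontal gap)
x_min, y_min = registration_minima(img)
```

The dark gaps dominate the axis sums, so the minima land on the gap coordinates (20, 30) regardless of spot noise elsewhere.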
20220108463 | MULTI-SCALE RECURRENT DECODER FOR MONOCULAR DEPTH ESTIMATION - A method for using an artificial neural network associated with an agent to estimate depth, includes receiving, at the artificial neural network, an input image captured via a sensor associated with the agent. The method also includes upsampling, at each decoding layer of a plurality of decoding layers of the artificial neural network, decoded features associated with the input image to a resolution associated with a final output of the artificial neural network. The method further includes concatenating, at each decoding layer, the upsampled decoded features with features obtained at a convolution layer associated with a respective decoding layer. The method still further includes estimating, at a recurrent module of the artificial neural network, a depth of the input image based on receiving the concatenated upsampled decoded features from each decoding layer. The method also includes controlling an action of an agent based on the depth estimate. | 2022-04-07 |
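The per-layer upsample-and-concatenate flow in the 20220108463 abstract can be sketched with plain arrays. The channel counts, the number of scales, nearest-neighbour as the upsampling method, and the upsampling of the skip features are all assumptions for illustration, not the patented architecture.

```python
import numpy as np

def upsample_nearest(feat, out_hw):
    """Nearest-neighbour upsample of a (C, H, W) feature map to out_hw."""
    c, h, w = feat.shape
    oh, ow = out_hw
    ys = np.arange(oh) * h // oh   # source row for each output row
    xs = np.arange(ow) * w // ow   # source col for each output col
    return feat[:, ys][:, :, xs]

# hypothetical decoder features at three scales, plus matching skip
# features from the encoder's convolution layers (shapes are invented)
out_hw = (32, 32)
decoded = [np.ones((8, 4, 4)), np.ones((8, 8, 8)), np.ones((8, 16, 16))]
skips = [np.ones((4, 4, 4)), np.ones((4, 8, 8)), np.ones((4, 16, 16))]

# at each decoding layer: upsample the decoded features to the final
# output resolution and concatenate them with that layer's conv features
per_layer = [
    np.concatenate([upsample_nearest(d, out_hw), upsample_nearest(s, out_hw)], axis=0)
    for d, s in zip(decoded, skips)
]
# a recurrent module would consume these per-layer maps to regress depth;
# here we only stack them to show the aggregated channel dimension
stacked = np.concatenate(per_layer, axis=0)
```

Each decoding layer contributes a 12-channel map at the output resolution, and the recurrent module receives all three, i.e. 36 channels at 32 x 32.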
20220108464 | THREE-DIMENSIONAL MEASUREMENT APPARATUS, SYSTEM, AND PRODUCTION METHOD - A three-dimensional measurement apparatus includes an attachment portion for attaching the three-dimensional measurement apparatus to a robot, a flange for attaching an end effector, a sensor configured to receive light from an object, and a calculation unit configured to determine three-dimensional information about the object by performing a calculation using data obtained by the sensor. A shortest distance among distances from a center of the flange to an outer peripheral edge of the calculation unit is less than or equal to a radius of the attachment portion or the flange, as viewed from the flange. | 2022-04-07 |
20220108465 | DISTANCE TO OBSTACLE DETECTION IN AUTONOMOUS MACHINE APPLICATIONS - In various examples, a deep neural network (DNN) is trained—using image data alone—to accurately predict distances to objects, obstacles, and/or a detected free-space boundary. The DNN may be trained with ground truth data that is generated using sensor data representative of motion of an ego-vehicle and/or sensor data from any number of depth predicting sensors—such as, without limitation, RADAR sensors, LIDAR sensors, and/or SONAR sensors. The DNN may be trained using two or more loss functions each corresponding to a particular portion of the environment that depth is predicted for, such that—in deployment—more accurate depth estimates for objects, obstacles, and/or the detected free-space boundary are computed by the DNN. In some embodiments, a sampling algorithm may be used to sample depth values corresponding to an input resolution of the DNN from a predicted depth map of the DNN at an output resolution of the DNN. | 2022-04-07 |
20220108466 | SYSTEM AND METHOD FOR RECONSTRUCTION OF COMPRESSED SIGNAL DATA USING ARTIFICIAL NEURAL NETWORKING - Presented herein are methods and systems for training a model, specifically a machine learning model, for example, a Deep Neural Network (DNN) for signal reconstruction in an iterative process comprising a plurality of training iterations and use of the trained DNN thereof. Each of the iterations comprises receiving a record associating a compressed signal created according to a sensing matrix selected from a plurality of sensing matrixes with a respective signal originated from a signal source and used for compressing the at least one compressed signal according to the selected sensing matrix, feeding the record and the sensing matrix to train a model and outputting the trained model which may be used for reconstructing one or more new signals originated from the signal source. Wherein at least two of the plurality of sensing matrixes are fed during at least two separate iterations of the plurality of training iterations. | 2022-04-07 |
20220108467 | SYSTEM AND METHOD FOR DETECTING OBJECTS IN VIDEO IMAGES - A networked computer system for recognizing objects in video images is described herein. The networked computer system includes a user display device, a camera, and an object recognition system. The camera includes an imaging device having a 360° field-of-view and a global positioning system (GPS) device. The object recognition system includes a processor programmed to execute an algorithm including receiving live-stream video images from the camera, detecting an object of interest within the live-stream video images, determining pixel coordinates associated with a center of the detected object of interest, receiving the geographic location data from the camera, determining a geographic location of the object of interest based on the determined pixel coordinates and the geographic location of the camera, and displaying the live-stream video images on the user display device including a visual indicator of the object of interest and the determined geographic location of the object of interest. | 2022-04-07 |
20220108468 | METHOD AND SYSTEM FOR OBTAINING JOINT POSITIONS, AND METHOD AND SYSTEM FOR MOTION CAPTURE - The present invention provides a motion capture with a high accuracy which can replace an optical motion capture technology, without attaching optical markers and sensors to a subject. A subject with an articulated structure has a plurality of feature points in the body of the subject including a plurality of joints wherein a distance between adjacent feature points is obtained as a constant. A spatial distribution of a likelihood of a position of each feature point is obtained based on a single input image or a plurality of input images taken at the same time. One or a plurality of position candidates corresponding to each feature point are obtained based on the spatial distribution of the likelihood of the position of each feature point. Each joint angle is obtained by performing an optimization calculation based on inverse kinematics using the candidates and the articulated structure. Positions of the feature points including the joints are obtained by performing a forward kinematics calculation using the joint angles. | 2022-04-07 |
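The candidate-extraction step in 20220108468 — picking position candidates for each feature point from its spatial likelihood distribution — can be sketched as follows. Selecting the top-k likelihood peaks is an assumption for illustration; the abstract does not specify how candidates are chosen, and the downstream inverse-kinematics optimization is omitted.

```python
import numpy as np

def topk_candidates(likelihood, k=3):
    """Return the k highest-likelihood (row, col) positions as candidate
    locations for one feature point."""
    flat = np.argsort(likelihood.ravel())[::-1][:k]
    return [tuple(int(v) for v in np.unravel_index(i, likelihood.shape))
            for i in flat]

# synthetic likelihood map with a dominant peak and a weaker second peak
lik = np.zeros((16, 16))
lik[5, 7] = 0.9
lik[11, 2] = 0.4
cands = topk_candidates(lik, k=2)
```

The inverse-kinematics step would then search over these candidates per feature point, constrained by the fixed inter-point distances of the articulated structure.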
20220108469 | RETAIL INTELLIGENT MONITORING SYSTEM AND CAMERAS THEREOF - A retail intelligent monitoring system including a merchandise monitoring camera coupled to a first fixture to capture first visual data of one or more articles of merchandise on a second fixture; a merchandise scanning camera coupled to a third fixture and positioned toward a fourth fixture to capture second visual data of one or more articles of merchandise on the fourth fixture; an activity tracking camera coupled to any one of the fixtures to capture third visual data of shoppers/customers' activities and configured to capture blurred visual data of shoppers such that identities of the shoppers remain unknown; and a network device. The retail intelligent monitoring system is configured to receive and analyze the first, second, and third visual data to determine information of the one or more articles of merchandise on the second and fourth fixture; and determine an inventory management procedure based on the analyzed first, second, and third visual data. | 2022-04-07 |
20220108470 | METHOD AND SYSTEM FOR MONOCULAR DEPTH ESTIMATION OF PERSONS - Systems and methods are provided for estimating the 3D joint location of skeleton joints from an image segment of an object and 2D joint heatmaps comprising 2D locations of skeleton joints on the image segment. This includes applying the image segment and 2D joint heatmaps to a convolutional neural network containing at least one 3D convolutional layer block, wherein the 2D resolution is reduced at each 3D convolutional layer and the depth resolution is expanded to produce an estimated depth for each joint. Combining the 2D location of each kind of joint with the estimated depth of that kind of joint generates an estimated 3D joint position of the skeleton joint. | 2022-04-07 |
20220108471 | METHOD FOR COUPLING CO-ORDINATE SYSTEMS, AND COMPUTER-ASSISTED SYSTEM - A method for coupling a relative coordinate system of a relative automatic position finding system to an absolute coordinate system of an absolute localization system includes capturing optically identifiable identifiers of the localization system using a light sensor of a mobile apparatus of the relative automatic position finding system. The method further includes ascertaining a relative position for each of the captured optically identifiable identifiers in the relative coordinate system of the relative automatic position finding system, retrieving absolute positions of the optically identifiable identifiers in the absolute coordinate system, determining a position of the light sensor in the absolute coordinate system, and coupling, based on the determined position of the light sensor, the relative coordinate system to the absolute coordinate system. | 2022-04-07 |
20220108472 | OBJECT POSE ESTIMATION IN VISUAL DATA - The pose of an object may be estimated based on fiducial points identified in a visual representation of the object. Each fiducial point may correspond with a component of the object, and may be associated with a first location in an image of the object and a second location in a 3D coordinate space. A 3D skeleton of the object may be determined by connecting the locations in the 3D space, and the object's pose may be determined based on the 3D skeleton. | 2022-04-07 |
20220108473 | TRAFFIC CAMERA CALIBRATION - A system, including a processor and a memory, the memory including instructions to be executed by the processor to determine camera calibration parameters including camera focal distance, camera height and camera tilt based on a two-dimensional calibration pattern located parallel to and coincident with a ground plane determined by a roadway surface and determine object data including object location, object speed and object direction for one or more objects in a field of view of the video camera. | 2022-04-07 |
20220108474 | DYNAMIC IMAGE-BASED PARKING MANAGEMENT SYSTEMS AND METHODS OF OPERATION THEREOF - The technologies regarding dynamic image-based parking systems and methods of operation thereof are disclosed. An example method for operating a camera-based parking management system includes: activating an AI-enabled camera covering a parking area responsive to detecting a parking object; capturing a snapshot of the parking area using the AI-enabled camera; transmitting the snapshot to an edge processor to identify parking object(s) shown in the snapshot; determining that the parking object shown in the snapshot is not capable of being identified with a predefined degree of certainty based on a machine learning model; responsive to the determining: identifying first one or more characteristics about the parking object that potentially reduce identification accuracy; and identifying second one or more characteristics about the snapshot (e.g., imaging angle, orientation, resolution, network speed) that potentially reduce identification accuracy; and automatically adjusting one or more settings of the AI-enabled camera to capture a second snapshot. | 2022-04-07 |
20220108475 | CAMERA CALIBRATION USING FIDUCIAL MARKERS ON SURGICAL TOOLS - Calibration of parameters of an endoscopic or laparoscopic camera is conducted while the camera is in use capturing images of a surgical procedure at a surgical site. The surgical procedure is performed using a surgical instrument marked with a fiducial pattern. A processor receives image data from images captured by the camera and uses machine vision to detect points of the pattern, and carries out an optimization to determine the 3D pose of the surgical instrument relative to the camera and the camera parameters. | 2022-04-07 |
20220108476 | METHOD AND SYSTEM FOR EXTRINSIC CAMERA CALIBRATION - A method of determining extrinsic parameters of a camera is disclosed. The method involves obtaining a digital calibration image and generating a plurality of synthetic views of the calibration image, each synthetic view having a set of virtual camera parameters. The method also includes identifying a set of features from each of the plurality of synthetic views, obtaining a digital camera image of a representation of the digital calibration image and identifying the set of features in the digital camera image. The method includes comparing each feature in the set of features of the digital camera image with each feature in each set of features generated for the plurality of synthetic views. Extrinsic parameters of the camera can then be calculated using the virtual camera parameters of the synthetic views associated with the best matches. | 2022-04-07 |
20220108477 | AUTOMATED ICON ACCESSIBILITY ASSESSMENT ALGORITHM AND TOOL - Systems, methods, and computer-readable media for providing tools to validate color contrast are provided. To do so, three discrete color check processes are performed to ensure a user is able to identify when an icon is at risk of being inaccessible by some users. A border score considers each pixel at the edge of an icon compared to the background in which it is placed. An area score considers each discrete pixel of an icon compared to the background in which it is placed. A grid score considers a subdivision of an icon compared to the background in which it is placed. Using each of these three independent processes, a summative score is provided. The summative score categorizes the icon into a risk level. Depending on the risk level, the icon may need to be refined to ensure it becomes more accessible. | 2022-04-07 |
20220108478 | PROCESSING IMAGES USING SELF-ATTENTION BASED NEURAL NETWORKS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing images using self-attention based neural networks. One of the methods includes obtaining one or more images comprising a plurality of pixels; determining, for each image of the one or more images, a plurality of image patches of the image, wherein each image patch comprises a different subset of the pixels of the image; processing, for each image of the one or more images, the corresponding plurality of image patches to generate an input sequence comprising a respective input element at each of a plurality of input positions, wherein a plurality of the input elements correspond to respective different image patches; and processing the input sequences using a neural network to generate a network output that characterizes the one or more images, wherein the neural network comprises one or more self-attention neural network layers. | 2022-04-07 |
20220108479 | CODING OF COMPONENT OF COLOR ATTRIBUTES IN GEOMETRY-BASED POINT CLOUD COMPRESSION (G-PCC) - A device for decoding encoded point cloud data can be configured to: for a point of a point cloud, determine a first attribute value for a first color component based on a first predicted value and a first residual value; apply a scaling factor to the first residual value to determine a predicted second residual value, wherein the scaling factor has one or both of a non-integer value or an absolute value greater than one; for the point of the point cloud, receive a second residual value in the encoded point cloud data; determine a final second residual value based on the predicted second residual value and the received second residual value; and for the point of the point cloud, determine a second attribute value for a second color component based on a second predicted value and the final second residual value. | 2022-04-07 |
20220108480 | BIT PLANE DECODING METHOD AND APPARATUS - A bit plane decoding method can employ a “prediction plus reconstruction” process in bit plane decoding. By first-stage or multi-stage prediction, the position of at least some unwanted decoding may be omitted in the reconstruction process. The method can include obtaining a code block to be decoded, the code block comprising a plurality of stripes, each said stripe including a plurality of pixel positions to be decoded; performing an L-stage prediction on the plurality of pixel positions included in each said stripe to divide the plurality of pixel positions in each said stripe into a corresponding decoding channel, the decoding channel comprising an s-channel, an m-channel, or a c-channel, wherein L is an integer greater than or equal to 1; and decoding the pixel positions of each decoding channel to obtain wavelet coefficients for each pixel location. | 2022-04-07 |
20220108481 | METHOD FOR COMPRESSING POINT CLOUDS - The present invention refers to removal of redundant information from plenoptic point cloud data, reducing the number of bits needed to represent them and thus making the plenoptic point cloud data more suitable to be transferred through a medium of limited bandwidth. The proposed solution is based on predictive differential coding, using the standard color channel of a point cloud as a reference for plenoptic data, and on the application of transforms for greater data compression. | 2022-04-07 |
20220108482 | DENSE MESH COMPRESSION - A method of compressing meshes using a projection-based approach, leveraging and expanding the tools and syntax generated for projection-based volumetric content compression is described. The mesh is segmented into surface patches, with the difference that the segments follow the connectivity of the mesh. The dense mesh compression utilizes 3D surface patches to represent connected triangles on a mesh surface and groups of vertices to represent triangles not captured by surface projection. Each surface patch (or 3D patch) is projected to a 2D patch, whereby for the mesh, the triangle surface sampling is similar to a common rasterization approach. For each patch, position and connectivity of the projected vertices are kept. The sampled surface resembles a point cloud and is coded with the same approach used for point cloud compression. The list of vertices and connectivity per patch is encoded, and the data is sent with the coded point cloud data. | 2022-04-07 |
20220108483 | VIDEO BASED MESH COMPRESSION - A method of compression of 3D mesh data using projections of mesh surface data and video representation of connectivity data is described herein. The method utilizes 3D surface patches to represent a set of connected triangles on a mesh surface. The projected surface data is stored in patches (a mesh patch) that is encoded in atlas data. The connectivity of the mesh, that is, the vertices and the triangles of the surface patch, are encoded using video-based compression techniques. The data is encapsulated in a new video component named vertex video data, and the disclosed structure allows for progressive mesh coding by separating sets of vertices in layers, and creating levels of detail for the mesh connectivity. This approach extends the functionality of the V3C (volumetric video-based) standard, currently being used for coding of point cloud and multiview plus depth content. | 2022-04-07 |
20220108484 | PREDICTIVE GEOMETRY CODING IN G-PCC - An example method of encoding a point cloud includes obtaining a value of a secondary residual for geometry coding a current predictive tree node of the point cloud; and encoding the value of the secondary residual, wherein encoding the value comprises: encoding, using a first set of context-adaptive binary arithmetic coding (CABAC) contexts, prefix bins of a syntax element having a value that specifies an absolute value of the value of the secondary residual minus 2; and encoding, using a second set of CABAC contexts that is different than the first set of contexts, suffix bins of the syntax element. | 2022-04-07 |
20220108485 | CLIPPING LASER INDICES IN PREDICTIVE GEOMETRY CODING FOR POINT CLOUD COMPRESSION - A method of encoding a point cloud includes determining, by one or more processors, a quantity of lasers used to capture light detection and ranging (LIDAR) data that represents the point cloud; and encoding, by the one or more processors, a laser index for a current node of the point cloud, wherein encoding the laser index comprises: obtaining a predicted laser index value of the current node; determining a residual laser index value for the current node, wherein determining the residual laser index value comprises constraining a sum of the residual laser index value and the predicted laser index value to be less than or equal to the determined quantity of lasers minus one; and encoding, in a bitstream, one or more syntax elements that represent the residual laser index value. | 2022-04-07 |
20220108486 | SCALING OF QUANTIZATION PARAMETER VALUES IN GEOMETRY-BASED POINT CLOUD COMPRESSION (G-PCC) - A G-PCC coder is configured to receive the point cloud data, determine a final quantization parameter (QP) value for the point cloud data as a function of a node QP offset multiplied by a geometry QP multiplier, and code the point cloud data using the final QP value to create a coded point cloud. | 2022-04-07 |
20220108487 | MOTION ESTIMATION IN GEOMETRY POINT CLOUD COMPRESSION - A device for encoding point cloud data, the device comprising: a memory to store the point cloud data; and one or more processors coupled to the memory and implemented in circuitry, the one or more processors configured to identify a first set of global motion parameters from global positioning system information. The one or more processors are further configured to determine, based on the first set of global motion parameters, a second set of global motion parameters to be used for global motion estimation for a current frame and apply, based on the second set of global motion parameters, motion compensation to a reference frame to generate a global motion compensated frame for the current frame. | 2022-04-07 |
20220108488 | ANGULAR MODE AND IN-TREE QUANTIZATION IN GEOMETRY POINT CLOUD COMPRESSION - A device for decoding a bitstream that includes point cloud data can be configured to determine, based on syntax signaled in the bitstream, that in-tree quantization is enabled for a node; determine, for the node based on the syntax signaled in the bitstream, that an angular mode is activated for the node; in response to in-tree quantization being enabled for the node, determine for the node a quantized value representing a coordinate position relative to an origin position; scale the quantized value without clipping to determine a scaled value representing the coordinate position relative to the origin position; and determine a context for context decoding a plane position syntax element for the angular mode based on the scaled value representing the coordinate position relative to the origin position. | 2022-04-07 |
20220108489 | THREE-DIMENSIONAL DATA ENCODING METHOD, THREE-DIMENSIONAL DATA DECODING METHOD, THREE-DIMENSIONAL DATA ENCODING DEVICE, AND THREE-DIMENSIONAL DATA DECODING DEVICE - A three-dimensional data encoding method includes: calculating coefficient values from items of attribute information of three-dimensional points; quantizing the coefficient values to generate quantization values; and generating a bitstream including the quantization values. One or more items of attribute information are classified, for each of three-dimensional spaces, into one of groups, the three-dimensional spaces (i) being included in a plurality of three-dimensional spaces, and (ii) including, among the three-dimensional points, three-dimensional points including the one or more items of attribute information. In the quantizing, the coefficient values are quantized using a predetermined quantization parameter or one or more quantization parameters for one or more groups, the one or more groups being included in the groups and including one or more items of attribute information used to calculate the coefficient values, the one or more items of attribute information being included in the items of attribute information. | 2022-04-07 |
20220108490 | METHOD, APPARATUS, SYSTEM AND COMPUTER-READABLE RECORDING MEDIUM FOR FEATURE INFORMATION - There are provided a method, apparatus, system, and computer-readable recording medium for image compression. An encoding apparatus performs domain transformation and quantization on feature map information and image information. The encoding apparatus rearranges the result of domain transformation and quantization so as to have a form advantageous to the encoding procedure and encodes the result of rearrangement, thereby generating a bitstream. A decoding apparatus receives the bitstream, decodes the received bitstream, and performs inverse transformation, dequantization, and inverse rearrangement using information transmitted through the bitstream. The result of inverse transformation, dequantization, and inverse rearrangement is used for the machine-learning task of a neural network. | 2022-04-07 |
20220108491 | PREDICTIVE GEOMETRY CODING IN G-PCC - An example method of decoding a point cloud includes selecting, from a plurality of predefined prediction modes, a prediction mode for performing predictive geometry coding of a position of a current node of the point cloud, wherein the plurality of prediction modes includes at least: a zero prediction mode, and a delta prediction mode; responsive to selecting the zero prediction mode: determining a radius, an azimuth, and a laser index of a parent node of the current node; inferring an azimuth and a laser index of a predicted position of the current node as the azimuth and the laser index of the parent node; inferring a radius of the predicted position to be a minimum radius value, wherein the minimum radius value is different than the radius of the parent node; and determining, based on the predicted position of the current node, the position of the current node. | 2022-04-07 |
20220108492 | GPCC PLANAR MODE AND BUFFER SIMPLIFICATION - A method of encoding point cloud data comprises storing, in a buffer, a maximum coordinate of a pair of coordinates of an applicable node, wherein the applicable node is a most-recently encoded node with a same position as a current node along an applicable axis and the pair of coordinates are for axes different from the applicable axis; determining a context for a planar mode plane position of the current node, wherein determining the context for the planar mode plane position comprises determining, based on the maximum coordinate of the pair of coordinates of the applicable node, a distance value representing a distance between the current node and the applicable node; determining an increment value that indicates whether the distance value is greater than a threshold; and determining the context index based on the increment value; and entropy encoding the planar mode plane position using the determined context. | 2022-04-07 |
20220108493 | ENCODING/DECODING METHOD AND DEVICE FOR THREE-DIMENSIONAL DATA POINTS - An encoding method includes encoding first M layers of a multi-tree using a breadth-first mode, and switching to a depth-first mode to encode at least one node in the M-th layer of the multi-tree. The multi-tree is obtained by dividing a plurality of three-dimensional data points using a multi-tree division method. M is an integer larger than or equal to 2. Sub-nodes of each of the at least one node are encoded using the breadth-first mode, and the sub-nodes of one of the at least one node include all sub-nodes obtained by performing at least one multi-tree division on the one of the at least one node until a leaf sub-node is obtained. | 2022-04-07 |
20220108494 | METHOD, DEVICE, AND STORAGE MEDIUM FOR POINT CLOUD PROCESSING AND DECODING - A processing method includes encoding or decoding an N-th layer of a multi-tree using a breadth-first mode. The multi-tree is used for position division on a point cloud. The method further includes, in response to a number or a distribution of all point cloud points in a target node of the N-th layer meeting a preset condition, encoding or decoding the point cloud points in the target node using a depth-first mode to obtain a code stream of the target node. The code stream of the target node includes an identifier and indexes of nodes where the point cloud points of the target node are located at various layers of the multi-tree. The identifier indicates to switch from the breadth-first mode to the depth-first mode to encode or decode sub-nodes of the target node. N is an integer greater than or equal to 1. | 2022-04-07 |
20220108495 | SYSTEM FOR IMMERSIVE DEEP LEARNING IN A VIRTUAL REALITY ENVIRONMENT - Systems, computer program products, and methods are described herein for immersive deep learning in a virtual reality environment. The present invention is configured to electronically receive, via the extended reality platform, an image of a financial resource; electronically receive, via the extended reality platform, a first user input selecting a machine learning model type; electronically receive, via the extended reality platform, a second user input selecting one or more interaction options; initiate a machine learning model on the image; extract, using the machine learning model, one or more features associated with the image; generate, using the saliency map generator, a saliency map for the image by superimposing the one or more features on the image; and transmit control signals configured to cause the computing device associated with the user to display, via the extended reality platform, the saliency map associated with the image. | 2022-04-07 |
20220108496 | INFORMATION PROCESSING DEVICE AND COMPUTER READABLE MEDIUM - An information processing device includes: a processor configured to generate a body object in a virtual space corresponding to a body in a real space, associate an associated object with at least a part of the body object, the associated object being displayed in the virtual space in association with the body, and move, when movement of the body object in the virtual space is detected, the associated object while maintaining a relative positional relationship between the associated object and the body object. | 2022-04-07 |
20220108497 | LIGHT-RESAMPLING WITH SURFACE SIMILARITY TEST - Devices, systems, and techniques to incorporate lighting effects into computer-generated graphics. In at least one embodiment, a graphical frame depicting a virtual scene is rendered by generating a record indicative of one or more lights in the virtual scene, and using the record to render a pixel. A second record, indicative of other lights in the virtual scene, is selected to combine with the first record, based at least in part on similarity between surfaces associated with the respective records. The combined record is used to render a pixel in a second graphical frame. | 2022-04-07 |
20220108498 | GENERATING PROCEDURAL TEXTURES WITH THE AID OF PARTICLES - System and Method for generating textures on an object on the basis of the particles emitted by a particles engine, including: an access to data of a particles emitter, of particles emitted, of target object, of traces, and of graphical effects; an animation simulation module provided so as to perform a simulation of emission and of displacement for each of the particles provided; a tracer module provided for generating a trace on the surface of a target object corresponding to the displacement of a particle along said surface after an impact of the particle against the target object with the aid of the traces data and of the target object data; and a physical parameters integrator module provided for generating a new set of textures for said object taking into account the data of the object, the data of each new or modified trace, and the data of the corresponding graphical effects. | 2022-04-07 |
20220108499 | SYSTEM AND METHOD FOR COMPUTED TOMOGRAPHY - The present disclosure provides a system and method for CT image reconstruction. The method may include combining an analytic image reconstruction technique with an iterative reconstruction algorithm of CT images. The image reconstruction may be performed on or near a region of interest. | 2022-04-07 |
20220108500 | ADAPTABLE DRAWING GUIDES - Embodiments of the present invention provide systems, methods, and computer storage media directed to adaptable drawing guides. In implementations, a guide mode is identified. Generally, a guide mode indicates a manner in which to use a drawing guide to confine strokes corresponding with input paths. Upon detecting an input path, a stroke is drawn in accordance with the guide mode. For example, when an edge mode is employed, the drawn stroke is confined to align with at least one edge of the drawing guide, when an inside mode is employed, the drawn stroke is confined inside of the set of edges of the drawing guide, and when the outside mode is employed, the drawn stroke is confined outside of the set of edges of the drawing guide. | 2022-04-07 |
20220108501 | INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, RECORDING MEDIUM, METHOD OF MANUFACTURING PRODUCTS, METHOD OF ACQUIRING LEARNING DATA, DISPLAY METHOD, AND DISPLAY APPARATUS - A display apparatus includes a processing portion. The display apparatus is configured to display a physical quantity related to a state of a machine apparatus. The processing portion is configured to display an image in which a plurality of pieces of partial time-series data extracted from time-series data related to the physical quantity are arranged in a state where time information is provided in the image, the time information being related to time in which the plurality of pieces of partial time-series data has been acquired. | 2022-04-07 |
20220108502 | METHOD FOR GENERATING AN IMAGE DATA SET FOR REPRODUCTION BY MEANS OF AN INFOTAINMENT SYSTEM OF A MOTOR VEHICLE - The invention relates to a method for generating an image data set (BDS) for reproduction by means of an infotainment system of a motor vehicle. | 2022-04-07 |
20220108503 | INTEGRATED MEDICAMENT DELIVERY DEVICE FOR USE WITH CONTINUOUS ANALYTE SENSOR - An integrated system for monitoring and treating diabetes is provided, including an integrated receiver/hand-held medicament injection pen, including electronics, for use with a continuous glucose sensor. In some embodiments, the receiver is configured to receive continuous glucose sensor data, to calculate a medicament therapy (e.g., via the integrated system electronics) and to automatically set a bolus dose of the integrated hand-held medicament injection pen, whereby the user can manually inject the bolus dose of medicament into the host. In some embodiments, the integrated receiver and hand-held medicament injection pen are integrally formed, while in other embodiments they are detachably connected and communicate via mutually engaging electrical contacts and/or via wireless communication. | 2022-04-07 |
20220108504 | SYSTEMS AND METHODS FOR GENERATING FLOOD HAZARD ESTIMATION USING MACHINE LEARNING MODEL AND SATELLITE DATA - A system and method for flood hazard estimation inputs a satellite elevation map and applies a machine learning model to output a geographic map representing flood hazard areas. The machine learning model is trained using a generative adversarial network (GAN) to produce an output of a deterministic hazard mapping algorithm. A GAN objective applies a loss function, reweighted to increase the importance of high hazard areas. The method retrieves a DEM topography file representing elevation data of an identified terrain, and applies a sink-filling algorithm to detect and fill sinks in the DEM topography. The algorithm subtracts the DEM elevation data to generate a filled topography, and identifies flattest regions of the filled topography. The algorithm then generates a flood hazard map by merging the filled topography and the DEM elevation data, using a weighting function that balances the detected sinks and the flattest regions of the filled topography. | 2022-04-07 |
20220108505 | GENERATING DIGITAL IMAGE EDITING GUIDES BY DETERMINING AND FILTERING RASTER IMAGE CONTENT BOUNDARIES - The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating visual image editing guides for digital raster images by identifying and filtering edge paths. In particular, in one or more embodiments, the disclosed systems utilize denoising and adaptive thresholding with a digital image to generate a simplified, binary digital image. Further, in some embodiments, the disclosed systems utilize contour detection to identify a set of edge paths from the raster image for the simplified, binary digital image. Additionally, in one or more embodiments, the disclosed systems filter the set of edge paths based on edge length and utilize the filtered set of edge paths to generate visual image editing guides for generating modified digital images. | 2022-04-07 |
20220108506 | CONTENT-ADAPTIVE TUTORIALS FOR GRAPHICS EDITING TOOLS IN A GRAPHICS EDITING SYSTEM - Methods, systems, and computer storage media for providing tool tutorials based on tutorial information that is dynamically integrated into tool tutorial shells using graphics editing system operations in a graphics editing system. In operation, an image is received in association with a graphics editing application. Tool parameters (e.g., image-specific tool parameters) are generated based on processing the image. The tool parameters are generated for a graphics editing tool of the graphics editing application. The graphics editing tool (e.g., object removal tool or spot healing tool) can be a premium version or a simplified version of the graphics editing tool in a freemium application service. Based on the tool parameters and the image, a tool tutorial data file is generated by incorporating the tool parameters and the image into a tool tutorial shell. The tool tutorial data file can be selectively rendered in an integrated interface of the graphics editing application. | 2022-04-07 |
20220108507 | Map Data Visualizations with Multiple Superimposed Marks Layers - A method generates map visualizations with multiple map layers. A user selects a data source with geographic data. A device displays a data visualization user interface, including a schema information region with data fields, and shelf regions that define characteristics for a data visualization. The user selects a first geographic data field, and the user interface generates a map data visualization using coordinates associated with the first geographic data field. The visualization includes a first plurality of data marks in a first layer. The user selects a second geographic data field. In response, the user interface displays a new layer icon. Upon activation of the new layer icon by the second geographic data field, the user interface superimposes a second layer over the existing map data visualization to form an updated map data visualization. The second layer includes a second plurality of data marks corresponding to the second geographic data field. | 2022-04-07 |
20220108508 | DETECTING PHYSICAL BOUNDARIES - Techniques for alerting a user, who is immersed in a virtual reality environment, to physical obstacles in their physical environment are disclosed. | 2022-04-07 |
20220108509 | Depicting Humans in Text-Defined Outfits - Generating images and videos depicting a human subject wearing textually defined attire is described. An image generation system receives a two-dimensional reference image depicting a person and a textual description describing target clothing in which the person is to be depicted as wearing. To maintain a personal identity of the person, the image generation system implements a generative model, trained using both discriminator loss and perceptual quality loss, which is configured to generate images from text. In some implementations, the image generation system is configured to train the generative model to output visually realistic images depicting the human subject in the target clothing. The image generation system is further configured to apply the trained generative model to process individual frames of a reference video depicting a person and output frames depicting the person wearing textually described target clothing. | 2022-04-07 |
20220108510 | REAL-TIME GENERATION OF SPEECH ANIMATION - To realistically animate a String (such as a sentence) a hierarchical search algorithm is provided to search for stored examples (Animation Snippets) of sub-strings of the String, in decreasing order of sub-string length, and concatenate retrieved sub-strings to complete the String of speech animation. In one embodiment, real-time generation of speech animation uses model visemes to predict the animation sequences at onsets of visemes and a look-up table based (data-driven) algorithm to predict the dynamics at transitions of visemes. Specifically posed Model Visemes may be blended with speech animation generated using another method at corresponding time points in the animation when the visemes are to be expressed. An Output Weighting Function is used to map Speech input and Expression input into Muscle-Based Descriptor weightings. | 2022-04-07 |
20220108511 | Method and User Interface for Generating Tangent Vector Fields Usable for Generating Computer Generated Imagery - A representation of a surface of one or more objects positioned in a virtual space is obtained in a computer animation system. Thereafter, a guide curve specification of a guide curve in the virtual space relative to the surface is received. Thereafter, the computer animation system computes a first set of tangent vector values for differentiable locations along the guide curve and computes a second set of tangent vector values for nondifferentiable locations along the guide curve. Using the first set and second set, the computer animation system computes a third set of tangent vector values for locations on the surface other than locations along the guide curve and computes a tangent vector field over the surface from at least the first set of tangent vector values, the second set of tangent vector values, and the third set of tangent vector values. | 2022-04-07 |
20220108512 | Method for Operating a Character Rig in an Image-Generation System Using Constraints on Reference Nodes - A character rig may be representable as a data structure specifying a plurality of articulated character parts, an element tree specifying relations between character parts, and a set of constraints on the character parts. After receiving rotoscoping movement input data corresponding to attempted alignments of movements of at least some of the character parts with elements moving in a captured live action scene, rotoscoping constraints may be received. The rotoscoping constraints may include at least a first constraint on the character rig other than a second constraint specified by the data structure of the character rig. Thereafter, rig movement inputs for a second set of character parts distinct from the first set of character parts may be accepted and the character rig may be moved according to the rig movement inputs while constrained by the rotoscoping constraints. | 2022-04-07 |
20220108513 | COMPUTER ANIMATION METHODS AND SYSTEMS - According to at least one embodiment, there is provided a computer animation method comprising: causing at least one visual display to display at least one virtual three-dimensional user interface to at least one user; and receiving at least one user input signal from at least one sensor of three-dimensional movement in response to, at least, movement of the at least one sensor by the at least one user, wherein the at least one user input signal represents interaction by the at least one user with the at least one virtual three-dimensional user interface at least to define at least one animation parameter. Computer-readable media and systems are also disclosed. | 2022-04-07 |
20220108514 | ANIMATION OF AVATAR FACIAL GESTURES - Example systems, methods, and instructions to be executed by a processor for the animation of realistic facial performances of avatars are provided. Such an example system includes a memory to store a facial gesture model of a subject head derived from a photogrammetric scan of the subject head, and a video of a face of the subject head delivering a facial performance. The system further includes a processor to generate a dynamic texture map that combines the video of the face of the subject head delivering the facial performance with a static portion of the facial gesture model of the subject head, apply the dynamic texture map to the facial gesture model, and animate the facial gesture model of the subject head to emulate the facial performance. | 2022-04-07 |
20220108515 | Computer Graphics System User Interface for Obtaining Artist Inputs for Objects Specified in Frame Space and Objects Specified in Scene Space - In an image processing system, an artist user interface provides for user input of specifications for an inserted object, specified in frame space. The inserted objects can be specified in frame space but can be aligned with object points in a virtual scene space. For other frames, where the object points move in the frame space, the object movements are applied to the inserted object in the frame space. The alignment can be manual by the user or programmatically determined. | 2022-04-07 |
20220108516 | Method for Editing Computer-Generated Images to Maintain Alignment Between Objects Specified in Frame Space and Objects Specified in Scene Space - In an image processing system, an image insertion is to be included onto, or relative to, a first and second frame, each depicting images of a set of objects of a geometric model. A point association is determined for a depicted object that is depicted in both the first frame and the second frame, representing reference coordinates in a virtual scene space of a first location on the depicted object independent of at least one position change and a mapping of a first image location in the first image to where the first location appears in the first image. A corresponding location in the second image is determined based on where the first location on the depicted object appears according to the reference coordinate in the virtual scene space and a second image location on the second image where the first location appears in the second image. | 2022-04-07 |
20220108517 | METHOD AND APPARATUS FOR GENERATING THREE-DIMENSIONAL MODEL, COMPUTER DEVICE AND STORAGE MEDIUM - Provided are a method and apparatus for generating a three-dimensional model. The method includes following. A first image containing a first face is acquired. First point cloud data including contour information of the first face is determined based on the first image. First albedo information of the first face and second point cloud data including detail information of the first face are determined based on the first point cloud data and the first image. A three-dimensional model of the first face is generated based on the first albedo information and the second point cloud data. | 2022-04-07 |
20220108518 | EARLY TERMINATION IN BOTTOM-UP ACCELERATION DATA STRUCTURE REFIT - Apparatus and method for bottom-up BVH refit. For example, one embodiment of an apparatus comprises: a hierarchical acceleration data structure generator to construct an acceleration data structure comprising a plurality of hierarchically arranged nodes; traversal hardware logic to traverse one or more rays through the acceleration data structure; intersection hardware logic to determine intersections between the one or more rays and one or more primitives within the hierarchical acceleration data structure; a node unit comprising circuitry and/or logic to perform refit operations on nodes of the hierarchical acceleration data structure, the refit operations to adjust spatial dimensions of one or more of the nodes; and an early termination evaluator to determine whether to proceed with refit operations or to terminate refit operations for a current node based on refit data associated with one or more child nodes of the current node. | 2022-04-07 |
20220108519 | METHODS AND SYSTEMS FOR IMPLEMENTING SCENE DESCRIPTIONS USING DERIVED VISUAL TRACKS - The techniques described herein relate to methods, apparatus, and computer readable media configured to generate media data for an immersive media experience. A set of parameters for processing a scene description for an immersive media experience are accessed. Multimedia data for the immersive media experience is accessed, including a plurality of media tracks, each media track comprising an associated series of samples of media data for a different component of the immersive media experience, and a derived track comprising a set of derivation operations to perform to generate a series of samples of media data for the client for the immersive media experience. A derivation operation is performed to generate a portion of media data for a derived media track, including processing the plurality of media tracks to generate a first series of samples of the media data for the immersive media experience, and outputting the derived media track. | 2022-04-07 |
20220108520 | METHOD FOR GENERATING A CUSTOM HAND BRACE FOR A PATIENT - A computer-based method for generating a custom hand brace for a patient includes compiling optical data captured during a three-dimensional scan of a target hand of the patient into a three-dimensional hand model of the target hand; and receiving a diagnosis for an injury to the target hand of the patient. Based on the diagnosis, the method includes generating a custom hand brace model by extracting a first set of points from the three-dimensional hand model to generate an initial hand brace model; forming an interior surface of the initial hand brace model based on the first set of points; deforming the interior surface of the initial hand brace model into alignment with an exterior surface of the three-dimensional hand model to generate the custom hand brace model; and queuing the custom hand brace model for fabrication at an advanced manufacturing system. | 2022-04-07 |
20220108521 | SYSTEM AND METHOD FOR GENERATING LINE-OF-SIGHT INFORMATION USING IMAGERY - A system, method and computer program product are provided for generating line-of-sight information using imagery. In some aspects, the method includes receiving a raster image depicting at least one object in a region of interest, and measuring a shadow of the at least one object in the raster image. The method also includes determining an angle of the sun from the shadow of the at least one object, and creating a plurality of virtual shadows for the at least one object using the angle of the sun. The method further includes generating line-of-sight information for the at least one object based on an intersection of the plurality of virtual shadows with objects in the region of interest. | 2022-04-07 |
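The core geometry here is simple trigonometry: one measured shadow of an object of known height fixes the sun's elevation angle, which then lets the system cast a "virtual shadow" for any other object. The abstract gives no formulas, so the sketch below is illustrative only; the function names and the stdlib-only approach are assumptions.

```python
import math

def sun_elevation_deg(object_height_m, shadow_length_m):
    """Sun elevation angle recovered from one measured shadow."""
    return math.degrees(math.atan2(object_height_m, shadow_length_m))

def virtual_shadow_length(object_height_m, elevation_deg):
    """Length of the virtual shadow an object of known height would
    cast under the recovered sun elevation."""
    return object_height_m / math.tan(math.radians(elevation_deg))

# A 10 m pole casting a 10 m shadow implies a 45-degree sun elevation,
# so a 20 m building would cast a 20 m virtual shadow.
elev = sun_elevation_deg(10.0, 10.0)
length = virtual_shadow_length(20.0, elev)
```

Intersecting such virtual shadows, swept over candidate sun azimuths, with other objects in the scene is what yields the line-of-sight information the abstract describes.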
20220108522 | GENERATING THREE-DIMENSIONAL GEO-REGISTERED MAPS FROM IMAGE DATA - A plurality of images is obtained, whether as separate images or part of a video. The plurality of images is used to generate a three-dimensional (3D) model of the imagery. The 3D model is registered to a geographic coordinate system as a first registered 3D model. The first registered 3D model is merged with a second registered 3D model to generate a merged 3D model. A request including a value corresponding to a location within the geographic coordinate system that includes at least a portion of the merged 3D model is received from a client device. A message identifying at least a subset of points in the portion of the merged 3D model is sent to the client device, each point in the subset having a three-dimensional coordinate. | 2022-04-07 |
20220108523 | DISPLAY METHOD AND DISPLAY DEVICE FOR PROVIDING SURROUNDING INFORMATION BASED ON DRIVING CONDITION - A display method is a display method performed by a display device that operates in conjunction with a mobile object, and includes: determining which one of first surrounding information, which is video showing a surrounding condition of the mobile object and is generated using two-dimensional information, and second surrounding information, which is video showing the surrounding condition of the mobile object and is generated using three-dimensional data, is to be displayed, based on a driving condition of the mobile object; and displaying the one of the first surrounding information and the second surrounding information that is determined to be displayed. | 2022-04-07 |
20220108524 | BOX MODELING METHOD, APPARATUS, ROBOT PICKING SYSTEM, ELECTRONIC DEVICE AND MEDIUM - A box modeling method and apparatus, a robot picking system, an electronic device and a medium, the method comprising: acquiring first 3D point cloud information of a box(S | 2022-04-07 |
20220108525 | PATIENT-SPECIFIC CORTICAL SURFACE TESSELLATION INTO DIPOLE PATCHES | 2022-04-07 |
20220108526 | FOUR-DIMENSIONAL IMAGING SYSTEM FOR CARDIOVASCULAR DYNAMICS - A system may receive imaging data generated by an imaging device directed at a heart. The system may receive a first input operation indicative of a selected time-frame. The system may display images of the heart based on the intensity values mapped to the selected time-frame. The system may receive, based on interaction with the images, an apex coordinate and a base coordinate. The system may calculate, based on the apex coordinate and the base coordinate, a truncated ellipsoid representative of an endocardial or epicardial boundary of the heart. The system may generate a four-dimensional mesh comprising three-dimensional vertices spaced along the mesh. The system may overlay, on the displayed images, markers representative of the vertices. The system may receive a second input operation corresponding to a selected marker. The system may enhance the mesh by adjusting or interpolating vertices across multiple time-frames. | 2022-04-07 |
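The truncated ellipsoid fit to the apex and base coordinates is a standard idealization of a ventricle. The abstract does not give a parameterization, so the following is only a sketch in a canonical frame (base plane at z = 0, apex on the z-axis), assuming NumPy; the function and its parameters are hypothetical. One such 3D vertex set per time-frame would form the 4D mesh the abstract describes.

```python
import numpy as np

def truncated_ellipsoid_vertices(long_axis, radius, n_theta=5, n_phi=12):
    """3D vertices of a half (truncated) ellipsoid: base ring in the
    z = 0 plane, apex at z = long_axis, short semi-axis `radius`.

    theta sweeps from the base ring (pi/2) to the apex (0); phi sweeps
    around the long axis. The apex row repeats one point per phi value;
    a real mesher would weld those duplicates.
    """
    theta = np.linspace(np.pi / 2, 0.0, n_theta)            # base -> apex
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    T, P = np.meshgrid(theta, phi, indexing='ij')
    x = radius * np.sin(T) * np.cos(P)
    y = radius * np.sin(T) * np.sin(P)
    z = long_axis * np.cos(T)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# An 8 cm long axis with a 3 cm short semi-axis.
verts = truncated_ellipsoid_vertices(long_axis=8.0, radius=3.0)
```

A clinical system would additionally rotate and translate this canonical mesh so its axis runs from the user-picked base coordinate to the apex coordinate.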
20220108527 | METHOD FOR GENERATING HIGH-QUALITY REAL-TIME ISOSURFACE MESH - Provided is a method for generating high-quality isosurface mesh in real time, which takes the Marching Cubes algorithm as a baseline to efficiently generate a high-quality mesh of a 3D model, re-examines the case table in the MC algorithm, and puts forward the concept of equivalent edges. By combining with the remeshing technology, the MC algorithm is optimized from three aspects: deleting equivalent edges with the worst performance from the case table by using connectivity modification and vertex insertion technology; changing the geometric shape of the active edge to make it more perpendicular to the isosurface; and moving the shared cut points of cube cells. According to the present application, the mesh with a higher quality is generated at the running speed close to the standard MC algorithm, and the mesh quality is close to the post-processing remeshing algorithm with extremely high time consumption. | 2022-04-07 |
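The "shared cut points of cube cells" that the abstract proposes to move are the mesh vertices standard Marching Cubes places on active edges by linear interpolation. For context, here is that baseline interpolation (not the patent's optimization); the function name is a hypothetical illustration.

```python
def edge_cut_point(p0, p1, v0, v1, iso):
    """Linearly interpolate the cut point where the isosurface at
    level `iso` crosses a cube edge from p0 (scalar value v0) to
    p1 (value v1). Standard Marching Cubes places a mesh vertex
    here; the method's optimization then moves such shared cut
    points to improve triangle quality."""
    t = (iso - v0) / (v1 - v0)            # fraction along the edge
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Edge from (0,0,0) to (1,0,0) with field values -1 and 3; the iso
# level 0 crosses a quarter of the way along the edge.
pt = edge_cut_point((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), -1.0, 3.0, 0.0)
```

Because a cut point is shared by every triangle incident on that edge, moving it (as the method proposes) reshapes all of those triangles at once without changing mesh connectivity.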
20220108528 | INFORMATION PROCESSING METHOD AND DEVICE, POSITIONING METHOD AND DEVICE, ELECTRONIC DEVICE AND STORAGE MEDIUM - An information processing method includes: three-dimensional (3D) point information of a 3D point cloud is obtained; a two-dimensional (2D) point cloud image from projection of the 3D point cloud on a horizontal plane is generated based on the 3D point information; and projection coordinates of 3D points comprised in the 3D point cloud in a reference coordinate system of a reference plane graph are determined based on a consistency degree that the 2D point cloud image has with the reference plane graph, where the reference plane graph is used for representing a projection graph with reference coordinates that is obtained through the projection of a target object on the horizontal plane, and the 3D point cloud is used for representing 3D space information of the target object. | 2022-04-07 |
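The first step of the method, projecting the 3D point cloud onto the horizontal plane to form a 2D point cloud image, can be sketched compactly. This is an illustrative stand-in only, assuming NumPy, a z-up convention, and a simple occupancy rasterization; the function and cell size are hypothetical, and the patent's consistency matching against the reference plan is not shown.

```python
import numpy as np

def project_to_plan_image(points_3d, cell_size=0.1):
    """Project a 3D point cloud onto the horizontal (x, y) plane and
    rasterize it into a 2D occupancy image -- the '2D point cloud
    image' that is then aligned against a reference floor plan.
    Assumes z is the vertical axis."""
    pts = np.asarray(points_3d, dtype=float)
    xy = pts[:, :2]                               # drop the height axis
    origin = xy.min(axis=0)                       # image origin in metres
    ij = np.floor((xy - origin) / cell_size).astype(int)
    image = np.zeros(ij.max(axis=0) + 1, dtype=np.uint8)
    image[ij[:, 0], ij[:, 1]] = 1                 # mark occupied cells
    return image, origin

# Three points of a toy cloud; heights differ but project to one plane.
cloud = [(0.0, 0.0, 1.0), (0.42, 0.0, 2.0), (0.0, 0.27, 0.5)]
img, origin = project_to_plan_image(cloud)
```

Searching over 2D rotations and translations of `img` against the reference plan, scoring their consistency, would then yield the projection coordinates described in the abstract.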
20220108529 | DISPLAYING A VIRTUAL IMAGE OF A BUILDING INFORMATION MODEL - A headset for use in displaying a virtual image of a building information model (BIM) in relation to a site coordinate system of a construction site. The headset comprises an article of headwear having one or more position-tracking sensors mounted thereon, augmented reality glasses incorporating at least one display, a display position tracking device for tracking movement of the display relative to at least one of the user's eyes, and an electronic control system. The electronic control system is configured to convert a BIM model defined in an extrinsic, real-world coordinate system into an intrinsic coordinate system defined by a position tracking system; receive display position data from the display position tracking device and headset tracking data from a headset tracking system; render a virtual image of the BIM relative to the position and orientation of the article of headwear on the construction site and to the position of the display relative to the user's eye; and transmit the rendered virtual image to the display, where it is viewable by the user as a virtual image of the BIM. | 2022-04-07 |
20220108530 | SYSTEMS AND METHODS FOR PROVIDING AN AUDIO-GUIDED VIRTUAL REALITY TOUR - Systems and methods are provided for providing an audio-guided in-door virtual reality (VR) tour. An exemplary system may include a communication interface configured to receive input from a user and to output media contents, a memory storing computer-readable instructions, and at least one processor coupled to the communication interface and the memory. The computer-readable instructions, when executed by the processor, may cause the at least one processor to perform operations. The operations may include displaying a view of a 3D VR environment and playing an audio guide associated with the view. The operations may also include detecting, during the playing of the audio guide, a target operation input by the user to alter the view. In response to the detection of the target operation, the operations may include adjusting, based on the detected target operation, the view with respect to a fixed point position within the 3D VR environment. | 2022-04-07 |
20220108531 | REACTIVE AUGMENTED REALITY - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating composite images. One of the methods includes maintaining first data associating each location within an environment with a particular time; obtaining an image depicting the environment from a point of view of a display device; obtaining second data characterizing one or more virtual objects; and processing the obtained image and the second data to generate a composite image depicting the one or more virtual objects at respective locations in the environment from the point of view of the display device, wherein the composite image depicts each virtual object according to the particular time that the first data associates with the location of the virtual object in the environment. | 2022-04-07 |
20220108532 | METHOD AND SYSTEM FOR AUGMENTED REALITY WI-FI COVERAGE MAP - An exemplary device can generate an augmented reality display of a Wi-Fi coverage map. A mobile device can connect to one or more access points of a wireless network in a physical environment. A camera of the mobile device can be used to capture a live rendering of the physical environment. The mobile device can capture and store its current and previous positions in the physical environment. The mobile device can generate a virtual path graphic by linking the current position and the plurality of previous positions of the mobile device in the physical environment. The augmented reality display is generated by overlaying the virtual path graphic onto the live rendering of the physical environment. The augmented reality display is output to a display of the mobile device. | 2022-04-07 |
20220108533 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing device, an information processing method, and a program for enabling display of AR content that has been generated for a predetermined environment and is applied to the real environment. The information processing device according to one aspect of the present technology generates a template environment map showing the environment of a three-dimensional space that is to be a template and in which a predetermined object exists, and generates template content that is a template to be used in generating display content for displaying an object superimposed on the environment of a real space, the template content including information about the object disposed at a position in the three-dimensional space, the position having a predetermined positional relationship with the predetermined object. The present technology can be applied to a transmissive HMD, for example. | 2022-04-07 |
20220108534 | Network-Based Spatial Computing for Extended Reality (XR) Applications - Feature information is received from a client device and describes extended reality scene(s) in an extended reality (ER) environment at the client device. An ER description is formed in an ER description format corresponding to the feature information and is stored. Some of the stored ER description format is provided in a representational format upon request of the client device or other client devices viewing the one or more ER scenes and assisting the positioning of corresponding client devices in the ER environment. A client device captures environmental visual data and generates feature information, describing ER scene(s) in an extended reality environment at the client device, from the environmental visual data, and sends the generated feature information toward a server. A client device can localize itself in a | 2022-04-07 |
20220108535 | MISSION DRIVEN VIRTUAL CHARACTER FOR USER INTERACTION - An augmented reality (AR) display device can display a virtual assistant character that interacts with the user of the AR device. The virtual assistant may be represented by a robot (or other) avatar that assists the user with contextual objects and suggestions depending on what virtual content the user is interacting with. Animated images may be displayed above the robot's head to display its intents to the user. For example, the robot can run up to a menu and suggest an action and show the animated images. The robot can materialize virtual objects that appear on its hands. The user can remove such an object from the robot's hands and place it in the environment. If the user does not interact with the object, the robot can dematerialize it. The robot can rotate its head to keep looking at the user and/or an object that the user has picked up. | 2022-04-07 |
20220108536 | METHOD OF DISPLAYING AUGMENTED REALITY AND ELECTRONIC DEVICE FOR PERFORMING SAME - An electronic device for displaying augmented reality and a method of displaying augmented reality content received from a server, on an image obtained with respect to a real space including a target object are provided. The method includes obtaining location information of the electronic device and field of view information of the electronic device, transmitting, to the server, the location information of the electronic device and the field of view information of the electronic device, receiving, from the server, the augmented reality content generated by the server based on the location information of the electronic device and the field of view information of the electronic device, and displaying the received augmented reality content on the image. | 2022-04-07 |
20220108537 | AUGMENTED REALITY COLLABORATION SYSTEM - A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device; one or more displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by a second user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device. | 2022-04-07 |