27th week of 2022 patent application highlights part 47
Patent application number | Title | Published |
20220215572 | POINT CLOUD ANALYSIS DEVICE, METHOD, AND PROGRAM - Provided is a point cloud analysis device that curbs a decrease in model estimation accuracy due to a laser measurement point cloud. | 2022-07-07 |
20220215573 | CAMERA POSE INFORMATION DETECTION METHOD AND APPARATUS, AND CORRESPONDING INTELLIGENT DRIVING DEVICE - A camera pose information detection method, includes collecting a plurality of images by means of a multi-lens camera, and determining a road surface region image based on the plurality of images; and projecting points in the road surface region image onto a camera coordinate system, so as to obtain camera pose information. | 2022-07-07 |
20220215574 | MAP PROCESSING METHOD AND APPARATUS - The present disclosure provides a map processing method and apparatus. The specific implementation scheme is: determining a road to be processed in a map, where the road has a first edge line and a second edge line; performing interpolation processing on M gauge points on the first edge line to obtain a first point set of T first sampling points on the first edge line and performing interpolation processing on N gauge points on the second edge line to obtain a second point set of T second sampling points on the second edge line; determining a corresponding relationship between the T first sampling points in the first point set and the T second sampling points in the second point set; and determining a center line of the road in the map according to M, N and the corresponding relationship. | 2022-07-07 |
20220215575 | DISPLAY METHOD, DISPLAY SYSTEM AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM - A display method for displaying a virtual object, comprising: by at least one processor, recognizing characteristics of a main object from a video; by the at least one processor, recognizing a first fixed object from the video; by the at least one processor, determining a first target point in the video according to the characteristics of the main object; by the at least one processor, calculating a first position relationship between the first fixed object and the first target point; by the at least one processor, determining an anchor point in a virtual environment; and by the at least one processor, controlling a display device to display the virtual object at a second target point in the virtual environment by setting a second position relationship corresponding to the first position relationship between the anchor point and the second target point. | 2022-07-07 |
20220215576 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, an information processing device is configured to: estimate, based on target image data, target position data representing an image-capturing position at which the target image data is captured; estimate, based on the target image data, target depth data representing a distance to an object from the image-capturing position; register new image-capturing information including the target image data, the target position data, and the target depth data; display, on a display device, map data representing a map of an environment in which the image data is captured; and display at least one of image data and depth data included in designated image-capturing information that is designated from among a plurality of pieces of image-capturing information, in association with a pixel position in the map data, the pixel position corresponding to position data included in the designated image-capturing information. | 2022-07-07 |
20220215577 | COMPUTER-READABLE RECORDING MEDIUM STORING POSITION IDENTIFICATION PROGRAM, POSITION IDENTIFICATION METHOD, AND INFORMATION PROCESSING APPARATUS - A recording medium stores a program for causing a computer to execute processing including acquiring a captured image and a depth image that represents a distance from an imaging position, identifying a road region and another region in contact with the road region from the captured image, calculating a change in a depth of a first region that corresponds to the road region and a change in a depth of a second region that corresponds to the another region included in the depth image, determining whether or not the another region is a detection target based on the change in the depth of the first region and the change in the depth of the second region, and identifying a position of a subject included in the another region based on the depth of the second region and the imaging position when the another region is the detection target. | 2022-07-07 |
20220215578 | SYSTEMS AND METHODS FOR SINGLE IMAGE REGISTRATION UPDATE - A method including receiving information about a pose of each of a plurality of fiducials positioned on or within a patient; causing an imaging device to generate a single image of a portion of the patient, the single image depicting at least a portion of each of the plurality of fiducials; determining, based on the information and the single image, a pose of one or more anatomical elements represented in the single image; and comparing the determined pose of the one or more anatomical elements to a predetermined pose of the one or more anatomical elements. | 2022-07-07 |
20220215579 | OBJECT DETECTION APPARATUS, OBJECT DETECTION SYSTEM, OBJECT DETECTION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING PROGRAM - An input image acquisition unit acquires a plurality of input images in which a specific detection target is captured by a plurality of different modalities. A perturbed image acquisition unit acquires a plurality of perturbed images in which at least one of the plurality of input images is perturbed. A detection processing unit detects a detection target included in the input images using each of the plurality of perturbed images and one of the plurality of input images that has not been perturbed, and acquires, for each of the plurality of perturbed images, a detection position of the detection target and a detection confidence level as detection results. An adjustment unit calculates, based on the detection positions and the confidence levels acquired for the plurality of perturbed images, an adjusted confidence level for each of the perturbed images using integrated parameters. | 2022-07-07 |
20220215580 | UNSUPERVISED LEARNING OF OBJECT KEYPOINT LOCATIONS IN IMAGES THROUGH TEMPORAL TRANSPORT OR SPATIO-TEMPORAL TRANSPORT - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for unsupervised learning of object keypoint locations in images. In particular, a keypoint extraction machine learning model having a plurality of keypoint model parameters is trained to receive an input image and to process the input image in accordance with the keypoint model parameters to generate a plurality of keypoint locations in the input image. The machine learning model is trained using either temporal transport or spatio-temporal transport. | 2022-07-07 |
20220215581 | METHOD FOR DISPLAYING THREE-DIMENSIONAL AUGMENTED REALITY - A method of displaying 3-dimensional (3D) augmented reality includes transmitting a first image generated by photographing a target object at a first time point, and storing first view data at the first time point; receiving first relative pose data of the target object; estimating pose data of the target object, based on the first view data and the first relative pose data of the target object; generating a second image by photographing the target object at a second time point, and generating second view data at the second time point; estimating second relative pose data of the target object, based on the pose data of the target object and the second view data; rendering a 3D image of a virtual object, based on the second relative pose data of the target object; and generating an augmented image by augmenting the 3D image of the virtual object on the second image. | 2022-07-07 |
20220215582 | CONVERSION PARAMETER CALCULATION METHOD, DISPLACEMENT AMOUNT CALCULATION METHOD, CONVERSION PARAMETER CALCULATION DEVICE, AND DISPLACEMENT AMOUNT CALCULATION DEVICE - A conversion parameter calculation method includes: obtaining, from a first image capturing device, first image data obtained by the first image capturing device capturing an image of an object; obtaining, from a second image capturing device, second distance data indicating a distance to the object, and second image data obtained by the second image capturing device capturing an image of the object; obtaining displacement direction information indicating a direction of a displacement of the object in three dimensions; associating positions on the object in the first image data and the second image data; estimating a position of the first image capturing device; calculating first distance data indicating a distance from the first image capturing device to the object; and calculating a conversion parameter for converting a pixel displacement amount of the object into the actual displacement amount, using the first distance data and the displacement direction information. | 2022-07-07 |
20220215583 | IMAGE PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM - This disclosure provides an image processing method and apparatus, an electronic device, and a storage medium. The method includes: obtaining, by an electronic device, a reference surface model of a first skeleton posture of a virtual object; obtaining one or more second skeleton postures of the virtual object; generating one or more exemplary surface models of the one or more second skeleton postures based on the reference surface model and the one or more second skeleton postures; determining a posture transformation matrix between the first skeleton posture and the second skeleton posture; and converting the exemplary surface model and the reference surface model to a same coordinate space based on the posture transformation matrix, to generate the virtual object according to the reference surface model and the one or more exemplary surface models in the coordinate space. | 2022-07-07 |
20220215584 | Apparatus for Calibrating Retinal Imaging Systems and Related Methods - A model eye for calibrating a retinal imaging system is provided including a backplane having a negative radius of curvature R centered on the optical axis and a clear aperture diameter D | 2022-07-07 |
20220215585 | THREE-DIMENSIONAL MEASUREMENT DEVICE, THREE-DIMENSIONAL MEASUREMENT METHOD, AND THREE-DIMENSIONAL MEASUREMENT PROGRAM - A three-dimensional measurement device includes an image data receiving unit, a camera position selecting unit, a stereoscopic image selecting unit, and a camera position calculator. The image data receiving unit receives data of multiple photographic images. The photographic images are obtained by photographing a measurement target object and a random dot pattern from multiple surrounding viewpoints by use of a camera. The camera position selecting unit selects camera positions from among multiple positions of the camera. The stereoscopic image selecting unit selects the photographic images as stereoscopic images from among the photographic images that are taken from the camera positions selected by the camera position selecting unit. The camera position calculator calculates the camera position from which the stereoscopic images are taken. The selection of the camera positions is performed multiple times in such a manner that at least one different camera position is selected each time. | 2022-07-07 |
20220215586 | CALIBRATION OF MOBILE ELECTRONIC DEVICES CONNECTED TO HEADSETS WEARABLE BY USERS - A mobile electronic device is provided for use with a headset. A camera outputs digital pictures of a portion of the headset. A display device displays information for viewing by a user wearing the headset. A processor retrieves calibration parameters that characterize at least a pose of the camera relative to the display device, and processes a digital picture from the camera to identify a pose of an optically identifiable feature within the digital picture. A pose of the mobile electronic device is identified relative to the holder based on the identified pose of the optically identifiable feature within the digital picture and based on at least the pose of the camera relative to the display device as characterized by the calibration parameters. The processor controls where graphical objects are rendered on the display device based on the identified pose of the mobile electronic device relative to the holder. | 2022-07-07 |
20220215587 | COLOR PALETTE FOR CAPTURING PERSON'S IMAGE FOR DETERMINATION OF FACIAL SKIN COLOR, AND METHOD AND APPARATUS USING SAME - Various embodiments relate to a color palette for capturing a person's image for determination of a facial skin color, and a method and an apparatus using same. Various embodiments may provide a color palette, and a method and an apparatus using same, the color palette comprising: a central area which is provided to define a skin region in a facial image, and is empty or transparent; and a plurality of color areas which are provided to define a plurality of reference regions for use in determining a skin color of the skin region in the facial image, and arranged to surround the central area and disposed according to a rule determined on the basis of at least one color characteristic. | 2022-07-07 |
20220215588 | IMAGE SIGNAL PROCESSOR FOR PROCESSING IMAGES - Techniques are provided for using one or more machine learning systems to process input data including image data. The input data including the image data can be obtained, and at least one machine learning system can be applied to at least a portion of the image data to determine at least one color component value for one or more pixels of at least the portion of the image data. Based on application of the at least one machine learning system to at least the portion of the image data, output image data for a frame of output image data can be generated. The output image data includes at least one color component value for one or more pixels of the frame of output image data. Application of the at least one machine learning system causes the output image data to have a reduced dimensionality relative to the input data. | 2022-07-07 |
20220215589 | IMAGE PROCESSING METHOD AND IMAGE PROCESSING APPARATUS - According to one embodiment, an image processing method causes a computer to function as an image processing apparatus including an acquisition unit and a control unit. The acquisition unit acquires image data. The control unit performs a control process of detecting a character string of each color included in the image data and associating the character string with the color using the character string of the color. | 2022-07-07 |
20220215590 | MULTIMODAL SENSOR MEASUREMENT FUSION THROUGH A COMBINED GEOMETRICAL APPROACH OF TIME WARPING AND OCCLUSION SURFACE RAY PROJECTION - Systems, methods, controllers, and techniques for addressing the parallax occlusion effect caused by non-collocated sensors are disclosed. A controller is configured to fuse image data received from an imaging device and depth data received from a depth sensor to form a mesh, project a ray from the imaging device to a pixel of the image data fused with a point of the depth data forming the mesh, determine an occlusion boundary surface within the depth data, and in response to determining that the ray intersects the occlusion boundary surface, determine that the imaging device is occluded from a fused point in the mesh. | 2022-07-07 |
20220215591 | GAS SENSOR - A gas detector for revealing a target gas includes an image capturing unit with multiple optical channels, an image processing unit and a calculation unit. The image processing unit is adapted for deducing a value of a radiation transmission coefficient which relates to an analysis spectral band, and which is attributable to a quantity of the target gas present in a part of the field-of-view. Preferably multiple analysis bands are used in parallel. The calculation unit is adapted for deducing an evaluation of the quantity of the target gas based on the value of the radiation transmission coefficient which relates to each analysis band. Such a gas detector may have small dimensions, be easily transportable, including on board a drone, and can provide evaluation results for the quantity of the target gas in real-time or nearly real-time. | 2022-07-07 |
20220215592 | NEURAL IMAGE COMPRESSION WITH LATENT FEATURE-DOMAIN INTRA-PREDICTION - A method of decoding an image with latent feature-domain intra-prediction is performed by at least one processor and includes receiving a set of latent blocks and for each of the blocks in the set of latent blocks: predicting a block, based on a set of previously recovered blocks; receiving a selection signal indicating a currently recovered block, based on the selection signal performing one of (1) and (2): (1) generating a compact residual, a set of residual context parameters, a decoded residual, and generating a first decoded block; (2) generating a second decoded block, based on a compact representation block and a set of context parameters. The method further includes generating a set of recovered blocks comprising each of the currently recovered blocks; generating a recovered latent image by merging all the blocks in the set of recovered blocks; and decoding the recovered latent image, to obtain a reconstructed image. | 2022-07-07 |
20220215593 | MULTIPLE NEURAL NETWORK MODELS FOR FILTERING DURING VIDEO CODING - An example device for filtering decoded video data includes one or more processors configured to execute a neural network filtering unit to: receive data from one or more other units of the device, the data from the one or more other units of the device being different than data for a decoded picture of video data, and wherein to receive the data from the one or more other units of the device, the one or more processors are configured to execute the neural network filtering unit to receive boundary strength data from a deblocking unit of the device; determine one or more neural network models to be used to filter a portion of the decoded picture; and filter the portion of the decoded picture using the one or more neural network models and the data from the one or more other units of the device, including the boundary strength data. | 2022-07-07 |
20220215594 | AUTO-REGRESSIVE VIDEO GENERATION NEURAL NETWORKS - A method for generating a video is described. The method includes: generating an initial output video including multiple frames, each of the frames having multiple channels; identifying a partitioning of the initial output video into a set of channel slices that are indexed according to a particular slice order, each channel slice being a down sampling of a channel stack from a set of channel stacks; initializing, for each channel stack in the set of channel stacks, a set of fully-generated channel slices; repeatedly processing, using an encoder and a decoder, a current output video to generate a next fully-generated channel slice to be added to the current set of fully-generated channel slices; generating, for each channel index, a respective fully-generated channel stack using the respective fully generated channel slices; and generating a fully-generated output video using the fully-generated channel stacks. | 2022-07-07 |
20220215595 | SYSTEMS AND METHODS FOR IMAGE COMPRESSION AT MULTIPLE, DIFFERENT BITRATES - Systems and methods for predicting a target set of pixels are disclosed. In one embodiment, a method may include obtaining target content. The target content may include a target set of pixels to be predicted. The method may also include convolving the target set of pixels to generate an estimated set of pixels. The method may include matching a second set of pixels in the target content to the target set of pixels. The second set of pixels may be within a distance from the target set of pixels. The method may include refining the estimated set of pixels to generate a refined set of pixels using a second set of pixels in the target content. | 2022-07-07 |
20220215596 | MODEL-BASED PREDICTION FOR GEOMETRY POINT CLOUD COMPRESSION - Techniques are disclosed for coding point cloud data using a scene model. An example device for coding point cloud data includes a memory configured to store the point cloud data and one or more processors implemented in circuitry and communicatively coupled to the memory. The one or more processors are configured to determine or obtain a scene model corresponding with a first frame of the point cloud data, wherein the scene model represents objects within a scene, the objects corresponding with at least a portion of the first frame of the point cloud data. The one or more processors are also configured to code a current frame of the point cloud data based on the scene model. | 2022-07-07 |
20220215597 | VISUAL TIME SERIES VIEW OF A WOUND WITH IMAGE CORRECTION - Disclosed are processes including receiving at least a first and a second image data record corresponding to a first and a second point in time and including a first and a second one or more images of a wound; obtaining an image of the wound from a particular point of view corresponding to the first point in time by analyzing the first image data record; generating a simulated image of the wound from the particular point of view corresponding to the second point in time by analyzing the second image data record; and generating a visual time series view of the wound including at least the image of the wound from the particular point of view corresponding to the first point in time and the simulated image of the wound from the particular point of view corresponding to the second point in time. | 2022-07-07 |
20220215598 | INFINITELY LAYERED CAMOUFLAGE - A camouflage pattern is provided that appears to have infinite focus and depth of field even at 100 percent size for the elements in the camouflage pattern. Generally, three-dimensional (3D) models of elements to be used in the camouflage pattern are captured or generated. The models are then arranged in a scene with a background (e.g., an infinite background) via 3D graphics editing programs such as those used to render computer-generated graphics in video games and movies. A two-dimensional (2D) capture of the scene thus shows all visible surfaces of the elements in the scene in focus at all depths of field. The elements may or may not be shaded by one another from the perspective of the image capture location in the 3D environment. | 2022-07-07 |
20220215599 | SYSTEMS AND METHODS FOR MOTION ESTIMATION IN PET IMAGING USING AI IMAGE RECONSTRUCTIONS - A computer-implemented method for generating a motion corrected image is provided. The method includes receiving listmode data collected by an imaging system; producing two or more histo-image frames or two or more histo-projection frames based on the listmode data; providing the two or more histo-image frames or two or more histo-projection frames to an Artificial Intelligence (AI) system; receiving two or more AI reconstructed images from the AI system based on the two or more histo-image frames or the two or more histo-projection frames; and generating a motion estimation in reconstructed images by using a motion free AI reconstructed image frame as a reference frame. | 2022-07-07 |
20220215600 | Data-consistency for Image Reconstruction - A computer-implemented method includes, based on an input dataset defining an input image, determining a reconstructed image using a reconstruction algorithm, and executing a data-consistency operation for enforcing consistency between the input image and the reconstructed image. The data-consistency operation determines, for multiple K-space positions at which the input dataset comprises respective source data, a contribution of respective K-space values associated with the input dataset to a K-space representation of the reconstructed image. | 2022-07-07 |
20220215601 | Image Reconstruction by Modeling Image Formation as One or More Neural Networks - Systems and methods for image reconstruction based on modeling image formation as one or more neural networks. In accordance with one aspect, one or more neural networks are configured based on physics of image formation. | 2022-07-07 |
20220215602 | CONE BEAM ARTIFACT CORRECTION FOR GATED IMAGING - A system includes a reconstructor. | 2022-07-07 |
20220215603 | NAVIGATION USING POINTS ON SPLINES - A system for navigating a host vehicle includes at least one electronic horizon processor to access a map representative of at least a road segment on which the host vehicle travels or is expected to travel, wherein the map includes one or more splines representative of road features associated with the road segment, localize the host vehicle relative to a drivable path for the host vehicle represented among the one or more splines, determine a set of points associated with the one or more splines based on the localization of the host vehicle relative to the drivable path for the host vehicle, and generate a navigation information packet including information associated with the one or more splines and the determined set of points relative to the one or more splines. | 2022-07-07 |
20220215604 | Layer Mapping - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a layer mapping operation are described. A described technique includes receiving a drawing file comprising a first set of layers. A template that defines one or more protocols that control the layer data value aggregation is selected. A set of data values associated with one or more layers in the first set of layers is aggregated in response to applying at least one of the one or more protocols to the drawing file. A second set of layers is generated using the set of data values. A layer mapping output that specifies a second set of layers is generated. The layer mapping output is provided as an input to an application module of a space management program. | 2022-07-07 |
20220215605 | DETECTING REQUIREMENTS FOR A PRODUCT - Examples are disclosed herein that relate to detecting product requirements within a digitized document. One example provides a method comprising: identifying a first page as a summary page, the first page comprising a keyword that refers to a second page; and detecting in the first page a first instance of a pattern comprising a first text block adjacent to a first line. A first part name and a first requirement for a first part are extracted from the first text block. In the first page, a second instance of the pattern is detected comprising a second text block adjacent to a second line. The keyword and a second part name are extracted from the second text block. The second part name and a second requirement for a second part are extracted from the second page. The first requirement and the second requirement are output for storage in a data store. | 2022-07-07 |
20220215606 | Systems and methods of generating a design based on a design template and another design - An apparatus includes a processor configured to receive, during editing of a first design, user input indicating that a first design element has a first content role. The processor is configured to generate a content signature of the first design indicating that the first design element has the first content role, to generate a second design based on a design template, and to update the second design by applying the content signature to the second design. Generating the second design includes, based on determining that the design template includes a second design element having the first content role, adding a third design element having the first content role to the second design. Applying the content signature to the second design includes transferring content from the first design element to the third design element. The processor is configured to generate a graphical user interface including an image of the second design. | 2022-07-07 |
20220215607 | METHOD AND APPARATUS FOR DRIVING INTERACTIVE OBJECT AND DEVICES AND STORAGE MEDIUM - Methods, apparatus, devices, and computer-readable storage media for driving interactive objects are provided. In one aspect, a method includes: obtaining a first image of surroundings of a display device, the display device being configured to display an interactive object and a virtual space where the interactive object is located; obtaining a first position of a target object in the first image; with a position of the interactive object in the virtual space as a reference point, determining a mapping relationship between the first image and the virtual space; and driving the interactive object to execute an action according to the first position and the mapping relationship. | 2022-07-07 |
20220215608 | PERSONALIZED STYLIZED AVATARS - The present disclosure is related to a method to generate user representative avatars that fit within a design paradigm. The method includes receiving depth information corresponding to multiple user features of the user, determining one or more feature landmarks for the user based on the depth information, utilizing the one or more feature landmarks to classify a first user feature relative to an avatar feature category, selecting a first avatar feature from the avatar feature category based on the classification of the first user feature, combining the first avatar feature within an avatar representation to generate a user avatar, and outputting the user avatar for display. | 2022-07-07 |
20220215609 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM - A configuration that causes an agent such as a character in a virtual world or a robot in the real world to perform actions by imitating actions of a human is to be achieved. An environment map including type and layout information about objects in the real world is generated, actions of a person acting in the real world are analyzed, time/action/environment map correspondence data including the environment map and time-series data of action analysis data is generated, a learning process using the time/action/environment map correspondence data is performed, an action model having the environment map as an input value and a result of action estimation as an output value is generated, and action control data for a character in a virtual world or a robot is generated with the use of the action model. For example, an agent is made to perform an action by imitating an action of a human. | 2022-07-07 |
20220215610 | IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - The technique of this disclosure suppresses a reduction in visibility of a predetermined object in virtual viewpoint image data. An image processing apparatus includes: an image capturing information acquisition unit configured to acquire image capturing information indicating a position and orientation of each of a plurality of image capturing apparatuses; an object information acquisition unit configured to acquire object information indicating a position and orientation of an object to be captured by the image capturing apparatuses, the object having a specific viewing angle; and a determination unit configured to determine, based on the acquired image capturing information and the position and orientation of the object indicated by the acquired object information, an image to be used for generating a virtual viewpoint image according to a position and orientation of a virtual viewpoint among a plurality of images based on capturing by the image capturing apparatuses. | 2022-07-07 |
20220215611 | Graphics Texture Mapping - When performing anisotropic filtering when sampling a texture to provide an output sampled texture value for use when rendering an output in a graphics processing system, an anisotropy direction along which to take samples in the texture is determined by determining X and Y components of a vector of arbitrary length corresponding to the direction of the major axis of an assumed elliptical projection of the sampling point for which the texture is being sampled onto the surface to which the texture is being applied, and then normalising the determined X and Y vector components to provide X and Y components for a unit vector corresponding to the direction of the major axis of the elliptical footprint of the sampling point to be used as the anisotropy direction along which to take samples in the texture. | 2022-07-07 |
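The normalisation step this abstract describes — turning an arbitrary-length major-axis vector into a unit anisotropy direction — can be sketched as follows. The function name and the fallback for a degenerate (zero-length) axis are illustrative assumptions, not taken from the patent:

```python
import math

def anisotropy_direction(dx: float, dy: float) -> tuple[float, float]:
    """Normalise the (arbitrary-length) X and Y components of the major
    axis of the assumed elliptical footprint to a unit vector giving the
    anisotropy direction along which texture samples are taken."""
    length = math.hypot(dx, dy)
    if length == 0.0:
        return (1.0, 0.0)  # degenerate footprint: pick an arbitrary axis (assumption)
    return (dx / length, dy / length)
```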
20220215612 | Graphics Texture Mapping - When performing anisotropic filtering when sampling a texture in a graphics processing system, a number of positions for which to sample the texture along an anisotropy direction is determined. When the determined number of positions for which to sample the texture along the anisotropy direction is a non-integer value that exceeds a lower integer value by more than a threshold amount, samples are taken along the anisotropy direction in the texture for a number of positions corresponding to the next higher multiple of 2 to the determined non-integer number of positions to be sampled. When the determined number of positions for which to sample the texture along the anisotropy direction does not exceed the lower integer value by at least the threshold amount, samples are taken along the anisotropy direction in the texture for a number of positions corresponding to the lower integer value. | 2022-07-07 |
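The sample-count selection rule in the abstract above — round down when the computed count barely exceeds an integer, otherwise round up to the next higher multiple of 2 — can be sketched like this. The threshold value is an illustrative assumption; the patent does not state one here:

```python
import math

def sample_count(n: float, threshold: float = 0.25) -> int:
    """Choose the number of anisotropic sample positions for a (generally
    non-integer) computed count n: take the lower integer when the excess
    over it is within the threshold, otherwise the next higher multiple
    of 2 above n."""
    lower = math.floor(n)
    if n - lower > threshold:
        # smallest even integer strictly greater than n
        return 2 * math.floor(n / 2) + 2
    return lower
```

For example, a computed count of 3.1 would yield 3 samples, while 3.5 would be rounded up to 4 and 4.5 up to 6.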
20220215613 | GRAPHICS TEXTURE MAPPING - When performing anisotropic filtering when sampling a texture to provide an output sampled texture value for use when rendering an output in a graphics processing system, a number of positions for which to sample the texture along an anisotropy direction along which samples will be taken in the texture is determined by determining the square root of the coefficient F for an ellipse having the form Ax | 2022-07-07 |
20220215614 | SYSTEMS AND METHODS FOR DISTRIBUTED SCALABLE RAY PROCESSING - Ray tracing systems have computation units (“RACs”) adapted to perform ray tracing operations (e.g. intersection testing). There are multiple RACs. A centralized packet unit controls the allocation and testing of rays by the RACs. This allows the RACs to be implemented without Content Addressable Memories (CAMs), which are expensive to implement; the functionality of CAMs can still be achieved by implementing them in the centralized controller. | 2022-07-07 |
20220215615 | Intersection Testing in a Ray Tracing System Using Ray Bundle Vectors - Ray tracing systems and computer-implemented methods are described for performing intersection testing on a bundle of rays with respect to a box. Silhouette edges of the box are identified from the perspective of the bundle of rays. For each of the identified silhouette edges, components of a vector providing a bound to the bundle of rays are obtained and it is determined whether the vector passes inside or outside of the silhouette edge. Results of determining, for each of the identified silhouette edges, whether the vector passes inside or outside of the silhouette edge, are used to determine an intersection testing result for the bundle of rays with respect to the box. | 2022-07-07 |
20220215616 | FILE FORMAT FOR POINT CLOUD DATA - A method of point cloud data processing includes determining a 3D region of point cloud data and a 2D region of a point cloud track patch frame onto which one or more points of the point cloud data are projected; and reconstructing, based on patch frame data of a point cloud track included in the 2D region and video frame data of corresponding point cloud component tracks, the 3D region of the point cloud data. | 2022-07-07 |
20220215617 | VIEWPOINT IMAGE PROCESSING METHOD AND RELATED DEVICE - A viewpoint image processing method and a related device are provided, and relate to the artificial intelligence/computer vision field. The method includes: obtaining a preset quantity of first viewpoint images; obtaining a geometric feature matrix between the preset quantity of first viewpoint images; generating an adaptive convolution kernel corresponding to each pixel of the preset quantity of first viewpoint images based on the geometric feature matrix and location information of a to-be-synthesized second viewpoint image, where the location information represents a viewpoint location of the second viewpoint image; generating the preset quantity of to-be-processed virtual composite pixel matrices based on the adaptive convolution kernels and the pixels of the preset quantity of existing viewpoint images; and synthesizing the second viewpoint image by using the preset quantity of to-be-processed virtual composite pixel matrices. The method can improve efficiency and quality of synthesizing the second viewpoint image. | 2022-07-07 |
20220215618 | IMAGE PROCESSING METHOD AND APPARATUS, COMPUTER STORAGE MEDIUM, AND ELECTRONIC DEVICE - This application provides an image processing method performed by an electronic device. The method includes: acquiring a diffuse reflection map and a shadow map of a three-dimensional (3D) model of an object, wherein the 3D model is constructed from a plurality of photos of the object within a predefined lighting environment; acquiring a shadow texel in a shadow region of the diffuse reflection map according to a corresponding shadow region in the shadow map; querying an average color lookup table according to spatial coordinate information of the shadow texel for an average brightness difference corresponding to the shadow texel; and determining restoration color information according to the average brightness difference, and restoring color information of the shadow texel according to the restoration color information. In this way, shadow in the diffuse reflection map of the 3D model can be effectively removed or at least attenuated. | 2022-07-07 |
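The restoration step above — brightening each shadow texel by an average brightness difference looked up from its spatial coordinates — can be sketched as below. The data layouts (nested lists for the maps, a dict keyed by texel coordinates for the lookup table) and the 0–255 clamp are illustrative assumptions, not the patent's actual structures:

```python
def restore_shadow_texels(diffuse, shadow_mask, avg_diff_lut):
    """Restore colour in the shadow region of a diffuse map: for each
    texel flagged by the shadow mask, add the average brightness
    difference stored for its (x, y) coordinates and clamp to 8 bits."""
    restored = [row[:] for row in diffuse]
    for y, row in enumerate(shadow_mask):
        for x, in_shadow in enumerate(row):
            if in_shadow:
                restored[y][x] = min(255, diffuse[y][x] + avg_diff_lut.get((x, y), 0))
    return restored
```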
20220215619 | GEOSPATIAL MODELING SYSTEM PROVIDING 3D GEOSPATIAL MODEL UPDATE BASED UPON ITERATIVE PREDICTIVE IMAGE REGISTRATION AND RELATED METHODS - A geospatial modeling system may include a memory and a processor cooperating therewith to: (a) generate a three-dimensional (3D) geospatial model including geospatial voxels based upon a plurality of geospatial images; (b) select an isolated geospatial image from among the plurality of geospatial images; (c) determine a reference geospatial image from the 3D geospatial model using Artificial Intelligence (AI) and based upon the isolated geospatial image; (d) align the isolated geospatial image and the reference geospatial image to generate a predictively registered image; (e) update the 3D geospatial model based upon the predictively registered image; and (f) iteratively repeat (b)-(e) for successive isolated geospatial images. | 2022-07-07 |
20220215620 | GEOSPATIAL MODELING SYSTEM PROVIDING 3D GEOSPATIAL MODEL UPDATE BASED UPON PREDICTIVELY REGISTERED IMAGE AND RELATED METHODS - A geospatial modeling system may include a memory and a processor cooperating therewith to generate a three-dimensional (3D) geospatial model including geospatial voxels based upon a plurality of geospatial images, obtain a newly collected geospatial image, and determine a reference geospatial image from the 3D geospatial model using Artificial Intelligence (AI) and based upon the newly collected geospatial image. The processor may further align the newly collected geospatial image and the reference geospatial image to generate a predictively registered image, and update the 3D geospatial model based upon the predictively registered image. | 2022-07-07 |
20220215621 | THREE-DIMENSIONAL MODEL-BASED COVERAGE PATH PLANNING METHOD FOR UNMANNED AERIAL VEHICLES - A three-dimensional model-based coverage path planning method for an unmanned aerial vehicle, including determining a size of a view frustum of the unmanned aerial vehicle, and establishing a bounding box of the three-dimensional model; designing a three-dimensional grid map according to the view frustum and the bounding box, and defining an attribute of a grid, and arranging a measurement point of the unmanned aerial vehicle on a surface of the three-dimensional model; and planning an unmanned aerial vehicle measurement path on the surface of the three-dimensional model according to the arrangement of measurement points. | 2022-07-07 |
20220215622 | AUTOMATED THREE-DIMENSIONAL BUILDING MODEL ESTIMATION - Automated three-dimensional (3D) building model estimation is disclosed that predicts roof top outlines, pitches and heights based on imagery and 3D data. In an embodiment, a method comprises: obtaining an aerial image of a building based on an input address; obtaining three-dimensional (3D) data containing the building based on the input address; pre-processing the aerial image and 3D data; reconstructing a 3D building model from the pre-processed image and 3D data, the reconstructing including: predicting, using instance segmentation, a mask for each roof component of the building; predicting, using a first machine learning model with the mask as input, an outline for each roof component; predicting, using a second machine learning mode with the mask and outline as input, a pitch and height of each roof component; and rendering the 3D building model based on the predicted outline, pitch and height of each roof component. | 2022-07-07 |
20220215623 | Predictive Information for Free Space Gesture Control and Communication - Free space machine interface and control can be facilitated by predictive entities useful in interpreting a control object's position and/or motion (including objects having one or more articulating members, e.g., humans and/or animals and/or machines). Predictive entities can be driven using motion information captured using image information or the equivalents. Predictive information can be improved by applying techniques for correlating with information from observations. | 2022-07-07 |
20220215624 | VIRTUAL OBJECT POSITIONING IN AUGMENTED REALITY APPLICATIONS - Systems and methods include determination of a first component of a set of components under assembly in a physical environment, determination of a first physical position of a user with respect to the first component in the physical environment, determination of a second component of the set of components under assembly to be installed at least partially on the first component based on assembly information associated with the set of components, determination of three-dimensional surface data of the second component, determination of a physical relationship in which the second component is to be installed at least partially on the first component based on a model associated with the set of components, determination of a graphical representation of the second component based on the first physical position of the user with respect to the first component, the physical relationship, and the three-dimensional surface data of the second component, and presentation of the graphical representation to the user in a view including the first component in the physical environment, wherein the presented graphical representation appears to the user to be in the physical relationship with respect to the first component. | 2022-07-07 |
20220215625 | IMAGE-BASED METHODS FOR ESTIMATING A PATIENT-SPECIFIC REFERENCE BONE MODEL FOR A PATIENT WITH A CRANIOMAXILLOFACIAL DEFECT AND RELATED SYSTEMS - Systems and methods for estimating a patient-specific reference bone shape model for a patient with craniomaxillofacial (CMF) defects are described herein. An example method includes receiving a two-dimensional (“2D”) pre-trauma image of a subject, and generating a three-dimensional (“3D”) facial surface model for the subject from the 2D pre-trauma image. The method also includes providing a correlation model between 3D facial and bone surfaces, and estimating a reference bone model for the subject using the 3D facial surface model and the correlation model. | 2022-07-07 |
20220215626 | PATIENT SPECIFIC ELECTRODE POSITIONING - A method for determining optimal electrode number and positions for cardiac resynchronization therapy on a heart of a patient is described. The method comprises: generating a 3D mesh of at least part of the heart from a 3D model of at least part of the heart of the patient, the 3D mesh of at least a part of the heart comprising a plurality of nodes; aligning the 3D mesh of at least part of a heart to images of the heart of the patient; and placing additional nodes onto the 3D mesh corresponding to a location of at least two electrodes on the patient. The 3D mesh is used in determining the optimal electrode number and position on the heart of the patient. | 2022-07-07 |
20220215627 | METHOD FOR GENERATING A 3D PHYSICAL MODEL OF A PATIENT SPECIFIC ANATOMIC FEATURE FROM 2D MEDICAL IMAGES - There is provided a method for generating a 3D physical model of a patient specific anatomic feature from 2D medical images. The 2D medical images are uploaded by an end-user via a Web Application and sent to a server. The server processes the 2D medical images and automatically generates a 3D printable model of a patient specific anatomic feature from the 2D medical images using a segmentation technique. The 3D printable model is 3D printed as a 3D physical model such that it represents a 1:1 scale of the patient specific anatomic feature. The method includes the step of automatically identifying the patient specific anatomic feature. | 2022-07-07 |
20220215628 | METHODS AND SYSTEMS FOR PRODUCING CONTENT IN MULTIPLE REALITY ENVIRONMENTS - This disclosure contains methods and systems that allow filmmakers to port filmmaking and editing skills to produce content to be used in other environments, such as video game environments, and augmented reality, virtual reality, mixed reality, and non-linear storytelling environments. | 2022-07-07 |
20220215629 | METHOD FOR GENERATING SCANNING PATH OF MACHINING FEATURE SURFACE OF AIRCRAFT PANEL - A method for generating a scanning path of a machining feature surface of an aircraft panel, including: acquiring a main direction and a triangular mesh model of the aircraft panel; dividing the triangular mesh model into multiple regions; recognizing the machining feature surface according to the main direction; projecting the machining feature surface to a 2D coordinate system thereof; extracting a 2D scanning path for the machining feature surface; and mapping the 2D scanning path to a 3D space to generate a 3D scanning path. | 2022-07-07 |
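The 2D scanning-path extraction step above is commonly realised as a boustrophedon (zigzag) sweep over the projected feature region; a minimal sketch over a rectangular grid follows. The grid dimensions, step size and continuous-row reversal are illustrative assumptions, and the final 2D-to-3D mapping stage is omitted:

```python
def zigzag_path(width: int, height: int, step: int = 1):
    """Generate a boustrophedon 2D scanning path covering a
    width x height region: sweep each row, reversing direction on
    alternate rows so consecutive points stay adjacent."""
    path = []
    for row, y in enumerate(range(0, height, step)):
        xs = range(0, width, step)
        if row % 2:  # reverse every other row to keep the path continuous
            xs = reversed(list(xs))
        path.extend((x, y) for x in xs)
    return path
```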
20220215630 | VISUALLY REPRESENTING RELATIONSHIPS IN AN EXTENDED REALITY ENVIRONMENT - Techniques are described herein that enable a user to provide speech inputs to control an extended reality environment, where relationships between terms in a speech input are represented in three dimensions (3D) in the extended reality environment. For example, a language processing component determines a semantic meaning of the speech input, and identifies terms in the speech input based on the semantic meaning. A 3D relationship component generates a 3D representation of a relationship between the terms and provides the 3D representation to a computing device for display. A 3D representation may include a modification to an object in an extended reality environment, or a 3D representation of a concepts and sub-concepts in a mind map in an extended reality environment, for example. The 3D relationship component may generate a searchable timeline using the terms provided in the speech input and a recording of an extended reality session. | 2022-07-07 |
20220215631 | METHOD AND COMPUTER PROGRAM PRODUCT FOR PROCESSING MODEL DATA OF A SET OF GARMENTS - A method for processing model data of a set of garments includes storing first and second model data of a first and a second garment of the set, each of the model data including two-dimensional or three-dimensional geometry data defining a mesh associated with the respective garment. A limiting object of the respective garment is defined by at least a portion of the geometry data and constitutes a separation between an interior and an exterior of the respective garment. The first and the second garment constitute an inner garment and an outer garment that is worn over the inner garment. At least one opening object for the outer garment is stored, defined as a portion of the limiting object and constituting a transition for an item between the interior and the exterior of the outer garment. Intersection objects are determined for each of the garments, defining one or more intersections between the limiting objects of the garments. For each garment, portions of the limiting objects are determined as overlap section objects bounded by one or more of the intersection objects. The geometry data of the first and/or the second garment are adjusted with respect to the overlap section objects based on whether the respective overlap section object of the outer garment at least partially comprises one or more of the at least one opening objects of the outer garment. | 2022-07-07 |
20220215632 | CONTROL METHOD AND APPARATUS FOR MOVABLE PLATFORM, AND CONTROL SYSTEM THEREOF - The present disclosure relates to a method for controlling a movable platform. The method may include obtaining a target object selection operation input by a user on an interaction interface, the interaction interface displaying a three-dimensional model of an operation area, the target object selection operation configured to determine a position of a target object in the operation area; determining a target orientation of the movable platform when the target object is operated based on an orientation of the three-dimensional model displayed on the interaction interface when the target object selection operation is obtained; and determining a target position of the movable platform when the movable platform performs operation on the target object based on the position of the target object, the target orientation, and an operation distance of the movable platform. | 2022-07-07 |
20220215633 | Computing Platform for Facilitating Augmented Reality Experiences with Third Party Assets - Systems and methods for data asset acquisition and obfuscation can be helpful for retrieving augmented reality rendering data assets from third parties. The sending of a software development kit and receiving back data assets can ensure the data assets are compatible with the augmented reality rendering experience in the user interface. The data acquisition system with obfuscation can also ensure the code generated by third parties is stripped of semantics and has reduced readability. | 2022-07-07 |
20220215634 | Augmenting a Physical Object with Virtual Components - Systems and methods are presented for immersive and simultaneous animation in a mixed reality environment. Techniques disclosed represent a physical object, present at a scene, in a 3D space of a virtual environment associated with the scene. A virtual element is posed relative to the representation of the physical object in the virtual environment. The virtual element is displayed to users from a perspective of each user in the virtual environment. Responsive to an interaction of one user with the virtual element, an edit command is generated and the pose of the virtual element is adjusted in the virtual environment according to the edit command. The display of the virtual element to the users is then updated according to the adjusted pose. When simultaneous and conflicting edit commands are generated by collaborating users, policies to reconcile the conflicting edit commands are disclosed. | 2022-07-07 |
20220215635 | NOVEL SYSTEMS AND METHODS FOR COLLECTING, LOCATING AND VISUALIZING SENSOR SIGNALS IN EXTENDED REALITY - Systems and methods for rendering one or more different types of datasets is described. An exemplar method includes: (i) obtaining a first type of dataset, wherein each first data value within the first type of dataset is associated with one or more three-dimensional coordinates, which define a location or region in real space; (ii) obtaining a second type of dataset, wherein each second data value within the second type of dataset is associated with one or more of the three-dimensional coordinates; (iii) spatializing the first type of dataset to create a first type of spatialized dataset; (iv) spatializing the second type of dataset to create a second type of spatialized dataset; (v) aligning the first type of spatialized dataset with the second type of spatialized dataset to create an enhanced three-dimensional spatialized environment; and (vi) rendering the enhanced three-dimensional spatialized environment on a display component. | 2022-07-07 |
20220215636 | CREATING CLOUD-HOSTED, STREAMED AUGMENTED REALITY EXPERIENCES WITH LOW PERCEIVED LATENCY - A technology that streams graphical components and rendering instructions to a client device, for the client device to perform the final rendering and overlaying of that content onto the client's video stream based on the client's most recent tracking of the device's position and orientation. A client device sends a request for augmented reality drawing data to a network device. In response, the network device generates augmented reality drawing data, which can be augmented reality change data based on the augmented reality information and previous client render state information, and sends the augmented reality drawing data to the client device. The client device receives the augmented reality drawing data and renders a visible representation of an augmented reality scene comprising overlaying augmented reality graphics over a current video scene obtained from a camera of the client device. | 2022-07-07 |
20220215637 | ACTIVATION OF EXTENDED REALITY ACTUATORS BASED ON CONTENT ANALYSIS - In one example, a method performed by a processing system in a telecommunications network includes acquiring the media stream and identifying an anchor in a scene of the media stream. The anchor is a presence in the scene that has a physical effect on the scene. A type and a magnitude of the physical effect of the anchor on the scene is estimated. An actuator in a vicinity of the user endpoint device that is capable of producing a physical effect in the real world to match the physical effect of the anchor on the scene is identified. A signal is sent to the actuator. The signal controls the actuator to produce the physical effect in the real world when the physical effect of the anchor on the scene occurs in the media stream. | 2022-07-07 |
20220215638 | SYSTEM AND METHOD FOR HAPTIC MAPPING OF A CONFIGURABLE VIRTUAL REALITY ENVIRONMENT - A system for providing a virtual reality experience includes a plurality of wall panels at a plurality of wall panel locations defining an X by Y area within the interior of the plurality of wall panels. At least one haptic feedback device is associated with at least one of the plurality of wall panels for providing haptic feedback to a user. A transition area is associated with the X by Y area. A virtual reality system displays a first virtual reality environment to the user within at least a portion of the X by Y area. The first virtual reality environment defines a first virtual reality pathway for the user within the X by Y area to the at least one haptic feedback device. Responsive to the user entering the transition area while in the first virtual reality environment, the virtual reality system displays a second virtual reality environment, different from the first, within at least a portion of the X by Y area; the second virtual reality environment defines a second virtual pathway for the user within the X by Y area to the at least one haptic feedback device that is different from the first virtual pathway. | 2022-07-07 |
20220215639 | Data Presentation Method and Terminal Device - This application discloses a data presentation method applied to the field of autonomous driving technologies. The method includes: obtaining traveling information of an autonomous driving apparatus and/or requirement information of a user for data presentation; determining, based on the traveling information of the autonomous driving apparatus and/or the requirement information of the user for data presentation, first point cloud data related to the autonomous driving apparatus, and determining a presentation manner of the first point cloud data, where the first point cloud data is data represented in a form of a plurality of points; and presenting the first point cloud data in the presentation manner. In embodiments of this application, the point cloud data related to the autonomous driving apparatus can be adaptively presented based on the traveling information and/or the user requirement information, and not all data needs to be presented, so that data processing complexity is reduced. | 2022-07-07 |
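The adaptive selection step above — choosing which subset of the point cloud to present, and how, from traveling information and the user's requirement — can be sketched as a simple decision function. The speed threshold, radii and density labels are illustrative assumptions, not values from the application:

```python
def choose_presentation(speed_kmh: float, user_wants_detail: bool) -> dict:
    """Pick a presentation manner for point cloud data based on
    traveling information (speed) and the user's requirement, so that
    not all data needs to be presented."""
    if user_wants_detail:
        return {"radius_m": 50, "density": "full"}
    if speed_kmh > 80:
        # at highway speed, present a longer-range but sparser cloud
        return {"radius_m": 150, "density": "sparse"}
    return {"radius_m": 80, "density": "medium"}
```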
20220215640 | SYSTEMS AND METHODS FOR ARTIFICIAL INTELLIGENCE-BASED VIRTUAL AND AUGMENTED REALITY - Examples of the disclosure describe systems and methods for generating and displaying a virtual companion. In an example method, a first input from an environment of a user is received at a first time via a first sensor on a head-wearable device. An occurrence of an event in the environment is determined based on the first input. A second input from the user is received via a second sensor on the head-wearable device, and an emotional reaction of the user is identified based on the second input. An association is determined between the emotional reaction and the event. A view of the environment is presented at a second time later than the first time via a see-through display of the head-wearable device. A stimulus is presented at the second time via a virtual companion displayed via the see-through display, wherein the stimulus is determined based on the determined association between the emotional reaction and the event. | 2022-07-07 |
20220215641 | MODEL GENERATION APPARATUS, MODEL GENERATION SYSTEM, MODEL GENERATION METHOD - An object is to provide a model generation apparatus capable of generating a model for implementing a more precise simulation. Firstly, an object to be reconstructed on a 3D model is extracted from 3D image information, and an object model having a highest shape conformity degree with the object is acquired from among a plurality of object models available on the 3D model, and is associated with size information and disposed-place information of the object. Next, for each of acquired object models, the extracted object model is edited so as to conform with the size information of the object. Then, the edited object model is disposed on the 3D model so that the object model satisfies a physical constraint on the 3D model and conforms with the disposed-place information. | 2022-07-07 |
20220215642 | SYNCHRONIZED EDITING OF LOCALLY REPEATING VECTOR GEOMETRY - Embodiments are disclosed for synchronously editing locally repeating vector geometry. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a selection of a first plurality of segments of a vector-based object to be edited, generating a stencil mask of the first plurality of segments, the stencil mask representing segment placement and primitive types for each of the first plurality of segments, identifying a second plurality of segments of the vector-based object using the stencil mask and a stencil predicate, determining a transform between the first plurality of segments and the second plurality of segments, receiving an edit to the first plurality of segments, and applying the edit to the second plurality of segments using the transform. | 2022-07-07 |
20220215643 | IMAGE PROCESSING DEVICE, IMAGING DEVICE, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM - An imaging device acquires an input image using a lens unit and an imaging element and detects a subject. The imaging device calculates a reliability of detection of a subject and compares the reliability with a threshold value. When the reliability of detection of a subject is less than the threshold value, the imaging device performs a defocus calculating process and a background area determining process. The imaging device performs a low-pass filtering process on the determined background area, decreases a high-frequency component in the background area, and then detects a subject again. | 2022-07-07 |
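The retry loop in the abstract above — detect, check reliability against a threshold, and if it is too low suppress high frequencies in the estimated background before detecting again — can be sketched as pure control flow. All the callables and the threshold here are hypothetical stand-ins for the device's detection, defocus-calculation and low-pass filtering stages:

```python
def detect_with_background_suppression(image, detect, reliability_of,
                                       blur_background, find_background,
                                       threshold=0.5):
    """If the first detection's reliability falls below the threshold,
    low-pass filter the background area (estimated via a defocus
    calculation) and run detection a second time."""
    subject = detect(image)
    if reliability_of(subject) >= threshold:
        return subject
    background = find_background(image)            # defocus-based area estimate
    filtered = blur_background(image, background)  # suppress high frequencies there
    return detect(filtered)
```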
20220215644 | IMAGE PROCESSING METHOD AND COMPUTING DEVICE - In an image processing method, a detection image and a marked image are obtained. An image segmentation model is applied to segment a first segmented image from the detection image. The first segmented image is corrected according to the marked image to obtain a second segmented image. A size of the second segmented image is adjusted to obtain an adjusted segmented image. The adjusted segmented image is used as a standard segmented image of the detection image. The method improves accuracy of image segmentation and recognition. | 2022-07-07 |
20220215645 | Computer Vision Systems and Methods for Determining Roof Conditions from Imagery Using Segmentation Networks - Computer vision systems and methods for determining roof conditions from imagery using segmentation networks are provided. The system obtains at least one image from an image database having a roof structure present therein, and determines a footprint of the roof structure using a neural network. Based on segmentation processing by the neural network, the system generates a single channel image that maps each pixel in the at least one image to a binary classification indicative of whether each pixel is or is not representative of a roof structure and executes a contour extraction algorithm on the single channel image to determine the footprint of the roof structure. Then, the system determines condition features of the roof structure using the neural network, defines roof structure condition features, detects the roof structure condition features via segmentation, and generates a single channel image that maps each pixel in the obtained image to a condition label indicative of a defined roof structure condition feature. A roof structure condition feature report indicative of condition features of the roof structure and their respective contributions toward the total roof structure can be generated. | 2022-07-07 |
20220215646 | ABDOMINAL MULTI-ORGAN SEGMENTATION WITH ORGAN-ATTENTION NETWORKS - Systems, methods, and apparatus for segmenting internal structures depicted in an image. In one aspect, a method includes receiving data representing image data that depicts internal structures of a subject, providing an input data structure to a machine learning model, wherein the input data structure comprises fields structuring data that represents the received data representing the image data that depicts internal structures of the subject, wherein the machine learning model is a multi-stage deep convolutional network that has been trained to segment internal structures depicted by one or more images, receiving output data generated by the machine learning model based on the machine learning model's processing of the input data structure, and processing the output data to generate rendering data that, when rendered, a computer, causes the computer to output, for display, data that visually distinguishes between different internal structures depicted by the image data. | 2022-07-07 |
20220215647 | IMAGE PROCESSING METHOD AND APPARATUS AND STORAGE MEDIUM - An image processing method, an apparatus, and a storage medium are provided. In the method, a first image comprising a first object and a second image comprising a first garment are acquired; a first fused feature vector is obtained by inputting the first image and the second image to a first model, the first fused feature vector represents a fused feature of the first image and the second image; a second fused feature vector is acquired, the second fused feature vector represents a fused feature of a third image and a fourth image, the third image includes a second object, and the fourth image is an image extracted from the third image and comprises a second garment; and it is determined whether the first object and the second object are a same object according to a target similarity between the first fused feature vector and the second fused feature vector. | 2022-07-07 |
20220215648 | OBJECT DETECTION DEVICE, OBJECT DETECTION SYSTEM, OBJECT DETECTION METHOD, PROGRAM, AND RECORDING MEDIUM - An object position area detection unit of an object detection device detects a position area of an object included in an inputted image, on the basis of a first class definition in which a plurality of classes are defined in advance. A class identification unit identifies which of the plurality of classes the object belongs to, on the basis of a second class definition in which a plurality of classes are defined in advance. An object detection result output unit outputs an object detection result on the basis of a detection result of the object position area detection unit and an identification result of the class identification unit. The number of classes defined by the second class definition is smaller than the number of classes defined by the first class definition. The plurality of classes defined by the second class definition are formed by collecting some of a plurality of classes defined by the first class definition. | 2022-07-07 |
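The class-grouping arrangement in 20220215648 (a detailed first class definition for locating objects, and a coarser second definition, each of whose classes collects some of the first definition's classes) can be illustrated with a tiny mapping; all class names below are invented for the sketch, not taken from the patent:

```python
# Hypothetical first class definition: fine-grained classes.
FIRST_CLASSES = ["car", "truck", "bus", "pedestrian", "cyclist"]

# Hypothetical second class definition: fewer classes, each formed by
# collecting some of the classes from the first definition.
SECOND_CLASSES = {
    "vehicle": {"car", "truck", "bus"},
    "person": {"pedestrian", "cyclist"},
}

def coarse_class(fine_label: str) -> str:
    """Map a fine-grained first-definition label onto the smaller
    second definition used for identification output."""
    for coarse, members in SECOND_CLASSES.items():
        if fine_label in members:
            return coarse
    raise KeyError(fine_label)
```

The detector would localize with the fine classes and report with the coarse ones, so the number of output classes stays smaller than the number of detection classes.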
20220215649 | METHOD AND DEVICE FOR IDENTIFYING VIDEO - Disclosed are a method and device for recognizing a video. One specific embodiment of the method comprises: obtaining a video to be recognized; and inputting the video into a pre-trained local and global representation propagation (LGD) model to obtain the category of the video, wherein the LGD model learns a spatial-temporal representation of the video based on diffusion between local and global representations. | 2022-07-07 |
20220215650 | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM RECORDING MEDIUM - In order to extract a feature suitable for comparison, the information processing device according to the present invention comprises: a prediction unit which, on the basis of the positional relationship between a plurality of objects detected and tracked in an input video and on the basis of the overlap between the plurality of objects, predicts the qualities of features extracted from the objects; a selection unit which selects, from among the plurality of objects, only those objects or that object for which the qualities of features predicted by the prediction unit satisfy a prescribed condition; and a feature extraction unit which extracts features from the objects or the object selected by the selection unit. | 2022-07-07 |
20220215651 | Fiber Placement Tow End Detection Using Machine Learning - A method of inspecting a composite structure formed of plies of tows is provided. The method involves receiving an image of an upper ply overlapping lower plies, the upper ply tow ends defining a boundary between plies, and applying extracted sub-images to a trained machine learning model to detect the upper or lower ply. Probability maps are produced in which pixels of the sub-images are associated with probabilities the pixels belong to an object class for the upper or lower ply. The method may also involve transforming the probability maps into reconstructed sub-images, stitching together a composite image, and applying the composite image to a feature detector to detect locations of tow ends of the upper ply. The method may also involve comparing the locations to as-designed locations of the tow ends, inspecting the composite structure, and indicating a result of the comparison. | 2022-07-07 |
20220215652 | METHOD AND SYSTEM FOR GENERATING IMAGE ADVERSARIAL EXAMPLES BASED ON AN ACOUSTIC WAVE - The disclosure discloses a method and a system for generating image adversarial examples based on an acoustic wave. The method includes: acquiring an image containing a target object or a target scene; generating simulated image examples for the acquired image, wherein the simulated image examples have adversarial effects on a deep learning algorithm in a target machine vision system; optimizing the generated simulated image examples to obtain an optimal adversarial example and corresponding adversarial parameters; and injecting the adversarial parameters into an inertial sensor of the target machine vision system in a manner of an acoustic wave, such that the adversarial parameters are used as sensor readings that will cause an image stabilization module in the target machine vision system to operate to generate particular blurry patterns in a generated real-world image so as to generate an image adversarial example in a physical world. | 2022-07-07 |
20220215653 | TRAINING DATA GENERATION APPARATUS - A selecting unit selects first to third moving image data from a plurality of frame images composing moving image data. A first generating unit generates first training data that is labeled data relating to a specific recognition target from the first moving image data. A first learning unit learns a first model recognizing the specific recognition target by using the first training data. A second generating unit generates second training data from the second moving image data by using the first model. A second learning unit learns a second model by using the second training data. A third generating unit generates third training data from the third moving image data by using the second model. | 2022-07-07 |
20220215654 | FULLY ATTENTIONAL COMPUTER VISION - A system implemented as computer programs on one or more computers in one or more locations that implements a computer vision model is described. The computer vision model includes a positional local self-attention layer that is configured to receive an input feature map and to generate an output feature map. For each input element in the input feature map, the positional local self-attention layer generates a respective output element for the output feature map by: generating a memory block comprising neighboring input elements around the input element; generating a query vector using the input element and a query weight matrix; performing, for each neighboring element in the memory block, positional local self-attention operations to generate a temporary output element; and generating the respective output element by summing the temporary output elements of the neighboring elements in the memory block. | 2022-07-07 |
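The per-element computation described in 20220215654 can be sketched in a toy one-dimensional form. The memory block, query weight matrix, and summation over temporary outputs come from the abstract; the key/value matrices, relative positional embeddings, and softmax weighting are assumptions of this sketch:

```python
import numpy as np

def local_self_attention_1d(x, Wq, Wk, Wv, rel_pos, radius=1):
    """Toy 1-D positional local self-attention.

    x        : (n, d) input feature map
    Wq/Wk/Wv : (d, d) query/key/value weight matrices (Wk, Wv assumed)
    rel_pos  : (2*radius+1, d) relative positional embeddings (assumed)
    """
    n, d = x.shape
    out = np.zeros_like(x)
    for i in range(n):
        # Memory block: neighboring input elements around element i.
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        block = x[lo:hi]
        q = x[i] @ Wq                                   # query vector
        k = block @ Wk + rel_pos[lo - i + radius: hi - i + radius]
        v = block @ Wv
        logits = k @ q / np.sqrt(d)
        w = np.exp(logits - logits.max())
        w /= w.sum()
        # Sum the (weighted) temporary outputs over the memory block.
        out[i] = w @ v
    return out
```

With identity weights and zero positional embeddings, each output element is a convex combination of its neighborhood, which makes the layer easy to sanity-check.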
20220215655 | CONVOLUTION CALCULATION METHOD AND RELATED DEVICE - An image analysis method and a related device are provided. The method includes: obtaining an input matrix of a network layer A, the input matrix being obtained based on a target type image; obtaining a target convolution kernel and a target convolution step length corresponding to the network layer A, different network layers corresponding to different convolution step lengths; performing convolution calculation on the input matrix and the target convolution kernel according to the target convolution step length to obtain an output matrix of the network layer A, the output matrix representing a plurality of features included in the target type image; determining a target preset operation corresponding to the target type image according to a pre-stored mapping relationship between image types and preset operations; and performing the target preset operation according to the plurality of features included in the target type image. | 2022-07-07 |
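The core calculation in 20220215655, a convolution whose step length (stride) differs per network layer, reduces to a sliding window over the input matrix; a minimal single-channel sketch:

```python
import numpy as np

def conv2d(inp, kernel, stride):
    """Minimal 2-D convolution (cross-correlation, as in most deep
    learning frameworks) with a configurable step length (stride)."""
    h, w = inp.shape
    kh, kw = kernel.shape
    oh = (h - kh) // stride + 1
    ow = (w - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = inp[i * stride:i * stride + kh,
                        j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out
```

A larger stride shrinks the output matrix, which is why the choice of step length per layer changes the feature map each layer produces.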
20220215656 | METHOD, APPARATUS, DEVICE FOR IMAGE PROCESSING, AND STORAGE MEDIUM - In embodiments of the disclosure, provided is a method for image processing, including: acquiring a first target result from a classification result set, wherein the classification result set is obtained by a first neural network through processing a to-be-processed image, and includes classification results each corresponding to a respective one of a plurality of land cover classes; adjusting the first target result to obtain a second target result; and obtaining an image recognition result according to the second target result and the classification results in the classification result set except the first target result. | 2022-07-07 |
20220215657 | Hybrid Drone Enabled Communications System for Underwater Platforms - A method, apparatus, and system for facilitating communications with an underwater platform. A communications system comprises an unmanned aerial vehicle, a radio frequency communications system, a laser communications system, and a controller. The unmanned aerial vehicle comprises a first section and a second section. The first section is moveably connected to the second section. The radio frequency communications system is connected to the first section of the unmanned aerial vehicle. The radio frequency communications system comprises a first parabolic antenna. The laser communications system is connected to the second section of the unmanned aerial vehicle. The laser communications system comprises a second parabolic antenna. The controller is configured to control the laser communications system to transmit incoming information in a transmit laser beam to the underwater platform submerged in a body of water. The incoming information is from a receive radio frequency signal received by the radio frequency communications system. | 2022-07-07 |
20220215658 | SYSTEMS AND METHODS FOR DETECTING ROAD MARKINGS FROM A LASER INTENSITY IMAGE - Embodiments of the disclosure provide systems and methods for detecting road markings from a laser intensity image. An exemplary method may include receiving, by a communication interface, the laser intensity image acquired by a sensor. The method may also include segmenting the laser intensity image into a plurality of road segments, and dividing a road segment into a plurality of sub-images. The method may further include generating a road marking image corresponding to each of the sub-images based on a semantic segmentation method using a learning model and generating an overall road marking image for the road segment by piecing together the road marking images corresponding to the sub-images of the road segment. | 2022-07-07 |
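The divide-and-stitch pipeline in 20220215658 (segment the laser intensity image into road segments, divide each into sub-images, run a learned segmenter per sub-image, then piece the results together) can be sketched with a placeholder model; `predict` below stands in for the learning model and is an assumption of the sketch:

```python
import numpy as np

def segment_by_tiles(road_segment, tile, predict):
    """Divide a road-segment image into sub-images, run a per-tile
    segmentation function, and piece the per-tile road-marking masks
    back together into one overall mask."""
    h, w = road_segment.shape
    out = np.zeros((h, w), dtype=int)
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            sub = road_segment[r:r + tile, c:c + tile]
            out[r:r + tile, c:c + tile] = predict(sub)  # per-sub-image mask
    return out
```

Tiling keeps each model input small and uniform while the stitched output covers the whole road segment.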
20220215659 | IMPUTATION OF REMOTE SENSING TIME SERIES FOR LOW-LATENCY AGRICULTURAL APPLICATIONS - Imputation of remote sensing time series for low-latency agricultural applications is provided. In various embodiments, a first time series of raster data is read. The first time series spans a geographic region and has a first resolution and a first frequency. A second time series of raster data is read. The second time series spans the geographic region and has a second resolution and a second frequency. The second resolution is lower than the first resolution. The second frequency is higher than the first frequency. A mean time series is determined from the first time series of raster data. A predicted time series of values for a location within the geographic region is determined at the first resolution by determining a first time series of values for the location from the first time series of raster data, determining a second time series of values of the location from the second time series of raster data, and determining the predicted time series by multiple linear regression with the first time series dependent on the mean time series and the second time series. | 2022-07-07 |
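The final regression step in 20220215659 fits the sparse high-resolution observations against the mean series and the frequent low-resolution series, then evaluates the fit everywhere to impute a dense series. A sketch under the assumption that missing fine observations are marked with a boolean mask (the patent's exact handling of gaps is not stated in the abstract):

```python
import numpy as np

def impute_high_res(mean_ts, coarse_ts, fine_ts, fine_mask):
    """Multiple linear regression of the high-resolution (fine) series
    on the mean series and the low-resolution (coarse) series, fit only
    where fine observations exist, then predicted at every time step."""
    # Design matrix: intercept, mean time series, coarse time series.
    X = np.column_stack([np.ones_like(mean_ts), mean_ts, coarse_ts])
    beta, *_ = np.linalg.lstsq(X[fine_mask], fine_ts[fine_mask], rcond=None)
    return X @ beta  # predicted fine-resolution series, dense in time
```

Because the coarse series is observed at high frequency, the prediction inherits its temporal density while keeping the fine series' spatial resolution.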
20220215660 | SYSTEMS, METHODS, AND MEDIA FOR ACTION RECOGNITION AND CLASSIFICATION VIA ARTIFICIAL REALITY SYSTEMS - In particular embodiments, a computing system may determine a user intent to perform a task in a physical environment surrounding the user. The system may send a query based on the user intent to a mapping server that stores a three-dimensional (3D) occupancy map containing spatial and semantic information of physical items in the physical environment. The mapping server may be configured to identify a subset of the physical items that are relevant to the user intent. The system may receive, from the mapping server, a response to the query comprising a portion of the 3D occupancy map containing the subset of the physical items specific to the user intent. The system may capture a plurality of video frames of the physical environment. The system may process the plurality of video frames and the portion of the 3D occupancy map to provide one or more action labels associated with the task. | 2022-07-07 |
20220215661 | SYSTEM AND METHOD FOR PROVIDING UNSUPERVISED DOMAIN ADAPTATION FOR SPATIO-TEMPORAL ACTION LOCALIZATION - A system and method for providing unsupervised domain adaptation for spatio-temporal action localization that includes receiving video data associated with a source domain and a target domain that are associated with a surrounding environment of a vehicle. The system and method also include analyzing the video data associated with the source domain and the target domain and determining a key frame of the source domain and a key frame of the target domain. The system and method additionally include completing an action localization model to model a temporal context of actions occurring within the key frame of the source domain and the key frame of the target domain and completing an action adaptation model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses to complete spatio-temporal action localization of individuals and actions. | 2022-07-07 |
20220215662 | VIDEO SEMANTIC SEGMENTATION METHOD BASED ON ACTIVE LEARNING - The present invention belongs to the technical field of computer vision, and provides a video semantic segmentation method based on active learning, comprising an image semantic segmentation module, a data selection module based on active learning, and a label propagation module. The image semantic segmentation module is responsible for segmenting image results and extracting the high-level features required by the data selection module; the data selection module selects a data subset with rich information at the image level, and selects pixel blocks to be labeled at the pixel level; and the label propagation module realizes migration from image to video tasks and quickly completes the segmentation of a video to obtain weakly-supervised data. The present invention can rapidly generate weakly-supervised data sets, reduce the cost of producing the data, and optimize the performance of a semantic segmentation network. | 2022-07-07 |
20220215663 | DATA FUSION ON TARGET TAXONOMIES - A method includes receiving a directive from a user to find an object in a geographical area, wherein the object is identified with an input label selected from a set of labels, obtaining sensor data in response to the directive for a real world physical object in the geographical area using one or more sensors, processing the sensor data with a plurality of automatic target recognition (ATR) algorithms to assign a respective ATR label from the set of labels and a respective confidence level to the real world physical object, and receiving modeled relationships within the set of labels using a probabilistic model based on a priori knowledge encoded in a set of model parameters. The method includes inferring an updated confidence level that the real world physical object actually corresponds to the input label based on the ATR labels and confidences and based on the probabilistic model. | 2022-07-07 |
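The inference step in 20220215663 (updating confidence that an object matches the user's input label, given several ATR labels and an a-priori probabilistic model of label relationships) resembles a Bayesian fusion. The abstract does not specify the model, so the naive-Bayes form below, with a confusion table standing in for the modeled relationships, is a simplified assumption:

```python
def posterior_over_labels(prior, confusion, observations):
    """Naive-Bayes fusion of ATR outputs into an updated confidence
    over the label set. confusion[true_label][observed_label] encodes
    the modeled relationships (a priori knowledge); observations are
    the labels assigned by the ATR algorithms."""
    post = dict(prior)
    for obs in observations:
        for label in post:
            post[label] *= confusion[label][obs]
    z = sum(post.values())
    return {label: p / z for label, p in post.items()}
```

Repeated consistent observations sharpen the posterior toward one label, which is the "updated confidence level" the abstract describes.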
20220215664 | Timeline-Video Relationship Processing for Alert Events - A method at a server system includes: receiving a video stream from a remote video camera, wherein the video stream comprises a plurality of video frames; selecting a plurality of non-contiguous frames from the video stream, the plurality of non-contiguous frames being associated with a predetermined time interval; encoding the plurality of non-contiguous frames as a compressed video segment associated with the time interval; receiving a request from an application running on a client device to review video from the remote video camera for the time interval; and in response to the request, transmitting the video segment to the client device for viewing in the application. | 2022-07-07 |
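The selection step in 20220215664 (picking non-contiguous frames associated with a time interval before encoding them as a compressed segment) can be sketched as evenly spaced sampling; the parameter names and the even-spacing policy are assumptions of the sketch:

```python
def select_preview_frames(frame_times, interval_start, interval_end, n=10):
    """Pick up to n non-contiguous frame indices whose timestamps fall
    in [interval_start, interval_end), evenly spaced across the
    interval, for encoding as a compressed preview segment."""
    in_interval = [i for i, t in enumerate(frame_times)
                   if interval_start <= t < interval_end]
    if len(in_interval) <= n:
        return in_interval
    step = len(in_interval) / n
    return [in_interval[int(k * step)] for k in range(n)]
```

The client then requests the pre-encoded segment for an interval instead of the full stream, trading temporal detail for fast review.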
20220215665 | PERSONAL PROTECTIVE EQUIPMENT MANAGEMENT SYSTEM USING OPTICAL PATTERNS FOR EQUIPMENT AND SAFETY MONITORING - In general, techniques are described for a personal protective equipment (PPE) management system (PPEMS) that uses images of optical patterns embodied on articles of personal protective equipment (PPEs) to identify safety conditions that correspond to usage of the PPEs. In one example, an article of personal protective equipment (PPE) includes a first optical pattern embodied on a surface of the article of PPE; a second optical pattern embodied on the surface of the article of PPE, wherein a spatial relation between the first optical pattern and the second optical pattern is indicative of an operational status of the article of PPE. | 2022-07-07 |
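In 20220215665 the spatial relation between two optical patterns on the PPE article indicates its operational status; a minimal sketch, assuming a distance-with-tolerance check (the patent's actual status logic is not given in the abstract):

```python
import math

def ppe_status(pattern_a, pattern_b, nominal_distance, tolerance):
    """Infer an operational status from the spatial relation between
    two detected optical-pattern centers: flag the article when the
    measured center-to-center distance deviates from the nominal
    distance by more than the tolerance."""
    d = math.dist(pattern_a, pattern_b)
    return "ok" if abs(d - nominal_distance) <= tolerance else "check-equipment"
```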
20220215666 | DISPLAY CONTROL DEVICE, DISPLAY SYSTEM, AND DISPLAY CONTROL METHOD - This display control device includes: a first mobile object information acquiring unit that acquires first mobile object information indicating the position, moving speed, and direction of movement of a first mobile object moving in a facility; a second mobile object information acquiring unit that acquires second mobile object information indicating the position, moving speed, and direction of movement of a second mobile object moving in the facility; an image acquisition unit that acquires, on the basis of the first mobile object information acquired by the first mobile object information acquiring unit and the second mobile object information acquired by the second mobile object information acquiring unit, image information indicating a display image to be displayed in a space in the facility by a display output device installed in the facility; and an image output unit that outputs the image information acquired by the image acquisition unit. | 2022-07-07 |
20220215667 | METHOD AND APPARATUS FOR MONITORING VEHICLE, CLOUD CONTROL PLATFORM AND SYSTEM FOR VEHICLE-ROAD COLLABORATION - A method and apparatus for monitoring a vehicle, a cloud control platform, and a system for vehicle-road collaboration are provided. The method includes: acquiring real-time driving data of each vehicle in a preset vehicle set; matching, in response to receiving event information of an event occurring on a driving road of a vehicle in the preset vehicle set, the event information with the real-time driving data of each vehicle in the preset vehicle set to determine a target vehicle involved in the event; and acquiring video information of the target vehicle during occurrence of the event based on the event information. | 2022-07-07 |
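The matching step in 20220215667 (pairing event information against each vehicle's real-time driving data to find the target vehicle) can be sketched as a road-plus-time-window filter; the record fields below are assumptions, not fields named in the patent:

```python
def find_target_vehicles(event, fleet_data):
    """Match an event's road and time window against the real-time
    driving data of each vehicle in the preset vehicle set, returning
    the ids of vehicles involved in the event."""
    return [v["id"] for v in fleet_data
            if v["road"] == event["road"]
            and event["start"] <= v["timestamp"] <= event["end"]]
```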
20220215668 | METHOD FOR GENERATING AN IMAGE OF VEHICLE SURROUNDINGS, AND APPARATUS FOR GENERATING AN IMAGE OF VEHICLE SURROUNDINGS - A method is disclosed for generating an image of vehicle surroundings, including: providing multiple vehicle cameras which are arranged in particular on a vehicle bodywork of a vehicle (S | 2022-07-07 |
20220215669 | POSITION MEASURING METHOD, DRIVING CONTROL METHOD, DRIVING CONTROL SYSTEM, AND MARKER - A position measurement method includes a step of acquiring an image of the surroundings at a self-position, a step of detecting an area in which a circular shape appears in the image, and a step of measuring the self-position based on the aspect ratio of the area. | 2022-07-07 |
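The geometric principle behind 20220215669 is that a circular marker viewed obliquely projects to an ellipse, and under orthographic projection the minor/major axis ratio approximates the cosine of the viewing angle. A simplified sketch of that relation (the patent's full position model is certainly richer than this one-angle recovery):

```python
import math

def tilt_from_aspect_ratio(minor_axis, major_axis):
    """Recover the viewing (tilt) angle, in degrees, from the aspect
    ratio of the elliptical area a circular marker projects to."""
    ratio = minor_axis / major_axis
    return math.degrees(math.acos(ratio))
```

A head-on view (ratio 1) gives 0 degrees; the flatter the ellipse, the more oblique the measured viewing direction.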
20220215670 | VEHICULAR TRAILER ANGLE DETECTION SYSTEM FOR FIFTH-WHEEL TRAILERS - A vehicular trailer assist system includes a rearward viewing camera disposed at a vehicle that views a trailer hitched at a fifth wheel hitch at a bed of the vehicle. With the trailer hitched to the fifth wheel hitch at the bed of the vehicle, the vehicular trailer assist system, via processing fisheye-view frames of image data captured by the rearward viewing camera, transforms the fisheye-view frames into bird's-eye view frames of image data. The vehicular trailer assist system determines a region of interest (ROI) in a transformed bird's-eye view frame of image data that includes a region where the fifth wheel hitch is present. The vehicular trailer assist system, via a Hough transform that transforms the determined ROI from a Cartesian coordinate system to a polar coordinate system, determines a trailer angle of the trailer relative to the vehicle. | 2022-07-07 |
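The Cartesian-to-polar step in 20220215670 is the classic Hough transform: each edge point votes for all lines (theta, rho) passing through it, and the strongest accumulator cell gives the dominant line orientation, from which a trailer angle can be read. A tiny sketch on bare point sets (a real system would vote from image gradients within the ROI):

```python
import numpy as np

def dominant_line_angle(points, n_theta=180, rho_res=1.0):
    """Tiny Hough transform: map Cartesian edge points into (theta,
    rho) polar space, vote in an accumulator, and return the angle in
    degrees (direction of the strongest line's normal)."""
    pts = np.asarray(points, dtype=float)
    thetas = np.deg2rad(np.arange(n_theta))
    rho_max = np.abs(pts).sum(axis=1).max() + 1.0
    n_rho = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in pts:
        # rho = x*cos(theta) + y*sin(theta) for every candidate theta
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rhos + rho_max) / rho_res).astype(int)
        acc[np.arange(n_theta), bins] += 1
    t, _ = np.unravel_index(np.argmax(acc), acc.shape)
    return float(t)
```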
20220215671 | VEHICULAR CONTROL SYSTEM - A vehicular control system includes a central control module, a plurality of vehicular cameras disposed at a vehicle and viewing exterior of the vehicle, and a plurality of radar sensors disposed at the vehicle and sensing exterior of the vehicle. The central control module receives vehicle data relating to operation of the vehicle. The central control module is operable to process (i) vehicle data, (ii) image data and (iii) radar data. The central control module at least in part controls at least one driver assistance system of the vehicle responsive to (i) processing at the central control module of vehicle data, (ii) processing at the central control module of image data captured by at least a forward-viewing camera of the plurality of vehicular cameras and (iii) processing at the central control module of radar data captured by at least a front radar sensor of the plurality of radar sensors. | 2022-07-07 |