46th week of 2021 patent application highlights part 54
Patent application number | Title | Published
20210358144DETERMINING 3-D FACIAL INFORMATION OF A PATIENT FROM A 2-D FRONTAL IMAGE OF THE PATIENT - A method of identifying a particular mask for a patient for use in delivering a flow of breathing gas to the patient is carried out by first receiving a 2-D frontal image of the patient. Next, 3-D facial information of the patient is determined from the 2-D frontal image. At least some of the 3-D facial information is compared with dimensional information of a plurality of candidate masks. Finally, the particular mask for the patient is determined from a result of the comparison of the at least some of the 3-D facial information and the dimensional information of the plurality of candidate masks.2021-11-18
20210358145OBJECT DETECTION BASED ON THREE-DIMENSIONAL DISTANCE MEASUREMENT SENSOR POINT CLOUD DATA - Distance measurements are received from one or more distance measurement sensors, which may be coupled to a vehicle. A three-dimensional (3D) point cloud is generated based on the distance measurements. In some cases, 3D point clouds corresponding to distance measurements from different distance measurement sensors may be combined into one 3D point cloud. A voxelized model is generated based on the 3D point cloud. An object may be detected within the voxelized model, and in some cases may be classified by object type. If the distance measurement sensors are coupled to a vehicle, the vehicle may avoid the detected object.2021-11-18
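The voxelization step described in 20210358145 is a standard point-cloud operation. Below is a minimal numpy sketch of that step only (the detection and classification stages are omitted); the function name, voxel size, and array layout are my own choices, not taken from the application.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.2):
    """Convert an (N, 3) array of 3D points into a sparse voxel occupancy set.

    points     -- distance-measurement returns expressed as x, y, z coordinates
    voxel_size -- edge length of a cubic voxel, in the same units as the points
    """
    # Shift so all indices are non-negative, then quantize to voxel indices.
    origin = points.min(axis=0)
    indices = np.floor((points - origin) / voxel_size).astype(np.int64)
    # A set of occupied voxel indices is a compact voxelized model.
    occupied = set(map(tuple, indices))
    return occupied, origin

if __name__ == "__main__":
    pts = np.random.rand(1000, 3) * 10.0   # stand-in for merged sensor returns
    occupied, origin = voxelize(pts, voxel_size=0.5)
    print(f"{len(occupied)} occupied voxels")
```

Storing only the occupied voxel indices keeps the model sparse, which is typically what a downstream detector or classifier consumes.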
20210358146METHOD AND APPARATUS WITH MOVING OBJECT DETECTION - A processor-implemented method of detecting a moving object includes: estimating a depth image of a current frame; determining an occlusion image of the current frame by calculating a depth difference value between the estimated depth image of the current frame and an estimated depth image of a previous frame; determining an occlusion accumulation image of the current frame by adding a depth difference value of the occlusion image of the current frame to a depth difference accumulation value of an occlusion accumulation image of the previous frame; and outputting an area of a moving object based on the occlusion accumulation image.2021-11-18
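The occlusion-accumulation update in 20210358146 can be expressed compactly. The sketch below assumes the depth images are numpy arrays in metres; the clipping step and the motion threshold are my own simplifications, not values from the application.

```python
import numpy as np

def update_occlusion_accumulation(depth_curr, depth_prev, accum_prev,
                                  motion_threshold=0.05):
    """One step of the occlusion-accumulation scheme sketched in the abstract.

    depth_curr, depth_prev -- estimated depth images (H, W), in metres
    accum_prev             -- accumulated depth-difference image from the previous frame
    motion_threshold       -- my own choice; marks pixels whose accumulated
                              difference is large enough to call "moving"
    """
    # Occlusion image: where the scene got closer, something moved in front.
    occlusion = np.clip(depth_prev - depth_curr, 0.0, None)
    # Accumulate the per-pixel difference over time.
    accum_curr = accum_prev + occlusion
    # Moving-object mask derived from the accumulated image.
    moving_mask = accum_curr > motion_threshold
    return accum_curr, moving_mask
```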
20210358147THREE-DIMENSIONAL MEASUREMENT APPARATUS, THREE-DIMENSIONAL MEASUREMENT METHOD AND NON-TRANSITORY COMPUTER READABLE MEDIUM - The three-dimensional measurement apparatus includes a light projecting unit that projects, onto a target, a pattern in which data is encoded, an image capturing unit that captures an image of the target onto which the pattern is projected, and a calculation unit that calculates positions of a three-dimensional point group based on positions of the feature points and the decoded data, in which the pattern includes unit patterns that each express at least two bits and are used in order to calculate the positions of the three-dimensional point group, the unit patterns each include a first region and a second region that has an area that is larger than an area of the first region, and an area ratio between the first region and the second region is at least 0.3 and not more than 0.9.2021-11-18
20210358148METHOD FOR CORRECTED DEPTH MEASUREMENT WITH A TIME-OF-FLIGHT CAMERA USING AMPLITUDE-MODULATED CONTINUOUS LIGHT - A method for corrected depth measurement with a time-of-flight camera using amplitude-modulated continuous light. In order to enable an accurate and efficient depth measurement with a time-of-flight camera, the method includes, for each of a plurality of pixels of a sensor array of the camera: acquiring with the camera a raw depth value r2021-11-18
20210358149ANTI-SPOOFING 3D FACE RECONSTRUCTION USING INFRARED STRUCTURE LIGHT - Aspects of the present disclosure describe systems, methods, and structures for anti-spoofing 3D face reconstruction using infrared structured light that advantageously reconstruct 3D face structures for facial recognition and detect face surface material(s) such that human skin may be effectively distinguished from artifacts, thereby providing additional security for facial recognition including immunity from 2D/3D print attacks including face masks and special make-ups.2021-11-18
20210358150THREE-DIMENSIONAL LOCALIZATION METHOD, SYSTEM AND COMPUTER-READABLE STORAGE MEDIUM - Systems and methods are described for three-dimensional localization using light-depth images. For example, some of the methods include accessing a light-depth image, wherein the light-depth image includes a non-visible light depth channel representing distances of objects in a scene viewed from an image capture device, and the light-depth image includes one or more visible light channels that are temporally and spatially synchronized with the depth channel; determining a set of features of the scene in a space based on the light-depth image; accessing a map data structure that includes features based on light data and position data for the objects in the space; accessing matching data derived by matching the set of features of the scene to features of the map data structure; determining a location of the image capture device relative to objects in the space based on the matching data.2021-11-18
20210358151METHOD FOR GENERATING SIMULATED POINT CLOUD DATA, DEVICE, AND STORAGE MEDIUM - A method for generating simulated point cloud data, a device, and a storage medium includes: acquiring at least one frame of point cloud data collected by a road collecting device in an actual environment without a dynamic obstacle as static scene point cloud data; setting at least one dynamic obstacle in a coordinate system matching the static scene point cloud data; simulating, in the coordinate system, a plurality of simulated scanning lights emitted by a virtual scanner located at an origin of the coordinate system; updating the static scene point cloud data according to intersections of the plurality of simulated scanning lights and the at least one dynamic obstacle to obtain the simulated point cloud data comprising point cloud data of the dynamic obstacle; and at least one of adding a set noise to the simulated point cloud data, and deleting point cloud data corresponding to the dynamic obstacle according to a set ratio.2021-11-18
20210358152MONITORING DISTANCES BETWEEN PEOPLE - Systems, methods, and computer readable media that store instructions for face-based distance measurements related to compliance with pandemic avoidance instructions.2021-11-18
20210358153DETECTION METHODS, DETECTION APPARATUSES, ELECTRONIC DEVICES AND STORAGE MEDIA - Example detecting methods and apparatus are described. One example method includes: acquiring a two-dimensional image; and constructing, for each of one or more objects under detection in the two-dimensional image, a structured polygon corresponding to the object under detection based on the acquired two-dimensional image, wherein for each object under detection, a structured polygon corresponding to the object represents projection of a three-dimensional bounding box corresponding to the object in the two-dimensional image; for each object under detection, calculating depth information of vertices in the structured polygon based on height information of the object and height information of vertical sides of the structured polygon corresponding to the object; and determining three-dimensional spatial information of the object under detection based on the depth information of the vertices in the structured polygon and two-dimensional coordinate information of the vertices of the structured polygon in the two-dimensional image.2021-11-18
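The depth calculation in 20210358153 (object height combined with the pixel height of a vertical side of the structured polygon) corresponds, under a pinhole camera model, to the classical relation Z = f * H / h. The abstract does not state the formula; the function below is a hedged illustration of that relation with hypothetical parameter names.

```python
def vertex_depth(focal_length_px: float, object_height_m: float,
                 projected_side_height_px: float) -> float:
    """Pinhole-camera depth estimate for one vertical side of a structured polygon.

    A vertical edge of real height H metres that projects to h pixels with focal
    length f pixels lies at depth Z = f * H / h.
    """
    return focal_length_px * object_height_m / projected_side_height_px

# e.g. a 1.5 m tall edge projecting to 120 px with f = 1200 px sits about 15 m away
print(vertex_depth(1200.0, 1.5, 120.0))
```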
20210358154Systems and Methods for Depth Estimation Using Generative Models - Systems and methods for depth estimation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for estimating depth from images. The method includes steps for receiving a plurality of source images captured from a plurality of different viewpoints using a processing system configured by an image processing application, generating a target image from a target viewpoint that is different to the viewpoints of the plurality of source images based upon a set of generative model parameters using the processing system configured by the image processing application, and identifying depth information of at least one output image based on the predicted target image using the processing system configured by the image processing application.2021-11-18
20210358155SYSTEMS AND METHODS FOR TEMPORALLY CONSISTENT DEPTH MAP GENERATION - Systems and methods are provided for performing temporally consistent depth map generation by implementing acts of obtaining a first stereo pair of images of a scene associated with a first timepoint and a first pose, generating a first depth map of the scene based on the first stereo pair of images, obtaining a second stereo pair of images of the scene associated with a second timepoint and a second pose, generating a reprojected first depth map by reprojecting the first depth map to align the first depth map with the second stereo pair of images, and generating a second depth map that corresponds to the second stereo pair of images using the reprojected first depth map.2021-11-18
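Application 20210358155 hinges on reprojecting the first depth map into the second pose. One conventional way to do that (back-project, transform, re-project, keep the nearest depth per pixel) is sketched below with numpy; the intrinsics matrix K and the relative transform are assumed inputs, and the application may perform the reprojection differently.

```python
import numpy as np

def reproject_depth(depth_prev, K, T_prev_to_curr):
    """Warp a depth map from the first pose into the second pose (one common approach).

    depth_prev      -- (H, W) depth map at the first timepoint, in metres
    K               -- (3, 3) camera intrinsics
    T_prev_to_curr  -- (4, 4) rigid transform from the first camera frame to the second
    """
    H, W = depth_prev.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth_prev.reshape(-1)
    keep = z > 0                                   # ignore pixels with no depth
    u, v, z = u.reshape(-1)[keep], v.reshape(-1)[keep], z[keep]
    # Back-project every valid pixel to a 3D point in the first camera frame.
    pts_prev = np.linalg.inv(K) @ np.stack([u * z, v * z, z], axis=0)
    pts_prev_h = np.vstack([pts_prev, np.ones((1, pts_prev.shape[1]))])
    # Move the points into the second camera frame and project them again.
    pts_curr = (T_prev_to_curr @ pts_prev_h)[:3]
    front = pts_curr[2] > 0
    proj = K @ pts_curr[:, front]
    u2 = np.round(proj[0] / proj[2]).astype(int)
    v2 = np.round(proj[1] / proj[2]).astype(int)
    inside = (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H)
    # Scatter depths into the new view, keeping the nearest surface per pixel.
    out = np.full((H, W), np.inf)
    np.minimum.at(out, (v2[inside], u2[inside]), pts_curr[2, front][inside])
    out[np.isinf(out)] = 0.0
    return out
```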
20210358156SYSTEMS AND METHODS FOR LOW COMPUTE DEPTH MAP GENERATION - Systems and methods are provided for performing low compute depth map generation by implementing acts of obtaining a stereo pair of images of a scene, downsampling the stereo pair of images, generating a depth map by stereo matching the downsampled stereo pair of images, and generating an upsampled depth map based on the depth map using an edge-preserving filter for obtaining at least some data of at least one image of the stereo pair of images.2021-11-18
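The pipeline in 20210358156 (downsample, stereo match, edge-preserving upsample) maps onto common OpenCV building blocks. The sketch below uses StereoSGBM and the ximgproc guided filter as stand-ins for whatever matcher and filter the application actually uses; it requires opencv-contrib-python, and all parameter values are illustrative.

```python
import cv2
import numpy as np

def low_compute_depth(left_bgr, right_bgr, scale=0.5):
    """Downsample -> stereo match -> edge-preserving upsample (rough sketch).

    left_bgr, right_bgr -- rectified stereo pair (H, W, 3), uint8
    scale               -- downsampling factor that keeps the matching cheap
    """
    # 1. Downsample the stereo pair.
    small_l = cv2.resize(left_bgr, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    small_r = cv2.resize(right_bgr, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

    # 2. Stereo matching on the small images (disparity as a proxy for depth).
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disp_small = matcher.compute(cv2.cvtColor(small_l, cv2.COLOR_BGR2GRAY),
                                 cv2.cvtColor(small_r, cv2.COLOR_BGR2GRAY))
    disp_small = disp_small.astype(np.float32) / 16.0 / scale  # back to full-res units

    # 3. Upsample to full resolution, guided by the full-resolution image
    #    so that depth edges follow intensity edges.
    h, w = left_bgr.shape[:2]
    disp_up = cv2.resize(disp_small, (w, h), interpolation=cv2.INTER_LINEAR)
    disp_filtered = cv2.ximgproc.guidedFilter(left_bgr, disp_up, 8, 1e-2)
    return disp_filtered
```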
20210358157THREE-DIMENSIONAL MEASUREMENT SYSTEM AND THREE-DIMENSIONAL MEASUREMENT METHOD - A three-dimensional measurement system capable of realizing high-speed processing while increasing measurement resolution is provided. The system includes: an image capture unit including first and second image capture units that are spaced apart; a first calculation unit that calculates a parallax at first feature points in the images using distance information of a three-dimensional measurement method other than a stereo camera method or information for calculating a distance, using at least one of the first and second image capture units; and a second calculation unit that calculates a parallax at second feature points based on a corresponding point for the second feature point by using the stereo camera method using the first and second image capture units, and specifies a three-dimensional shape based on the parallax at the first and second feature points. The second calculation unit sets a search area based on the parallax at the first feature points.2021-11-18
20210358158SYSTEM AND METHOD FOR DEPTH MAP RECOVERY - A method for reconstructing a downsampled depth map includes receiving, at an electronic device, image data to be presented on a display of the electronic device at a first resolution, wherein the image data includes a color image and the downsampled depth map associated with the color image. The method further includes generating a high resolution depth map by calculating, for each point making up the first resolution, a depth value based on a normalized pose difference across a neighborhood of points for the point, a normalized color texture difference across the neighborhood of points for the point, and a normalized spatial difference across the neighborhood of points. Still further, the method includes outputting, on the display, a reprojected image at the first resolution based on the color image and the high resolution depth map. The downsampled depth map is at a resolution less than the first resolution.2021-11-18
20210358159IMAGE PROCESSING APPARATUS, IMAGING APPARATUS, MOBILE BODY, AND IMAGE PROCESSING METHOD - An image processing apparatus 2021-11-18
20210358160METHOD AND SYSTEM FOR DETERMINING PLANT LEAF SURFACE ROUGHNESS - Provided is a method and system for determining plant leaf surface roughness. The method includes: acquiring a plurality of continuously captured zoomed-in leaf images by using a zoom microscope image capture system; determining a feature match set according to the zoomed-in leaf images; removing de-noised images of which the number of feature matches in feature match set is less than a second set threshold, to obtain n screened images; combining the n screened images to obtain a combined grayscale image; and determining plant leaf surface roughness according to the combined grayscale image. In the present disclosure, first, a plurality of zoomed-in leaf images are directly acquired by the zoom microscope image capture system quickly and accurately; the zoomed-in leaf images are then screened and combined to form a combined grayscale image; finally, three-dimensional roughness of a plant leaf surface is determined quickly and accurately according to the combined grayscale image.2021-11-18
20210358161STATIONARY AERIAL PLATFORM SCANNING SYSTEM UTILIZING MEASURED DISTANCES FOR THE CONSTRUCTION, DISPLAYING TERRAIN FEATURES AND LOCATION OF OBJECTS - A system and method for rendering terrain features and distance details on a display is disclosed. This includes a computer-implemented distance measuring apparatus integrated in an aerial stationary platform used for terrain scanning and a computer to control the operation and to display the organized and calculated data. The collected scanned data and the determining of terrain coordinates provides details that can be utilized during the broadcasting of a golf sporting event to highlight aspects of the contest. The rendering and the details can be superimposed on the camera video during the broadcast or be utilized by the event sport broadcasters as additional points that need to be highlighted during a player putting process.2021-11-18
20210358162IMAGE PROCESSING METHOD AND IMAGE CAPTURING APPARATUS - An image processing method includes acquiring a plurality of image data items by continuously capturing images of an object by an image capturing apparatus (S2021-11-18
20210358163LOCALIZATION OF ELEMENTS IN THE SPACE - A method for localizing, in a space containing at least one determined object, an object element associated with a particular 2D representation element in a determined 2D image of the space, may have: deriving a range or interval of candidate spatial positions for the imaged object element on the basis of predefined positional relationships; restricting the range or interval of candidate spatial positions to at least one restricted range or interval of admissible candidate spatial positions, wherein restricting includes at least one of: limiting the range or interval of candidate spatial positions using at least one inclusive volume surrounding at least one determined object; and limiting the range or interval of candidate spatial positions using at least one exclusive volume surrounding non-admissible candidate spatial positions; and retrieving, among the admissible candidate spatial positions of the restricted range or interval, a most appropriate candidate spatial position on the basis of similarity metrics.2021-11-18
20210358164CONTENT-AWARE STYLE ENCODING USING NEURAL NETWORKS - Apparatuses, systems, and techniques to facilitate application of a style, for which one or more neural networks have not been trained by a training framework, from one image to content of another image. In at least one embodiment, a styled output image is generated by one or more neural networks based on a style contained in a style image and content of a content image where said one or more neural networks have not been trained by a training framework on said style.2021-11-18
20210358165SYSTEM, METHOD, AND TARGET FOR WAFER ALIGNMENT - A wafer alignment system includes an imaging sub-system, a controller, and a stage. The controller receives image data for reference point targets and determines a center location for each of the reference point targets. The center location determination includes identifying sub-patterns within a respective reference point target and identifying multiple center location candidates for the respective reference point target. The step of identifying the multiple center location candidates for the respective reference point target includes: applying a model to each identified sub-pattern of the respective reference point target, wherein the model generates a hotspot for each sub-pattern that identifies a center location candidate for the respective reference point target. The controller is further configured to determine a center location for the respective reference point target based on the multiple center location candidates and determine an orientation of the wafer based on the center location determination for the reference point targets.2021-11-18
20210358166METHODS, APPARATUSES, SYSTEMS, AND STORAGE MEDIA FOR LOADING VISUAL LOCALIZATION MAPS - The present disclosure provides a method, device and system for loading a visual localization map, a storage medium, and a visual localization method. The loading method may include: localizing a current pose; directly predicting, based on the current pose, a set of group numbers to be loaded for the visual localization map, wherein each group number in the set of group numbers to be loaded corresponds to a sub-map file of the visual localization map, wherein the visual localization map includes a master map file and a plurality of sub-map files, wherein the plurality of sub-map files respectively store map data of corresponding groups obtained by grouping the visual localization map based on key frames, and wherein key frame index information for indexing the plurality of sub-map files is stored in the master map file; and loading corresponding sub-map files based on the group numbers in the set of group numbers to be loaded. In the above solution, by predicting the sub-map files to be used and loading the same in advance, the wait time for loading of the sub-map files to be used is eliminated, thereby ensuring the instantaneity of visual localization.2021-11-18
20210358167ASSESSING VISIBILITY OF A TARGET OBJECT WITH AUTONOMOUS VEHICLE FLEET - A system uses a fleet of AVs to assess visibility of target objects. Each AV has a camera for capturing images of target objects. AVs provide the captured images, or visibility data derived from the captured images, to a remote system, which aggregates visibility data describing images captured across the fleet of AVs. The AVs also provide condition data describing conditions under which the images were captured, and the remote system aggregates the condition data. The remote system processes the aggregated visibility data and condition data to determine conditions under which a target object does not meet a visibility threshold.2021-11-18
20210358168GENERATING A MEDICAL RESULT IMAGE - A method is for generating a medical result image using a current image, a target image and a reference image. All images depict at least partially the same body region of a patient. In an embodiment, the method includes defining at least one image segment within the target image; registering the reference image with the target image by establishing a registration matrix for each image segment within the target image, the respective registration matrix being specific for the respective image segment; detecting a position of a surgical instrument in the current image, the position corresponding to an image segment of the target image; and generating the medical result image by fusing the current image and the reference image using the registration matrix assigned to the image segment according to the position of the surgical instrument within the current image.2021-11-18
20210358169METHOD, APPARATUS, ELECTRONIC DEVICE AND COMPUTER READABLE MEDIUM FOR CALIBRATING EXTERNAL PARAMETER OF CAMERA - A method and an apparatus for calibrating an external parameter of a camera are provided. The method may include: acquiring a time-synchronized data set of three-dimensional point clouds and two-dimensional image of a calibration reference object, the two-dimensional image being acquired by a camera with a to-be-calibrated external parameter; establishing a transformation relationship between a point cloud coordinate system and an image coordinate system, the transformation relationship including a transformation parameter; back-projecting the data set of the three-dimensional point clouds onto a plane where the two-dimensional image is located through the transformation relationship to obtain a set of projection points of the three-dimensional point clouds; adjusting the transformation parameter to map the set of the projection points onto the two-dimensional image; and obtaining an external parameter of the camera based on the adjusted transformation parameter and the data set of the three-dimensional point clouds.2021-11-18
20210358170DETERMINING CAMERA PARAMETERS FROM A SINGLE DIGITAL IMAGE - The present disclosure relates to systems, non-transitory computer-readable media, and methods for utilizing a critical edge detection neural network and a geometric model to determine camera parameters from a single digital image. In particular, in one or more embodiments, the disclosed systems can train and utilize a critical edge detection neural network to generate a vanishing edge map indicating vanishing lines from the digital image. The system can then utilize the vanishing edge map to more accurately and efficiently determine camera parameters by applying a geometric model to the vanishing edge map. Further, the system can generate ground truth vanishing line data from a set of training digital images for training the critical edge detection neural network.2021-11-18
20210358171EXTRACTING COLOR FROM ITEM IMAGES - A system including one or more processors and one or more non-transitory computer-readable media storing computing instructions configured to run on the one or more processors and perform: obtaining an image of an item; removing background pixels from the image by removing white pixels from the image up to a first threshold; determining an item outline of the item in the image, wherein the item outline comprises aliased pixels along a periphery of the item in the image; removing grey pixels from the item outline in the image up to a second threshold to create a first updated image; removing shadows from the first updated image to create a second updated image based on a saliency map and a third threshold for shadow-like grey pixels; mapping each pixel in the second updated image to a respective mapped color in a predetermined color palette; and determining one or more dominant colors of the respective mapped colors based on one or more highest respective percentages of the respective mapped colors. Other embodiments are disclosed.2021-11-18
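Application 20210358171 chains several thresholded cleanup steps before palette mapping. The condensed numpy sketch below keeps only the core of that flow: drop near-white background pixels, snap the remainder to a small palette, and report the most frequent palette colors. The palette, the threshold, and the omission of the shadow and anti-alias handling from the claim are all my own simplifications.

```python
import numpy as np

# A stand-in palette; the application refers to a "predetermined color palette"
# without listing it, so these entries are illustrative only.
PALETTE = np.array([[0, 0, 0], [255, 255, 255], [255, 0, 0], [0, 128, 0],
                    [0, 0, 255], [255, 255, 0], [128, 128, 128], [139, 69, 19]],
                   dtype=np.float32)
PALETTE_NAMES = ["black", "white", "red", "green", "blue", "yellow", "grey", "brown"]

def dominant_colors(image_rgb, white_thresh=240, top_k=2):
    """Rough version of the abstract's pipeline: background removal,
    palette mapping, then dominant-color counting."""
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    # Remove near-white background pixels (the first threshold in the claim).
    keep = ~np.all(pixels >= white_thresh, axis=1)
    item_pixels = pixels[keep]
    # Map every remaining pixel to its nearest palette entry.
    dists = np.linalg.norm(item_pixels[:, None, :] - PALETTE[None, :, :], axis=2)
    nearest = np.argmin(dists, axis=1)
    # Dominant colors = palette entries with the highest share of pixels.
    counts = np.bincount(nearest, minlength=len(PALETTE))
    shares = counts / max(counts.sum(), 1)
    order = np.argsort(shares)[::-1][:top_k]
    return [(PALETTE_NAMES[i], float(shares[i])) for i in order]
```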
20210358172METHOD AND DEVICE FOR DETERMINING THE HOMOGENEITY OF SKIN COLOR - In various embodiments, a method for determining a homogeneity of complexion is provided. The method may include provision of a digital image on which skin is portrayed and which is parameterized in a color space which is defined by a parameter set in which one of the parameters is a hue, identifying and/or defining at least one skin examination area in the transformed image, calculating a hue distribution in the at least one skin examination area, and determining at least one homogeneity value for the complexion based on the calculated hue distribution.2021-11-18
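One plausible way to turn the hue distribution of 20210358172 into a single homogeneity value is the mean resultant length from circular statistics, since hue is an angular quantity. The sketch below uses OpenCV's HSV conversion; the application does not specify this particular statistic, so treat it as an assumption.

```python
import cv2
import numpy as np

def hue_homogeneity(skin_patch_bgr):
    """One plausible homogeneity value: circular concentration of the hue channel.

    skin_patch_bgr -- a cropped skin examination area, uint8 BGR.
    Returns a value in [0, 1]; 1 means every pixel has the same hue.
    """
    hsv = cv2.cvtColor(skin_patch_bgr, cv2.COLOR_BGR2HSV)
    # OpenCV hue is in [0, 180); convert to radians on the full circle.
    hue = hsv[..., 0].astype(np.float32) * (2.0 * np.pi / 180.0)
    # Mean resultant length of the hue distribution (circular statistics).
    r = np.hypot(np.cos(hue).mean(), np.sin(hue).mean())
    return float(r)
```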
20210358173COMPUTATIONALLY EFFICIENT METHOD FOR COMPUTING A COMPOSITE REPRESENTATION OF A 3D ENVIRONMENT - Methods and apparatus for providing a representation of an environment, for example, in an XR system, and any suitable computer vision and robotics applications. A representation of an environment may include one or more planar features. The representation of the environment may be provided by jointly optimizing plane parameters of the planar features and sensor poses that the planar features are observed at. The joint optimization may be based on a reduced matrix and a reduced residual vector in lieu of the Jacobian matrix and the original residual vector.2021-11-18
20210358174METHOD AND APPARATUS OF DATA COMPRESSION - A method and apparatus for processing color data includes storing fragment pointer and color data together in a color buffer. A delta color compression (DCC) key indicating the color data to fetch for processing is stored, and the fragment pointer and color data is fetched based upon the read DCC key for decompression.2021-11-18
20210358175THREE-DIMENSIONAL DATA MULTIPLEXING METHOD, THREE-DIMENSIONAL DATA DEMULTIPLEXING METHOD, THREE-DIMENSIONAL DATA MULTIPLEXING DEVICE, AND THREE-DIMENSIONAL DATA DEMULTIPLEXING DEVICE - A three-dimensional data multiplexing method includes: multiplexing pieces of data of a plurality of types including point cloud data to generate an output signal having a file configuration that is predetermined; and storing, in metadata in the file configuration, information indicating a type of each of the pieces of data included in the output signal.2021-11-18
20210358176IMAGE PROCESSING APPARATUS AND METHOD - The present disclosure relates to image processing apparatus and method that can prevent a reduction in image quality. Geometry data that is a frame image having arranged thereon a projected image obtained by projecting 3D data representing a three-dimensional structure on a two-dimensional plane and includes a special value indicating occupancy map information in a range is generated. The generated geometry data is encoded. Further, the encoded data on the geometry data is decoded, and a depth value indicating a position of the 3D data and the occupancy map information are extracted from the decoded geometry data. The present disclosure is applicable to, for example, an information processing apparatus, an image processing apparatus, electronic equipment, an information processing method, or a program.2021-11-18
20210358177GENERATING MODIFIED DIGITAL IMAGES UTILIZING A GLOBAL AND SPATIAL AUTOENCODER - The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.2021-11-18
20210358178ITERATIVE MEDIA OBJECT COMPRESSION ALGORITHM OPTIMIZATION USING DECOUPLED CALIBRATION OF PERCEPTUAL QUALITY ALGORITHMS - One or more multi-stage optimization iterations are performed with respect to a compression algorithm. A given iteration comprises a first stage in which hyper-parameters of a perceptual quality algorithm are tuned independently of the compression algorithm. A second stage of the iteration comprises tuning hyper-parameters of the compression algorithm using a set of perceptual quality scores generated by the tuned perceptual quality algorithm. The final stage of the iteration comprises performing a compression quality evaluation test on the tuned compression algorithm.2021-11-18
20210358179METHOD AND APPARATUS FOR FEATURE SUBSTITUTION FOR END-TO-END IMAGE COMPRESSION - A method of feature substitution for end-to-end image compression, is performed by at least one processor and includes encoding an input image, using a first neural network, to generate an encoded representation, and quantizing the generated encoded representation, using a second neural network, to generate a compressed representation. The first neural network and the second neural network are trained by determining a rate loss, based on a bitrate of the generated compressed representation, and updating the generated encoded representation, based on the determined rate loss.2021-11-18
20210358180DATA COMPRESSION USING INTEGER NEURAL NETWORKS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reliably performing data compression and data decompression across a wide variety of hardware and software platforms by using integer neural networks. In one aspect, there is provided a method for entropy encoding data which defines a sequence comprising a plurality of components, the method comprising: for each component of the plurality of components: processing an input comprising: (i) a respective integer representation of each of one or more components of the data which precede the component in the sequence, (ii) an integer representation of one or more respective latent variables characterizing the data, or (iii) both, using an integer neural network to generate data defining a probability distribution over the predetermined set of possible code symbols for the component of the data.2021-11-18
20210358181DISPLAY DEVICE AND DISPLAY CONTROL METHOD - The present technology relates to a display device and a display control method that enable improvement of user experience. Provided is a display device including: a control unit that, when displaying a video corresponding to an image frame obtained by capturing a user on a display unit, controls luminance of a lighting region including a first region including the user in the image frame and at least a part of a second region, the second region excluding the first region, to cause the lighting region to function as a light that emits light to the user. The present technology can be applied, for example, to a television receiver.2021-11-18
20210358182SYSTEM AND METHOD FOR COLOR MAPPING FOR IMPROVED VIEWING BY A COLOR VISION DEFICIENT OBSERVER - A method and system for color mapping digital visual content for improved viewing by a color vision deficient observer include receiving the digital visual content to be color mapped, clustering color values of the digital visual content into a plurality of color clusters, assigning each color cluster to a respective one of a set of target color values in which the set of target color values has increased visual distinguishability for the color vision deficient observer; and for each color cluster, mapping the color values of the color cluster to the target color value, thereby generating a color-mapped digital visual content. One or more regions of interest of the content can be identified and the color mapping may be applied only to those regions of interest.2021-11-18
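The cluster-then-remap flow of 20210358182 can be prototyped with k-means. In the sketch below, scikit-learn's KMeans clusters the content's colors and each cluster center is snapped to the nearest entry of a caller-supplied target palette; how that palette is chosen for a given color vision deficiency is outside the sketch, and the abstract gives no algorithm for it.

```python
import numpy as np
from sklearn.cluster import KMeans

def remap_for_cvd(image_rgb, target_palette, n_clusters=8):
    """Cluster the image's colors and snap each cluster to a target palette entry.

    target_palette -- (M, 3) array of colors assumed to be highly distinguishable
                      by the intended observer.
    """
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    # 1. Cluster the content's color values.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    # 2. Assign every cluster center to its nearest target color.
    centers = km.cluster_centers_                              # (n_clusters, 3)
    d = np.linalg.norm(centers[:, None, :] - target_palette[None, :, :], axis=2)
    cluster_to_target = target_palette[np.argmin(d, axis=1)]   # (n_clusters, 3)
    # 3. Replace each pixel's color with its cluster's target color.
    mapped = cluster_to_target[km.labels_].reshape(h, w, 3)
    return mapped.astype(np.uint8)
```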
20210358183Systems and Methods for Multi-Kernel Synthesis and Kernel Conversion in Medical Imaging - Systems and methods are provided for synthesizing information from multiple image series of different kernels into a single image series, and also for converting a single baseline image series of a kernel reconstructed by a CT scanner to image series of various other kernels, using deep-learning based methods. For multi-kernel synthesis, a single set of images with desired high spatial resolution and low image noise can be synthesized from multiple image series of different kernels. The synthesized kernel is sufficient for a wide variety of clinical tasks, even in circumstances that would otherwise require many separate image sets. Kernel conversion may be configured to generate images with arbitrary reconstruction kernels from a single baseline kernel. This would reduce the burden on the CT scanner and the archival system, and greatly simplify the clinical workflow.2021-11-18
20210358184SYSTEMS AND METHODS FOR REPRESENTING OBJECTS USING A SIX-POINT BOUNDING BOX - System, methods, and other embodiments described herein relate to improving a representation of objects in a surrounding environment. In one embodiment, a method includes, in response to receiving sensor data depicting the surrounding environment including a corridor that defines a left boundary and a right boundary, identifying at least one object from the sensor data. The method includes transforming segmented data from the sensor data that represents the object into a bounding box by defining the bounding box according to six points relative to the corridor. The method includes providing the six points of the bounding box as a reduced representation of the object.2021-11-18
20210358185DATA REDUCTION FOR GENERATING HEAT MAPS - Techniques of collecting and displaying data include mapping user interaction data having multiple components (or, dimensions) to a plurality of buckets representing a set of values of each of the components. When a user causes a computer to generate user interaction data by interacting with an object on an electronic display, the computer performs a mapping of the many components of the user interaction data to a plurality of buckets. Each bucket represents a set of values of the user interaction data. The number of buckets is far smaller than the number of possible data points. Accordingly, rather than individual, multidimensional data points being transmitted to another computer that compiles the user interaction data into heat maps, a relatively small number of bucket identifiers are transmitted. In this way, the analysis of the user interaction data requires minimal resources and can take place in real time.2021-11-18
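The bucketing idea in 20210358185 is easy to make concrete: quantize each interaction's coordinates into a coarse grid and ship only bucket counts instead of raw multidimensional events. The grid size below is an arbitrary example.

```python
import numpy as np

def bucketize(interactions, grid=(32, 32)):
    """Reduce raw (x, y) interaction coordinates in [0, 1) to coarse bucket counts.

    Instead of transmitting every multidimensional event, the client sends only
    bucket identifiers (or this count table), which is enough to draw a heat map.
    """
    xy = np.asarray(interactions, dtype=np.float64)
    cols = np.minimum((xy[:, 0] * grid[0]).astype(int), grid[0] - 1)
    rows = np.minimum((xy[:, 1] * grid[1]).astype(int), grid[1] - 1)
    bucket_ids = rows * grid[0] + cols
    counts = np.bincount(bucket_ids, minlength=grid[0] * grid[1])
    return counts.reshape(grid[1], grid[0])   # ready to render as a heat map

# e.g. 10,000 clicks reduce to a 32x32 table instead of 10,000 coordinate pairs
heat = bucketize(np.random.rand(10_000, 2))
print(heat.shape, heat.sum())
```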
20210358186DYNAMIC VISUALIZATION AND DATA ANALYTICS BASED ON PARAMETER ACTIONS - Embodiments are directed to managing visualizations of data using a network computer. A modeling engine may provide a data model that includes a plurality of data objects and a display model that includes a plurality of display objects based on the plurality of data objects. Parameter action objects may be associated with display objects in the display model. In response to an activation of the parameter action objects, values associated with display objects may be assigned to the parameter associated with a parameter action object and actions associated with the parameter may be executed to modify the display model.2021-11-18
20210358187DISPLAY THAT USES A LIGHT SENSOR TO GENERATE ENVIRONMENTALLY MATCHED ARTIFICIAL REALITY CONTENT - A display assembly generates environmentally matched virtual content for an electronic display. The display assembly includes a display controller and a display. The display controller is configured to estimate environmental matching information for a target area within a local area based in part on light information received from a light sensor. The target area is a region for placement of a virtual object. The light information describes light values. The display controller generates display instructions for the target area based in part on a human vision model, the estimated environmental matching information, and rendering information associated with the virtual object. The display is configured to present the virtual object as part of artificial reality content in accordance with the display instructions. The color and brightness of the virtual object is environmentally matched to the portion of the local area surrounding the target area.2021-11-18
20210358188CONVERSATIONAL AI PLATFORM WITH RENDERED GRAPHICAL OUTPUT - In various examples, a virtually animated and interactive agent may be rendered for visual and audible communication with one or more users with an application. For example, a conversational artificial intelligence (AI) assistant may be rendered and displayed for visual communication in addition to audible communication with end-users. As such, the AI assistant may leverage the visual domain—in addition to the audible domain—to more clearly communicate with users, including interacting with a virtual environment in which the AI assistant is rendered. Similarly, the AI assistant may leverage audio, video, and/or text inputs from a user to determine a request, mood, gesture, and/or posture of a user for more accurately responding to and interacting with the user.2021-11-18
20210358189Advanced Systems and Methods for Automatically Generating an Animatable Object from Various Types of User Input - Dynamically customized animatable 3D models of virtual characters (“avatars”) are generated in real time from multiple inputs from one or more devices having various sensors. Each input may comprise a point cloud associated with a user's face/head. An example method comprises receiving inputs from sensor data from multiple sensors of the device(s) in real time, and pre-processing the inputs for determining orientation of the point clouds. The method may include registering the point clouds to align them to a common reference; automatically detecting features of the point clouds; deforming a template geometry based on the features to automatically generate a custom geometry; determining a texture of the inputs and transferring the texture to the custom geometry; deforming a template control structure based on the features to automatically generate a custom control structure; and generating an animatable object having the custom geometry, the transferred texture, and the custom control structure.2021-11-18
20210358190METHOD OF GENERATING ITEM FOR AVATAR, RECORDING MEDIUM AND COMPUTING DEVICE - Methods of generating an item for an avatar, recording mediums having recorded thereon a program that, when executed by a processor, causes a computing device to execute such methods, and computing devices for implementing such methods may be provided. The method includes extracting a target item selected by a user device from an image, classifying the extracted target item into a category, providing a template of the target item in association with the category, obtaining a style attribute of the extracted target item from the image, and generating a virtual item to be applied to the avatar based on a modified template, the modified template created by adding the obtained style attribute to the template.2021-11-18
20210358191PRECISION MODULATED SHADING - A GPU is disclosed, which may include a VRS interface to provide spatial information and/or primitive-specific information. The GPU may include one or more shader cores including a control logic section to determine a shading precision value based on the spatial information and/or the primitive-specific information. The control logic section may modulate a shading precision according to the shading precision value. A method for controlling shading precision by a GPU may include providing, by a VRS interface, the spatial information and/or primitive-specific information. The method may include determining, by a control logic section, a shading precision value based on the spatial information and/or the primitive-specific information. The method may include modulating a shading precision according to the shading precision value.2021-11-18
20210358192RENDERING METHOD AND APPARATUS - Embodiments of this application provide a rendering method and apparatus, and the like. The method includes: A processor (which is usually a CPU) modifies a rendering instruction based on a relationship between a first frame buffer and a second frame buffer, so that a GPU renders a rendering job corresponding to the first frame buffer to the second frame buffer based on a new rendering instruction. In this application, render passes of one or more frame buffers are redirected to another frame buffer. In this way, memory occupation in a rendering process of an application program is effectively reduced, bandwidth of the GPU is reduced, and power consumption can be reduced.2021-11-18
20210358193GENERATING AN IMAGE FROM A CERTAIN VIEWPOINT OF A 3D OBJECT USING A COMPACT 3D MODEL OF THE 3D OBJECT - A method for generating an image from a certain viewpoint of an object that is three dimensional, the method comprises: rendering an image of the object, based on a compact 3D model of the object and at least one two-dimensional (2D) texture map associated with the certain viewpoint.2021-11-18
20210358194GENERATING A 3D VISUAL REPRESENTATION OF THE 3D OBJECT USING A NEURAL NETWORK SELECTED OUT OF MULTIPLE NEURAL NETWORKS - A method for generating a three dimensional (3D) visual representation of a sensed object that is three dimensional, the method comprises obtaining at least one 3D visual representation parameter, the visual representation parameter being selected out of a size parameter, a resolution parameter, and a resource consumption parameter; obtaining object information that represents the sensed object; selecting, based on the at least one parameter, a neural network for generating the visual representation of the sensed object; and generating the 3D visual representation of the 3D object by the selected neural network.2021-11-18
20210358195HYBRID TEXTURE MAP TO BE USED DURING 3D VIDEO CONFERENCING - A method for generating a texture map used during a video conference, the method may include obtaining multiple texture maps of multiple areas of at least a portion of a three-dimensional (3D) object; wherein the multiple texture maps comprise a first texture map of a first area and of a first resolution, and a second texture map of a second area and of a second resolution, wherein the first area differs from the second area and the first resolution differs from the second resolution; and generating a texture map of the at least portion of the 3D object, the generating being based on the multiple texture maps; and utilizing the visual representation of the at least portion of the 3D object based on the texture map of the at least portion of the 3D object during the video conference.2021-11-18
20210358196METHOD AND SYSTEM FOR DIFFUSING COLOR ERROR INTO ADDITIVE MANUFACTURED OBJECTS - A method of processing data for additive manufacturing of a 3D object comprises: receiving graphic elements defining a surface of the object, and an input color texture to be visible over a surface of the object; transforming the elements to voxelized computer object data; constructing a 3D color map having a plurality of pixels, each being associated with a voxel and being categorized as either a topmost pixel or an internal pixel. Each topmost pixel is associated with a group of internal pixels forming a receptive field for the topmost pixel. A color-value is assigned to each topmost pixel and each internal pixel of a receptive field associated with the topmost pixel, based on the color texture and according to a subtractive color mixing. A material to be used during the additive manufacturing is designated based on the color-value.2021-11-18
20210358197TEXTURED NEURAL AVATARS - The present invention relates generally to the field of computer vision and computer graphics to produce full body renderings of a person for varying person pose and camera positions and, in particular, to a system and method for synthesizing 2-D image of a person. The method for synthesizing 2-D image of a person comprises: receiving (S2021-11-18
20210358198USING DIRECTIONAL RADIANCE FOR INTERACTIONS IN PATH TRACING - Disclosed approaches provide for interactions of light transport paths in a virtual environment to share directional radiance when rendering a scene. Directional radiance that may be shared includes outgoing directional radiance of interactions, incoming directional radiance of interactions, and/or information derived therefrom. The shared directional radiance may be used for various purposes, such as computing lighting contributions at one or more interactions of a light transport path, and/or for path guiding. Directional radiance of an interaction may be shared with another interaction when the interaction is sufficiently similar (e.g., in radiance direction) to serve as an approximation of a sample for the other interaction. Sharing directional radiance may provide for online learning of directional radiance, which may build finite element approximations of light fields at the interactions.2021-11-18
20210358199POSITION-BASED MEDIA PIPELINE FOR VOLUMETRIC DISPLAYS - Position based media pipeline systems and methods for volumetric displays provide content to a volumetric display having at least two pixels arranged in a 3D coordinate space. A three-dimensional (3D) pixel position dataset and a 3D animation are provided and a first volume representation based on the 3D animation is created. A second volume is created based on the first volume and including color data. A texture atlas is assembled based on the second volume and volumetric image data is generated based on the texture atlas. The position based media pipeline outputs the volumetric image data to one or more graphic controllers. The volumetric image data can be output whereby a user can preview the volumetric image data in addition to output to the volumetric display.2021-11-18
20210358200RENDERING METHOD, COMPUTER PRODUCT AND DISPLAY APPARATUS - The present disclosure relates to an image rendering method for a computer product coupled to a display apparatus. The image rendering method may include rendering an entire display region of the display apparatus with a first rendering mode to generate a first rendering mode sample image, determining a target region in the entire display region, rendering the target region with a second rendering mode to generate a second rendering mode sample image, and transmitting data of the first rendering mode sample image and the second rendering mode sample image. The second rendering mode comprises at least a value of an image rendering feature that is higher than that of the first rendering mode.2021-11-18
20210358201CONSTRUCTION VISUALIZATION SYSTEMS AND METHODS - A construction visualization device generates a digital model of a structure for construction in a physical space. Notably, the digital model includes at least one model marker and the physical space includes at least one physical marker. The device also determines a viewing orientation of the digital model for display relative to the physical space based on the at least one model marker and the at least one physical marker. In addition, the device identifies a model position of a model part that corresponds to a physical position of a physical part of the structure, and displays at least a portion of the digital model based on the viewing orientation to indicate the model position of the model part.2021-11-18
20210358202Room Labeling Drawing Interface for Activity Tracking and Detection - Exemplary embodiments include an intelligent secure networked architecture configured by at least one processor to execute instructions stored in memory, the architecture comprising a data retention system and a machine learning system, a web services layer providing access to the data retention and machine learning systems, an application server layer that provides a user-facing application that accesses the data retention and machine learning systems through the web services layer and performs processing based on user interaction with an interactive graphical user interface provided by the user-facing application, the user-facing application configured to execute instructions for a method for room labeling for activity tracking and detection, the method including making a 2D sketch of a first room on an interactive graphical user interface, and using machine learning to turn the 2D sketch of the first room into a 3D model of the first room.2021-11-18
20210358203Reflection Rendering in Computer Generated Environments - Methods and systems for remote rendering of extended reality (XR) objects are described herein. A server may receive an image of a physical environment. The image may include different views of the physical environment around a client device. The server may render at least one surface of a virtual object based on the different views of the physical environment. The at least one surface may include a reflection of another object of the physical environment from a view point of the client device at the time the image was taken. The server may generate graphics including the rendered at least one surface. The server may send the generated graphics to the client device to enable display of a computer generated environment on the client device. The computer generated environment may include the at least one virtual object with an appearance of a reflective surface.2021-11-18
20210358204IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM - A virtual viewpoint foreground image generating unit generates a virtual viewpoint foreground image, which is an image of a foreground object seen from a virtual viewpoint without a shadow, based on received multi-viewpoint images and a received virtual viewpoint parameter. A virtual viewpoint background image generating unit generates a virtual viewpoint background image, which is an image of a background object seen from the virtual viewpoint, based on the received multi-viewpoint images and virtual viewpoint parameter. A shadow mask image generating unit generates shadow mask images from the received multi-viewpoint images. A shadow-added virtual viewpoint background image generating unit renders a shadow in the virtual viewpoint background image based on the received virtual viewpoint background image, shadow mask images, and virtual viewpoint parameter. A combined image generating unit generates a virtual viewpoint image by combining the virtual viewpoint foreground image with the shadow-added virtual viewpoint background image.2021-11-18
20210358205IMAGE-COMPARISON BASED ANALYSIS OF SUBSURFACE REPRESENTATIONS - 2D slices/images may be extracted from a three-dimensional volume of subsurface data. Image comparison analysis across sequential 2D slices/images may identify boundaries within the corresponding subsurface region, such as changes in style of deposition or reservoir property distribution. Identification of temporal/spatial boundaries in the subsurface region where subsurface properties change may facilitate greater understanding of the scales and controls on heterogeneity, and connectivity between different locations.2021-11-18
20210358206UNMANNED AERIAL VEHICLE NAVIGATION MAP CONSTRUCTION SYSTEM AND METHOD BASED ON THREE-DIMENSIONAL IMAGE RECONSTRUCTION TECHNOLOGY - An unmanned aerial vehicle navigation map construction system based on three-dimensional image reconstruction technology comprises an unmanned aerial vehicle, a data acquiring component and a three-dimensional navigation map construction system, wherein the three-dimensional navigation map construction system comprises an image set input system, a feature point extraction system, a sparse three-dimensional point cloud reconstruction system, a dense three-dimensional point cloud reconstruction system, a point cloud model optimization system and a three-dimensional navigation map reconstruction system. A scene image set is input into the three-dimensional navigation map construction system, feature point detection is carried out on all images, a sparse point cloud model of the scene and a dense point cloud model of the scene are reconstructed, the model is optimized by removing a miscellaneous point and reconstructing the surface, and a three-dimensional navigation map of the scene is reconstructed.2021-11-18
20210358207Method for Preserving Shapes in Solid Model When Distributing Material During Topological Optimization With Distance Fields - A method preserves shapes in a solid model when distributing material during topological optimization. A 3D geometric model of a part having a boundary shape is received. The geometric model is pre-processed to produce a variable-void distance field and to produce a frozen distance field representing the boundary shape. The geometric model is apportioned into a plurality of voxels, and a density value is adjusted for each voxel according to an optimization process. An iso-surface mesh is extracted from the voxel data, and an iso-surface distance field is generated from the extracted iso-surface mesh. A distance field intersection is derived between the iso-surface distance field and the variable-void distance field. A distance field union is performed between the distance field intersection and the frozen distance field, and a result iso-surface mesh is produced from the distance field union.2021-11-18
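The distance-field Booleans in 20210358207 follow the usual signed-distance-field identities: with the negative-inside convention, an intersection is a pointwise max and a union is a pointwise min. A small numpy illustration follows; the field names are hypothetical and the convention is my assumption.

```python
import numpy as np

def sdf_intersection(d1, d2):
    """CSG intersection of two signed distance fields (negative-inside convention)."""
    return np.maximum(d1, d2)

def sdf_union(d1, d2):
    """CSG union of two signed distance fields (negative-inside convention)."""
    return np.minimum(d1, d2)

# The abstract's combination step, with hypothetical field names:
#   result = union(intersection(iso_surface_sdf, variable_void_sdf), frozen_sdf)
def combine_fields(iso_surface_sdf, variable_void_sdf, frozen_sdf):
    return sdf_union(sdf_intersection(iso_surface_sdf, variable_void_sdf), frozen_sdf)
```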
20210358208METHOD, DEVICE AND COMPUTER READABLE STORAGE MEDIUM FOR GENERATING VOLUME FOLIATION - The present invention discloses a method, device and computer readable storage medium (CRSM) for generating a volume foliation. The method comprises: constructing a pants decomposition graph on a smooth closed surface S of genus g>1, wherein the surface S has g handles, and the pants are a genus-zero triangular mesh surface with g boundaries; constructing an initial mapping f2021-11-18
20210358209METHOD AND DEVICE FOR DETERMINING PLURALITY OF LAYERS OF BOUNDING BOXES, COLLISION DETECTION METHOD AND DEVICE, AND MOTION CONTROL METHOD AND DEVICE - A method for determining a plurality of layers of bounding boxes of an object includes determining a polyhedron capable of accommodating the object therein as a first-layer bounding box. The method also includes selecting one vertex from a plurality of vertices of the first-layer bounding box as a target vertex, and determining, by processing circuitry of a computing device, a support plane of the object. The support plane has a normal vector that has a specific direction corresponding to the target vertex and that is closest to the target vertex, the support plane being a plane passing through a point on a surface of the object such that the object is completely located on one side of the support plane. The method further includes cutting the first-layer bounding box based on at least the support plane to form a smaller bounding box, as a second-layer bounding box.2021-11-18
20210358210Method for Preserving Shapes in Solid Model When Distributing Material During Topological Optimization - A method preserves shapes in a solid model when distributing material during topological optimization. A 3D geometric model of a part having a boundary shape is received. The geometric model is pre-processed to produce a variable-void mesh and to produce a frozen mesh representing the boundary shape. The geometric model is apportioned into a plurality of voxels, and a density value is adjusted for each voxel according to an optimization process. An iso-surface mesh is extracted from the voxel data, and a mesh Boolean intersection is derived between the extracted iso-surface mesh and the variable-void mesh. A mesh Boolean union is performed between the mesh Boolean intersection and the frozen mesh.2021-11-18
202103582113D OBJECT ACQUISITION METHOD AND APPARATUS USING ARTIFICIAL LIGHT PHOTOGRAPH - Disclosed herein are a three-dimensional (3D) object acquisition method and an object acquisition apparatus using artificial light photographs. A 3D object acquisition method according to an embodiment of the present invention includes receiving a plurality of images of a 3D object photographed by a camera, reconstructing spatially-varying bidirectional reflectance distribution functions for the 3D object based on the plurality of images received, estimating shading normals for the 3D object based on the reconstructed spatially-varying bidirectional reflectance distribution functions, and acquiring 3D geometry for the 3D object based on the estimated shading normals.2021-11-18
20210358212Reinforced Differentiable Attribute for 3D Face Reconstruction - Techniques performed by a data processing system for reconstructing a three-dimensional (3D) model of the face of a human subject herein include obtaining source data comprising a two-dimensional (2D) image, three-dimensional (3D) image, or depth information representing a face of a human subject. Reconstructing the 3D model of the face also includes generating a 3D model of the face of the human subject based on the source data by analyzing the source data to produce a coarse 3D model of the face of the human subject, and refining the coarse 3D model through free form deformation to produce a fitted 3D model. The coarse 3D model may be a 3D Morphable Model (3DMM), and the coarse 3D model may be refined through free-form deformation in which the deformation of the mesh is limited by applying an as-rigid-as-possible (ARAP) deformation constraint.2021-11-18
20210358213METHOD, APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM FOR POINT CLOUD DATA PROCESSING - A method, an electronic device and a readable storage medium for point cloud data processing, which may be used for autonomous driving, are disclosed. The feature vectors of respective points in the first point cloud data and second point cloud data are pre-learned, and thus the feature vectors of the first key points may be determined directly based on the learnt second feature vectors of respective first neighboring points of the respective first key points in the first point cloud data, and the feature vectors of the candidate key points may be determined directly based on the learnt third feature vectors of the respective second neighboring points of respective candidate key points in the second point cloud data corresponding to the first key points.2021-11-18
20210358214MATCHING MESHES FOR VIRTUAL AVATARS - Examples of systems and methods for matching a base mesh to a target mesh for a virtual avatar or object are disclosed. The systems and methods may be configured to automatically match a base mesh of an animation rig to a target mesh, which may represent a particular pose of the virtual avatar or object. Base meshes may be obtained by manipulating an avatar or object into a particular pose, while target meshes may be obtained by scanning, photographing, or otherwise obtaining information about a person or object in the particular pose. The systems and methods may automatically match a base mesh to a target mesh using rigid transformations in regions of higher error and non-rigid deformations in regions of lower error.2021-11-18
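For the rigid part of the matching described above, the standard Kabsch/Procrustes solution gives the best-fit rotation and translation between corresponding vertices of the base and target meshes in a region; the sketch below shows that step only (the non-rigid deformation for low-error regions is not shown), and the toy data is invented.

    import numpy as np

    def rigid_align(base, target):
        """Best-fit rotation R and translation t so that base @ R.T + t ~= target.
        base, target: (N, 3) corresponding vertex positions."""
        cb, ct = base.mean(axis=0), target.mean(axis=0)
        H = (base - cb).T @ (target - ct)            # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = ct - R @ cb
        return R, t

    # Toy check: recover a known rotation about z plus a translation.
    theta = 0.3
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    base = np.random.rand(50, 3)
    target = base @ Rz.T + np.array([0.1, -0.2, 0.05])
    R, t = rigid_align(base, target)
    assert np.allclose(base @ R.T + t, target, atol=1e-6)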
202103582153D PAINT EFFECTS IN A MESSAGING SYSTEM - Systems and methods are provided for determining a location of a selection in a space viewable in a camera view on a display of a computing device, detecting movement of the computing device, and generating a path based on the location of the selection and the movement of the computing device. The systems and methods further provide for generating a three-dimensional (3D) mesh along the path, populating the 3D mesh with selected options to generate a 3D paint object, and causing the generated 3D paint object to be displayed. The systems and methods further provide for receiving a request to send a message comprising an image or video overlaid by the 3D paint object, capturing the image or video overlaid by the displayed 3D paint object, generating the message comprising the image or video overlaid by the 3D paint object, and sending the message to another computing device.2021-11-18
20210358216SYSTEM AND METHOD FOR OPTIMIZING THE RENDERING OF DYNAMICALLY GENERATED GEOMETRY - Particular embodiments described herein present a technique for mesh simplification. A computing system may receive a request to render an image of a virtual scene including a virtual object. The system may determine one or more positions of the virtual object relative to one or more of a foveal focus point or a lens, respectively. The system may determine a screen coverage size of the virtual object. The system may then determine a simplification level for the virtual object based on the determined position(s) and the screen coverage size of the virtual object. The system may generate a mesh representation of the virtual object based on the determined simplification level, where the number of polygons used in the mesh representation depends on the determined simplification level. The system may render the image of the virtual scene using at least the generated mesh representation of the virtual object.2021-11-18
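One hedged way to turn the position and screen-coverage inputs above into a discrete simplification level is sketched below; the 60-degree eccentricity scale, the weights, and the number of levels are invented for illustration and are not taken from the patent.

    def simplification_level(angle_from_fovea_deg, screen_coverage, levels=4):
        """Pick a mesh simplification level: 0 = full detail, levels-1 = coarsest.

        angle_from_fovea_deg : angular distance of the object from the foveal focus point
        screen_coverage      : fraction of the screen covered by the object, in [0, 1]
        """
        # Hypothetical scoring: farther from the fovea and smaller on screen -> coarser mesh.
        eccentricity = min(angle_from_fovea_deg / 60.0, 1.0)   # 60 deg ~ edge of useful field of view
        smallness = 1.0 - min(screen_coverage * 10.0, 1.0)     # tiny objects need few polygons
        score = 0.7 * eccentricity + 0.3 * smallness
        return min(int(score * levels), levels - 1)

    # A large object under the fovea keeps full detail; a small peripheral one is coarsened.
    print(simplification_level(2.0, 0.30))    # -> 0
    print(simplification_level(45.0, 0.01))   # -> 3 (coarsest of 4 levels)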
20210358217MULTI-PASS OBJECT RENDERING USING A THREE-DIMENSIONAL GEOMETRIC CONSTRAINT - A device for performing multi-pass object rendering using a three-dimensional geometric constraint may include at least one processor configured to receive a mesh of points corresponding to a head of a user. The at least one processor may be further configured to render an image of a sphere and to render elements corresponding to facial features based at least in part on the mesh of points. The at least one processor may be further configured to render an element visibility mask based at least in part on the mesh of points, the element visibility mask being constrained to the surface of the sphere. The at least one processor may be further configured to composite the sphere, the elements, and the element visibility mask to generate an output image. The at least one processor may be further configured to provide the output image for display.2021-11-18
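The final compositing pass described above can be sketched as straightforward mask-weighted blending: the element render is placed over the sphere render wherever the element visibility mask is set. The array shapes and values below are toy assumptions.

    import numpy as np

    def composite(sphere_rgb, elements_rgb, visibility_mask):
        """Composite facial-feature elements over a sphere render.

        sphere_rgb, elements_rgb : (H, W, 3) float images in [0, 1]
        visibility_mask          : (H, W) float mask in [0, 1], 1 where elements are visible
        """
        a = visibility_mask[..., None]                 # broadcast the mask over RGB channels
        return a * elements_rgb + (1.0 - a) * sphere_rgb

    # Toy usage on a 4x4 image: the left half shows elements, the right half the sphere.
    H, W = 4, 4
    sphere = np.full((H, W, 3), 0.2)
    elements = np.full((H, W, 3), 0.9)
    mask = np.zeros((H, W))
    mask[:, :2] = 1.0
    print(composite(sphere, elements, mask)[0])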
20210358218360 VR VOLUMETRIC MEDIA EDITOR - A method includes obtaining medical images of the internal anatomy of a particular patient; preparing a three dimensional virtual model of the patient; generating a virtual reality environment using said virtual model of the patient to provide realistic three dimensional images of actual tissues of the patient; providing an interface to receive user input defining a path through the internal anatomy of the patient within the virtual reality environment to capture various perspectives of the realistic three dimensional images of the internal anatomy of actual tissues of the patient; and generating a patient video capturing the defined path through the internal anatomy of the patient within the virtual reality environment, said patient video showing views of various perspectives of the realistic three dimensional images of the internal anatomy of actual tissues of the patient, said patient video being configured to play on a general purpose computing device.2021-11-18
20210358219METHODS AND APPARATUS FOR ATLAS MANAGEMENT OF AUGMENTED REALITY CONTENT - The present disclosure relates to methods and apparatus for graphics processing. The apparatus can determine an eye-buffer including one or more bounding boxes associated with rendered content in a frame. The apparatus can also generate an atlas based on the eye-buffer, the atlas including one or more patches associated with the one or more bounding boxes. Additionally, the apparatus can communicate the atlas including the one or more patches. The apparatus can also calculate an amount of user motion associated with the rendered content in the frame. Further, the apparatus can determine a size of each of the one or more bounding boxes based on the calculated amount of user motion. The apparatus can also determine a size and location of each of the one or more patches in the atlas.2021-11-18
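Two pieces of the atlas management described above are sketched below under stated assumptions: each bounding box is grown by a margin proportional to the calculated user motion, and the resulting patches are laid out in the atlas with a simple shelf packer. The margin factor, atlas width, and function names are invented for illustration.

    def expand_box(box, motion_px):
        """Grow a bounding box (x, y, w, h) by a margin proportional to user motion."""
        x, y, w, h = box
        m = int(round(0.5 * motion_px))          # hypothetical margin: half the motion magnitude
        return (x - m, y - m, w + 2 * m, h + 2 * m)

    def pack_atlas(patches, atlas_w):
        """Place (w, h) patches on shelves of a fixed-width atlas; return (x, y) offsets."""
        offsets, x, y, shelf_h = [], 0, 0, 0
        for w, h in patches:
            if x + w > atlas_w:                  # start a new shelf when the row is full
                x, y, shelf_h = 0, y + shelf_h, 0
            offsets.append((x, y))
            x += w
            shelf_h = max(shelf_h, h)
        return offsets

    boxes = [(10, 10, 64, 48), (200, 40, 32, 32)]
    patches = [expand_box(b, motion_px=8.0)[2:] for b in boxes]   # (w, h) of each patch
    print(patches, pack_atlas(patches, atlas_w=256))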
20210358220ADAPTING AN AUGMENTED AND/OR VIRTUAL REALITY - The disclosure relates to a display apparatus for displaying an augmented and/or virtual reality. The display apparatus has a sensor unit configured to capture a user input. The display apparatus is further configured to receive a medical planning dataset and receive a medical image dataset having at least one projection mapping. The display apparatus is further configured to generate and display the augmented and/or virtual reality based on the planning dataset and the at least one projection mapping. The display apparatus is further configured to adapt a virtual spatial positioning of the planning dataset relative to the at least one projection mapping in the augmented and/or virtual reality based on the user input. The disclosure further relates to a system, a method for registering a planning dataset with an image dataset, and a computer program product.2021-11-18
20210358221VEHICLE COMPONENT DISPLAY DEVICE AND VEHICLE COMPONENT DISPLAY METHOD - A vehicle component display device including: a memory; and a processor coupled to the memory, the processor being configured to: acquire a predetermined reference shape from a captured image of a vehicle that is captured by an imaging section; read three-dimensional data of component images corresponding to the reference shape, and display the component images at a display in a state in which the component images are superimposed on the vehicle, the display being visible to a user; display a component configuration diagram at the display together with the component images; and in a case in which a component is selected in the component configuration diagram, emphasize display of the component in the component images displayed at the display.2021-11-18
20210358222PRIVACY PRESERVING EXPRESSION GENERATION FOR AUGMENTED OR VIRTUAL REALITY CLIENT APPLICATIONS - Wearable systems for privacy preserving expression generation for augmented or virtual reality client applications. An example method includes receiving, by an expression manager configured to communicate expression information to client applications, a request from a client application for access to the expression information. The expression information reflects information derived from one or more sensors of the wearable system, with the client application being configured to present virtual content including an avatar rendered based on the expression information. A user interface is output for presentation which requests user authorization for the client application to access the expression information. In response to receiving user input indicating user authorization, access to the expression information is enabled. The client application obtains periodic updates to the expression information, and the avatar is rendered based on the periodic updates.2021-11-18
20210358223INTERFERENCE BASED AUGMENTED REALITY HOSTING PLATFORMS - Interference-based augmented reality hosting platforms are presented. Hosting platforms can include networking nodes capable of analyzing a digital representation of a scene to derive interference among elements of the scene. The hosting platform utilizes the interference to adjust the presence of augmented reality objects within an augmented reality experience. Elements of a scene can constructively interfere, enhancing presence of augmented reality objects; or destructively interfere, suppressing presence of augmented reality objects.2021-11-18
20210358224SYSTEMS AND METHODS FOR MIXED REALITY - A virtual image generation system comprises a planar optical waveguide having opposing first and second faces, an in-coupling (IC) element configured for optically coupling a collimated light beam from an image projection assembly into the planar optical waveguide as an in-coupled light beam, a first orthogonal pupil expansion (OPE) element associated with the first face of the planar optical waveguide for splitting the in-coupled light beam into a first set of orthogonal light beamlets, a second orthogonal pupil expansion (OPE) element associated with the second face of the planar optical waveguide for splitting the in-coupled light beam into a second set of orthogonal light beamlets, and an exit pupil expansion (EPE) element associated with the planar optical waveguide for splitting the first and second sets of orthogonal light beamlets into an array of out-coupled light beamlets that exit the planar optical waveguide.2021-11-18
20210358225TRAVERSING PHOTO-AUGMENTED INFORMATION THROUGH DEPTH USING GESTURE AND UI CONTROLLED OCCLUSION PLANES - Systems and methods are described that obtain depth data associated with a scene captured by an electronic device, obtain location data associated with a plurality of physical objects within a predetermined distance of the electronic device, generate a plurality of augmented reality (AR) objects configured to be displayed over a portion of the plurality of physical objects, and generate a plurality of proximity layers corresponding to the scene, wherein a respective proximity layer is configured to trigger display of auxiliary data corresponding to AR objects associated with the respective proximity layer while suppressing other AR objects.2021-11-18
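A minimal sketch of the proximity-layer idea above, assuming layers are simple depth buckets: AR objects are grouped by their distance from the device, and only the active layer's objects trigger auxiliary data while the rest are suppressed. The 5-meter layer depth and the data layout are assumptions made for this sketch.

    def build_proximity_layers(ar_objects, layer_depth=5.0):
        """Group AR objects into proximity layers by distance from the device.

        ar_objects  : list of (object_id, distance_in_meters)
        layer_depth : depth covered by each layer, in meters
        Returns {layer_index: [object_id, ...]} with layer 0 closest to the device.
        """
        layers = {}
        for obj_id, dist in ar_objects:
            layers.setdefault(int(dist // layer_depth), []).append(obj_id)
        return layers

    def visible_objects(layers, active_layer):
        """Only the active layer's objects trigger auxiliary data; others are suppressed."""
        return layers.get(active_layer, [])

    objs = [("cafe", 3.0), ("bus_stop", 7.5), ("museum", 22.0)]
    layers = build_proximity_layers(objs)
    print(layers)                       # {0: ['cafe'], 1: ['bus_stop'], 4: ['museum']}
    print(visible_objects(layers, 1))   # ['bus_stop']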
20210358226SYSTEMS AND METHODS FOR PROVIDING AN AUGMENTED-REALITY VIRTUAL TREASURE HUNT - A method for providing a treasure hunt in augmented reality includes presenting an indication of a starting point of a path through an environment that, when followed, allows a virtual gift card to be obtained. Then, as a mobile computer system travels through the environment from a location proximate the starting point, navigation indications to allow the path to be followed are presented. Presenting the navigation indications may include capturing images of portions of the environment, detecting locations corresponding to the path, modifying a captured image based on a detected location by compositing it with a navigation indication corresponding to a direction of the path, and displaying the modified captured image. Detection that the mobile computer system has been moved to a location proximate an ending point of the path may trigger an update to an account to associate the virtual gift card therewith.2021-11-18
20210358227UPDATING 3D MODELS OF PERSONS - A method for updating a current three-dimensional (3D) model of a person may include calculating current locations, within a two-dimensional (2D) space, of current face landmark points of a face of the person within a first image, the calculating being based on the current 3D model and one or more current acquisition parameters of a 2D camera, wherein the current 3D model of the person is located within a 3D space; calculating second locations, within the 2D space, of second face landmark points of the face of the person within a second image that follows the first image; calculating correspondences between the current locations and the second locations; calculating, based on the correspondences, locations of the second face landmark points within the 3D space; and modifying the current 3D model based on the locations of the second face landmark points within the 3D space.2021-11-18
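The "current locations within the 2D space" above amount to projecting the 3D model's landmark points through the camera; the sketch below shows a plain pinhole projection given intrinsics K and a pose [R|t], which detected landmarks in the following frame could then be matched against. The intrinsic values and landmark coordinates are toy assumptions.

    import numpy as np

    def project_landmarks(points_3d, K, R, t):
        """Project 3D landmark points (N, 3) into pixel coordinates (N, 2)
        using camera intrinsics K (3x3), rotation R (3x3) and translation t (3,)."""
        cam = points_3d @ R.T + t            # world -> camera coordinates
        uvw = cam @ K.T                      # camera -> homogeneous pixel coordinates
        return uvw[:, :2] / uvw[:, 2:3]      # perspective divide

    # Toy usage with an identity pose and a simple intrinsic matrix.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R, t = np.eye(3), np.zeros(3)
    landmarks_3d = np.array([[0.0, 0.0, 2.0], [0.1, -0.05, 2.0]])   # points 2 m in front
    print(project_landmarks(landmarks_3d, K, R, t))
    # Correspondences between these projections and the next frame's detected 2D
    # landmarks (e.g., nearest neighbor) would then drive the 3D model update.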
20210358228DYNAMIC REGISTRATION OF ANATOMY USING AUGMENTED REALITY - A system for dynamic registration of anatomy using augmented reality can include an augmented reality system, an imaging system, a measuring system, and a computer system. The augmented reality system can be configured to display an augmented representation. The imaging system can be configured to image an anatomical feature of the patient and can generate anatomical imaging data. The measuring system can be configured to measure an anatomical movement of the patient and can generate anatomical movement data. The computer system can be configured to receive the anatomical imaging and positional data and the anatomical movement data, generate the augmented representation based on the anatomical imaging data, associate the augmented representation with the anatomical movement data, render the augmented representation on the augmented reality system, and selectively update the augmented representation based on the anatomical movement data.2021-11-18
20210358229ROOM MIRROR REMOVAL MONITORING DEVICE WITH ELECTRONIC TOLL COLLECTION FUNCTION - Provided is a room mirror removal monitoring device with an electronic toll collection (ETC) function. The room mirror removal monitoring device includes a room mirror holder to which a room mirror having the ETC function is fixed and in which an insertion recess is formed toward windshield glass, a removal switch inserted into the insertion recess, a mirror base inserted into a region that does not overlap with the removal switch in the insertion recess and fixed to the windshield glass, and a monitoring part configured to monitor a removal state of the room mirror having the ETC function.2021-11-18
20210358230TIMING DEVICE - The present invention relates to a device for time measurement of a sporting movement of a person, wherein the device comprises a housing and a proximity sensor. According to the invention, the proximity sensor is formed to emit a signal for detection of the movement and to receive a reflection of the signal at the person for time measurement. To this end, the proximity sensor is arranged in the housing in such a manner that the movement in a longitudinal direction and at least in another direction can be detected.2021-11-18
20210358231VEHICLE MONITORING DEVICE, VEHICLE MONITORING METHOD, AND RECORDING MEDIUM - A vehicle monitoring device includes a communication part that communicates with one or more vehicles, an acquisition part configured to acquire identification information of the vehicle and starting information of a driving source of the vehicle from the vehicle using the communication part, and a processing part configured to extract a specified vehicle, a driving source of which is not started for a predetermined period or more, on the basis of the starting information and to perform a process corresponding to the specified vehicle.2021-11-18
20210358232MANAGING THE OPERATIONAL STATE OF A VEHICLE - A network system facilitates management of the operational states of transportation vehicles. Within a system environment, the network system also coordinates transport service between service providers operating the transportation vehicles and service requestors operating client devices. A transportation vehicle includes a processor or computing device that can determine and change operational states of the transportation vehicle. The transportation vehicle communicates operational states to one or more devices in the environment. Operational states can be communicated as vehicle datasets using a communication port of the transportation vehicle. The network system and client devices can act to enhance transport service using vehicle datasets. For example, the network system and/or client devices can manage an operational state by changing the current operational state of the transportation vehicle to a different operational state. Transport service can be enhanced at various points during the transport service coordination.2021-11-18
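A minimal sketch of managing an operational state with an explicit transition table is shown below; the state names and allowed transitions are invented for illustration and are not the network system's actual states.

    # Hypothetical operational states and allowed transitions for a transportation vehicle.
    ALLOWED = {
        "offline": {"available"},
        "available": {"en_route_to_pickup", "offline"},
        "en_route_to_pickup": {"on_trip", "available"},
        "on_trip": {"available"},
    }

    class VehicleState:
        def __init__(self, state="offline"):
            self.state = state

        def change(self, new_state):
            """Change the current operational state to a different one, if permitted."""
            if new_state not in ALLOWED.get(self.state, set()):
                raise ValueError(f"cannot go from {self.state} to {new_state}")
            self.state = new_state
            return self.state

    v = VehicleState()
    v.change("available")
    v.change("en_route_to_pickup")
    print(v.state)                     # en_route_to_pickup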
20210358233RIDE MONITORING SYSTEM - The system is an integrated solution that provides end-to-end visibility for 3rd party observers (parents, school administrators, social workers, etc.), and system safety personnel (“Safe Ride Support Specialists”) to monitor each trip in real-time for safety-related anomalies. These incidents generate alerts that are prioritized and distributed to the appropriate party to take action through a combination of automated and manual processes. An advantage of the system is that it does not require the passenger to have a cell phone or other mobile device. In addition, the system is active instead of passive, with alerts being issued to observers in response to certain triggers. In addition, the system can take automated actions to preserve passenger safety.2021-11-18
20210358234Downloading System Memory Data in Response to Event Detection - A method includes detecting an event occurring on a vehicle. The vehicle includes at least one computing device that controls at least one operation of the vehicle. The at least one computing device includes a first computing device comprising system memory. In response to detecting the event, data is downloaded from the system memory to a non-volatile memory device of the vehicle. In some cases, a control action for the vehicle is implemented based on analysis of the downloaded data.2021-11-18
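A hedged sketch of the event-triggered download described above: when an event is detected, the bytes already read from system memory are written to non-volatile storage together with a small metadata record for later analysis. The file naming, paths, and event name are invented for this sketch.

    import json
    import time

    def on_event_detected(event_type, system_memory_snapshot, dump_dir="."):
        """On event detection, persist system-memory data to non-volatile storage
        and record a small metadata file for later analysis of the event."""
        stem = f"{dump_dir}/{event_type}_{int(time.time())}"
        with open(stem + ".bin", "wb") as f:
            f.write(system_memory_snapshot)                  # the downloaded memory contents
        with open(stem + ".json", "w") as f:
            json.dump({"event": event_type, "bytes": len(system_memory_snapshot)}, f)
        return stem + ".bin"

    # Toy usage: a detected hard-braking event triggers a dump of a (fake) memory buffer.
    print(on_event_detected("hard_braking", b"\x00" * 1024))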
20210358235SYSTEM AND METHOD FOR VEHICLE INSPECTION - An automated vehicle inspection system utilizing a watertight booth process of a manufacturing plant includes a diagnostic terminal mounted in a vehicle and connected to an engine control unit (ECU) of the vehicle through vehicle communication, sequentially operating individual electric components through the ECU based on stored electric component inspection items while the vehicle passes through a watertight booth and receiving individual operating currents measured accordingly to determine whether the electric components normally operate, a transceiver connecting the diagnostic terminal and wireless diagnostic communication through an antenna disposed in the watertight booth process, and an inspector recognizing a vehicle ID of a vehicle that enters the watertight booth, transmitting inspection items according to a vehicle type and specification of the vehicle ID to the diagnostic terminal through the diagnostic communication, and recognizing a vehicle ID of a vehicle that leaves the watertight booth to collect inspection results determined in the diagnostic terminal.2021-11-18
20210358236VEHICULAR POSITION ESTIMATION SYSTEM - A vehicular position estimation system estimates a portable terminal position with respect to a vehicle. Each of three or more in-vehicle communication devices is attached at a different position of the vehicle. Each of the multiple in-vehicle communication devices generates distance information indicating a distance from the in-vehicle communication device to the portable terminal. The vehicular position estimation system includes a position coordinate calculation portion that calculates a position coordinate of the portable terminal, and an area inside-outside determination portion that determines whether the portable terminal is inside the system actuation area.2021-11-18
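The position-coordinate calculation above can be illustrated with classic trilateration: subtracting one range equation from the others yields a linear least-squares system for the terminal position, shown here in the vehicle's 2D plane with invented device positions and a hypothetical terminal location.

    import numpy as np

    def trilaterate_2d(anchors, distances):
        """Estimate a 2D position from anchor coordinates (N, 2) and measured
        distances (N,) to each anchor, N >= 3, via linearized least squares."""
        anchors = np.asarray(anchors, float)
        d = np.asarray(distances, float)
        x0, y0, d0 = anchors[0, 0], anchors[0, 1], d[0]
        # Subtract the first range equation from the rest: A @ [x, y] = b
        A = 2.0 * (anchors[1:] - anchors[0])
        b = (d0**2 - d[1:]**2
             + anchors[1:, 0]**2 - x0**2
             + anchors[1:, 1]**2 - y0**2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos

    # Toy usage: three in-vehicle communication devices and a terminal at (1.2, 0.4).
    devices = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.5)]
    true_pos = np.array([1.2, 0.4])
    dists = [np.linalg.norm(true_pos - np.array(p)) for p in devices]
    print(trilaterate_2d(devices, dists))   # ~ [1.2, 0.4]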
20210358237Access system with at least one gate - The application relates to an access system.2021-11-18
20210358238AUTOMATIC EMERGENCY DOOR UNLOCK SYSTEM - In some implementations, systems and techniques are described to automatically unlock a front door of a property in response to detecting an alarm signal indicating an emergency at or near a property. Data indicating occurrence of an emergency condition at a property is initially obtained. A lock configuration for an electronic lock of the property is determined. An unlock instruction is generated for the electronic lock based on the determined lock configuration for the electronic lock. The unlock instruction is transmitted to the electronic lock such that, when the unlock instruction is received by the electronic lock, the electronic lock is unlocked according to the unlock instruction.2021-11-18
20210358239SYSTEM FOR AUTHORIZING COMMUNICATION SYSTEM TO CONTROL REMOTE DEVICE - A control module for a remote device comprises a trainable transmitter configured to communicate a radio frequency signal configured to control the remote device via a first communication protocol. The control module further comprises a communication circuit configured to communicate with a mobile device via a second communication protocol and a user interface comprising at least one user input. The control module comprises a controller configured to communicate programming information for the remote device with a remote server via the second communication protocol and assign the programming information to the at least one user input. The controller is further configured to control the trainable transmitter to output a control signal based on the programming information in response to the at least one user input. The control signal is configured to control the remote device.2021-11-18
20210358240SELF-SERVICE MODULAR DROP SAFES WITH MESSENGER ACCESS CAPABILITY - Novel modular smart management devices in the form of drop safes include the modular components of a chassis, door and technology cabinet. The drop safes enable retailers to make cash deposits quickly and safely within or near their own facilities. Various technologies, including RFID readers, RFID tags, and other equipment, allow the drop safes to identify each deposited bag. Employees utilize specialized apps on their mobile devices to facilitate deposit creation and other tasks. Novel methodologies for accessing the drop safes for emptying employ single-use, time-expiration type authorization codes along with other security measures to minimize risk and to provide other benefits. Novel structures along with methodologies for replacing, on-site, modular components with auto-detection of functionality during initialization and re-initialization enable efficient replacement and upgrading of components, including the upgrading of safes to provide additional functionality.2021-11-18
20210358241SYSTEMS AND METHODS FOR LOCATION IDENTIFICATION AND TRACKING USING A CAMERA - Systems and methods for location identification and tracking of a person, object and/or vehicle. The methods involve: obtaining, by a computing system, a video of a surrounding environment which was captured by a portable camera coupled to the person, object or vehicle; comparing, by the computing system, first images of the video to pre-stored second images to identify geographic locations where the first images were captured by the portable camera; analyzing, by the computing system, the identified geographic locations to verify that the person, object or vehicle is (1) traveling along a correct path, (2) traveling towards a facility for which the person, object or vehicle has authorization to enter, or (3) traveling towards a zone or secured area internal or external to the facility for which the person, object or vehicle has authorization to enter; and transmitting a notification from the computing system indicating the results of the analyzing.2021-11-18
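The "correct path" check above can be sketched as a simple geometric test: each geographic location identified from the video is compared against the planned route by its distance to the nearest segment of the route polyline (coordinates treated as planar for simplicity). The tolerance and the toy route below are assumptions for the sketch, not values from the patent.

    import numpy as np

    def distance_to_path(point, path):
        """Minimum distance from a 2D point to a polyline given as an (N, 2) array."""
        p = np.asarray(point, float)
        best = np.inf
        for a, b in zip(path[:-1], path[1:]):
            ab = b - a
            t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)  # project onto the segment
            best = min(best, np.linalg.norm(p - (a + t * ab)))
        return best

    def on_correct_path(locations, path, tolerance=25.0):
        """True if every identified location lies within `tolerance` of the planned route."""
        return all(distance_to_path(loc, path) <= tolerance for loc in locations)

    route = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0]])   # planned route (e.g., meters)
    seen = [(10.0, 2.0), (95.0, 5.0), (100.0, 40.0)]              # locations recovered from video
    print(on_correct_path(seen, route))                           # True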
20210358242Quarantine Gate Apparatus For Supporting Quarantine Measures For A Facility To Be Accessed By Multiple Persons In A Non-Contact Manner - A quarantine gate apparatus to be disposed at a boundary of a facility to be accessed by multiple users is provided. The quarantine gate apparatus includes an inner part bordering on the inside of the facility, an outer part bordering on the outside of the facility, and a conditional opening gate disposed between the inner part and the outer part. An entering person can reach the inside of the facility by sequentially passing through the outer part, the conditional opening gate, and the inner part. The quarantine gate apparatus includes a body temperature obtainer, a body temperature analyzer, an identification information receiver, an identification information storage, and a gate controller.2021-11-18
20210358243SYSTEM AND METHOD FOR BIOMETRIC ACCESS CONTROL - A process for granting or denying a user access to a system using biometrics is disclosed. The process includes receiving a unique identifier for the system, receiving a unique identifier associated with the user, and verifying that the user is authorized to access the system. A passcode is transmitted to a device in the possession of the user, and a speech sample of the user speaking the passcode is returned. One or more attributes of the speech sample are compared with one or more attributes that are expected to be in the speech sample. Access is granted or denied based upon a correlation between the speech sample's actual attributes and the expected attributes.2021-11-18
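A hedged sketch of the final decision above: a one-time passcode is issued for the user to speak, and the returned speech sample's feature vector is compared with the enrolled (expected) features by cosine similarity. Feature extraction from the audio itself is outside the sketch, and the threshold, vector length, and function names are invented.

    import secrets
    import numpy as np

    def issue_passcode(n_digits=6):
        """Generate a one-time numeric passcode to be spoken by the user."""
        return "".join(str(secrets.randbelow(10)) for _ in range(n_digits))

    def grant_access(sample_features, enrolled_features, threshold=0.85):
        """Grant access if the spoken sample's features correlate with the enrolled ones."""
        a = np.asarray(sample_features, float)
        b = np.asarray(enrolled_features, float)
        similarity = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))   # cosine similarity
        return similarity >= threshold

    code = issue_passcode()
    # In practice the features would be extracted from the speech sample of the user saying `code`.
    enrolled = np.random.rand(64)
    sample = enrolled + 0.05 * np.random.rand(64)      # a close match, for illustration
    print(code, grant_access(sample, enrolled))        # e.g. "493028 True"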