45th week of 2021 patent application highlights part 51 |
Patent application number | Title | Published |
20210350545 | IMAGE PROCESSING METHOD AND APPARATUS, AND HARDWARE APPARATUS - The disclosure discloses an image processing method and apparatus, and a hardware apparatus. The image processing method includes the following steps: an audio is acquired, and then is preprocessed to obtain audio attribute data at each first time point of the audio; first audio attribute data corresponding to a current time point is acquired; and preset processing is performed on an image to be processed according to the first audio attribute data. According to the image processing method of the embodiments of the disclosure, when performing preset processing on the image to be processed according to the audio attribute data, the image processing can be completed only by setting the relationship between the audio attribute data and the image processing operation, which improves the flexibility and efficiency of image processing. | 2021-11-11 |
20210350546 | ADAPTIVE VIDEO STREAMING - A method, system and apparatus for image capture, analysis and transmission are provided. A link aggregation method involves identifying controller network ports to a source connected to the same subnetwork; producing packets associating corresponding controller network ports selected by the source CPU for substantially uniform selection; and transmitting the packets to their corresponding network ports. An image analysis method involves producing by a camera an indication whether a region of an image differs by a threshold extent from a corresponding region of a reference image; transmitting the indication and image data to a controller via a communications network; and storing at the controller the image data and the indication in association therewith. The controller may perform operations according to positive indications. A transmission method involves receiving user input in respect of a video stream and transmitting, in accordance with the user input, selected data packets of selected image frames thereof. | 2021-11-11 |
20210350547 | LEARNING APPARATUS, FOREGROUND REGION ESTIMATION APPARATUS, LEARNING METHOD, FOREGROUND REGION ESTIMATION METHOD, AND PROGRAM - Estimation data indicating a foreground region is generated with high precision. A learning apparatus includes an input-image acquisition section that acquires a combination of a first input image representing a background and a foreground and a second input image representing the background in a mode different from that in the first input image, and a learning section that includes an estimation section to generate estimation data indicating the foreground region in the first input image in response to input of the first input image and the second input image, and that conducts learning at the estimation section on the basis of a given teacher image and the estimation data that is generated when the first input image and the second input image are inputted. | 2021-11-11 |
20210350548 | METHOD FOR DETERMINING A RELATIVE MOVEMENT USING A DIGITAL IMAGE SEQUENCE - A method for determining a movement of a device relative to at least one object based on a digital image sequence of the object recorded from the location of the device. The method includes computing a plurality of optical flow fields from image pairs of the digital image sequence; finding the position of an object in a partial image region in the most current image in each case and assigning the partial image region to the object; forming a plurality of partial optical flow fields from the plurality of optical flow fields; selecting a partial flow field from the plurality of partial flow fields in accordance with at least one criterion to facilitate the estimation of a change in scale of the object; and estimating the change in scale for the at least one object using the assigned partial image region based on the selected partial flow field. | 2021-11-11 |
20210350549 | MOTION LEARNING WITHOUT LABELS - A machine learning model is described that is trained without labels to predict a motion field between a pair of images. The trained model can be applied to a distinguished pair of images to predict a motion field between the distinguished pair of images. | 2021-11-11 |
20210350550 | GAZE ESTIMATION USING ONE OR MORE NEURAL NETWORKS - Apparatuses, systems, and techniques are presented to estimate user gaze. In at least one embodiment, one or more neural networks are used to determine coarse and fine gaze estimates for one or more users. | 2021-11-11 |
20210350551 | ESTIMATION APPARATUS, LEARNING APPARATUS, ESTIMATION METHOD, LEARNING METHOD, AND PROGRAM - An estimation apparatus, a learning apparatus, an estimation method, and a learning method, and a program capable of accurate body tracking without attaching many trackers to a user are provided. A feature extraction section ( | 2021-11-11 |
20210350552 | SELECTING VIEWPOINTS FOR RENDERING IN VOLUMETRIC VIDEO PRESENTATIONS - One example of a method includes receiving a plurality of video streams depicting a scene, wherein the plurality of video streams provides images of the scene from a plurality of different viewpoints, identifying a target that is present in the scene, wherein the target is identified based on a determination of a likelihood of being of interest to a viewer of the scene, determining a trajectory of the target through the plurality of video streams, wherein the determining is based in part on an automated visual analysis of the plurality of video streams, rendering a volumetric video traversal that follows the target through the scene, wherein the rendering comprises compositing the plurality of video streams, receiving viewer feedback regarding the volumetric video traversal, and adjusting the rendering in response to the viewer feedback. | 2021-11-11 |
20210350553 | IMAGE SLEEP ANALYSIS METHOD AND SYSTEM THEREOF - An image sleep analysis method and system thereof are disclosed. During sleep duration, a plurality of visible-light images of a body are obtained. Positions of image differences are determined by comparing the visible-light images. A plurality of features of the visible-light images are identified and positions of the features are determined. According to the positions of the image differences and features, the motion intensities of the features are determined. Therefore, a variation of the motion intensities is analyzed and recorded to provide accurate sleep quality. | 2021-11-11 |
20210350554 | EYE-TRACKING SYSTEM - An eye-tracking system configured to: receive a reference-image of an eye of a user, the reference-image being associated with reference-eye-data; receive one or more sample-images of the eye of the user; and, for each of the one or more sample-images: determine a difference between the reference-image and the sample-image to define a corresponding differential-image; and determine eye-data for the sample-image based on the differential-image and the reference-eye-data associated with the reference-image. | 2021-11-11 |
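The differential-image step in the eye-tracking entry above can be sketched as follows. This is a toy illustration, not the claimed method: the change threshold and the centroid read-out of the changed region are assumptions made for the example.

```python
import numpy as np

def differential_image(reference: np.ndarray, sample: np.ndarray) -> np.ndarray:
    """Pixel-wise absolute difference between a reference eye image and a
    sample eye image; regions that changed (e.g. a moved pupil) become
    non-zero values in the differential image."""
    return np.abs(sample.astype(np.int32) - reference.astype(np.int32)).astype(np.uint8)

def estimate_shift(reference: np.ndarray, sample: np.ndarray):
    """Toy eye-data update: centroid of the changed pixels, usable as an
    offset relative to the reference eye-data."""
    diff = differential_image(reference, sample)
    ys, xs = np.nonzero(diff > 10)          # hypothetical change threshold
    if len(xs) == 0:
        return (0.0, 0.0)                   # no change: keep reference eye-data
    return (float(xs.mean()), float(ys.mean()))

# Reference frame with a dark "pupil" at one spot, sample with it moved.
ref = np.full((8, 8), 200, dtype=np.uint8)
ref[2, 2] = 0
samp = np.full((8, 8), 200, dtype=np.uint8)
samp[5, 6] = 0
print(estimate_shift(ref, samp))
```

The appeal of the differential formulation is that only the change relative to the calibrated reference frame has to be interpreted, rather than the full sample image.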
20210350555 | SYSTEMS AND METHODS FOR DETECTING PROXIMITY EVENTS - Systems and techniques are provided for tracking puts and takes of inventory items by sources and sinks in an area of real space. The system can include sensors producing a plurality of sequences of images of corresponding fields of view in the real space. The system can include image recognition logic, receiving sequences of images from the plurality of sequences. The image recognition logic processes the images in sequences to identify locations of sources and sinks over time represented in the images. The system can include logic to process the identified locations of sources and sinks over time to detect an exchange of an inventory item between sources and sinks. | 2021-11-11 |
20210350556 | METHODS FOR IDENTIFYING DENDRITIC PORES - A method for identifying a dendritic pore is provided. Line and pore images are obtained from a digital image of a subject's skin. These line and pore images are overlaid to identify those pores having at least one line intersecting the pore as dendritic pores. | 2021-11-11 |
20210350557 | RAPID EFFECTIVE CASE DEPTH MEASUREMENT OF A METAL COMPONENT USING PHYSICAL SURFACE CONDITIONING - A method for determining an effective case depth of a metal component includes forming a conditioned core surface by blasting or shot peening an exposed surface of the metal component with blast media. The exposed surface is a contiguous exposed surface of the case and core. The method includes measuring surface texture, compressive stresses, or another suitable characteristic of the conditioned core surface using a surface metrology sensor, and identifying a case-core boundary using the measured characteristic, including identifying a location at which a predetermined difference or gradient in the characteristic is present within the exposed surface. The method also includes measuring the effective case depth as a perpendicular distance between a reference surface of the case and the case-core boundary. | 2021-11-11 |
20210350558 | SYSTEM FOR ASSEMBLING COMPOSITE GROUP IMAGE FROM INDIVIDUAL SUBJECT IMAGES - A system for assembling a group composite image from individual subject images is described. Often subjects of a group vary in height. Additionally, as each subject is photographed individually, different zoom factors can be applied by a camera that affects a pixel density of the image captured. The system includes a fiducial marking device that emits collimated light to form one or more fiducial markers on a subject while an image is captured by the camera. Based on a location of the fiducial markers in the image, a pixel density of the image and a reference height of the subject can be determined. The individual subject image can be scaled based on the pixel density and reference height to account for the varying subject heights and zoom factors to generate a group composite image that accurately represents the subjects of the group relative to one another. | 2021-11-11 |
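The rescaling arithmetic behind the composite-group-image entry above is simple to illustrate. The function and values below are hypothetical; the patent's fiducial geometry is more involved, but the core conversion from marker span to pixel density is this:

```python
def scale_factor(marker_px: float, marker_mm: float,
                 target_px_per_mm: float) -> float:
    """Given the pixel span of a fiducial marker of known physical size,
    compute the factor needed to rescale one subject's image to the
    common pixel density used for the group composite. Illustrative
    values; not the patented calibration procedure."""
    px_per_mm = marker_px / marker_mm          # this image's pixel density
    return target_px_per_mm / px_per_mm

# A 100 mm marker spans 400 px in this capture; the composite wants 2 px/mm.
print(scale_factor(400, 100, 2.0))            # image must be shrunk by half
```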
20210350559 | IMAGE DEPTH ESTIMATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM - An image depth estimation method, including: obtaining a reference frame corresponding to a current frame and an inverse depth space range of the current frame; performing pyramid downsampling processing on the current frame and the reference frame respectively to obtain k layers of current images corresponding to the current frame and k layers of reference images corresponding to the reference frame, where k is a natural number greater than or equal to 2; and performing inverse depth estimation iteration processing on the k layers of current images based on the k layers of reference images and the inverse depth space range to obtain inverse depth estimation results of the current frame. | 2021-11-11 |
20210350560 | DEPTH ESTIMATION - An image processing system to estimate depth for a scene. The image processing system includes a fusion engine to receive a first depth estimate from a geometric reconstruction engine and a second depth estimate from a neural network architecture. The fusion engine is configured to probabilistically fuse the first depth estimate and the second depth estimate to output a fused depth estimate for the scene. The fusion engine is configured to receive a measurement of uncertainty for the first depth estimate from the geometric reconstruction engine and a measurement of uncertainty for the second depth estimate from the neural network architecture, and use the measurements of uncertainty to probabilistically fuse the first depth estimate and the second depth estimate. | 2021-11-11 |
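The probabilistic fusion described in the depth-estimation entry above is commonly realised as inverse-variance weighting of Gaussian estimates. The sketch below shows that standard formulation under that assumption; it is not taken from the patent itself:

```python
import numpy as np

def fuse_depths(d_geo, var_geo, d_net, var_net):
    """Inverse-variance (maximum-likelihood Gaussian) fusion of two
    per-pixel depth maps: each estimate is modelled as Gaussian with its
    reported uncertainty, and the fused depth weights each source by its
    precision (1 / variance)."""
    w_geo = 1.0 / var_geo
    w_net = 1.0 / var_net
    fused = (w_geo * d_geo + w_net * d_net) / (w_geo + w_net)
    fused_var = 1.0 / (w_geo + w_net)
    return fused, fused_var

# Geometric estimate is confident (low variance), the network less so.
d_geo = np.array([2.0, 4.0])
d_net = np.array([3.0, 5.0])
fused, var = fuse_depths(d_geo, np.array([0.1, 0.1]), d_net, np.array([0.4, 0.4]))
print(fused)   # pulled toward the lower-variance geometric estimate
```

Note how the fused variance is smaller than either input variance: combining two independent estimates always increases precision under this model.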
20210350561 | PACKAGE MEASURING APPARATUS, PACKAGE ACCEPTING SYSTEM, PACKAGE MEASURING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - A package measuring apparatus includes a depth sensor and at least one processor, the package having a rectangular parallelepiped shape and being placed on a mounting table. The processor obtains spatial coordinates of four vertices within a space in which the center of the depth sensor is set as the point of origin based on data of a distance from the depth sensor to each of the four vertices and data of a position of each sensor element of the depth sensor corresponding to each of the four vertices. The processor calculates, based on the spatial coordinates of the four vertices, a length of each of three sides defined between a first vertex and three other vertices. | 2021-11-11 |
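The final calculation in the package-measuring entry above reduces to Euclidean distances between the recovered vertex coordinates. A minimal sketch, assuming the first vertex and its three adjacent vertices are already known in sensor-origin coordinates (the vertex layout here is illustrative):

```python
import math

def side_lengths(v0, v1, v2, v3):
    """Given spatial coordinates (metres, depth-sensor origin) of a box's
    first vertex and the three vertices adjacent to it, return the three
    edge lengths as Euclidean distances."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return dist(v0, v1), dist(v0, v2), dist(v0, v3)

# Axis-aligned 0.3 x 0.2 x 0.1 m box, 1 m in front of the sensor.
print(side_lengths((0, 0, 1.0), (0.3, 0, 1.0), (0, 0.2, 1.0), (0, 0, 1.1)))
```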
20210350562 | METHODS AND APPARATUS FOR DETERMINING VOLUMES OF 3D IMAGES - The techniques described herein relate to methods, apparatus, and computer readable media configured to determine an estimated volume of an object captured by a three-dimensional (3D) point cloud. A 3D point cloud comprising a plurality of 3D points and a reference plane in spatial relation to the 3D point cloud is received. A 2D grid of bins is configured along the reference plane, wherein each bin of the 2D grid comprises a length and width that extends along the reference plane. For each bin of the 2D grid, a number of 3D points in the bin and a height of the bin from the reference plane are determined. An estimated volume of an object captured by the 3D point cloud is determined based on the number of 3D points in each bin and the height of each bin. | 2021-11-11 |
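The binning idea in the volume-estimation entry above can be sketched in a few lines. This is a simplified illustration, not the claimed method: the z=0 reference plane, the cell size, and the max-height-per-bin rule are assumptions for the example.

```python
import numpy as np

def estimate_volume(points: np.ndarray, cell: float = 0.1) -> float:
    """Estimate object volume from an Nx3 point cloud above the z=0
    reference plane: bin points onto a 2D grid along the plane, take the
    maximum height per occupied bin, and sum cell_area * height."""
    xy = np.floor(points[:, :2] / cell).astype(int)      # bin index per point
    heights = {}
    for (ix, iy), z in zip(map(tuple, xy), points[:, 2]):
        heights[(ix, iy)] = max(heights.get((ix, iy), 0.0), z)
    return cell * cell * sum(heights.values())

# A flat 0.2 x 0.1 m slab of height 0.05 m, sampled at bin centres.
xs, ys = np.meshgrid(np.arange(0.05, 0.2, 0.1), np.arange(0.05, 0.1, 0.1))
pts = np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, 0.05)])
print(estimate_volume(pts))
```

Per-bin point counts (which the entry also uses) would let an implementation reject sparsely supported bins as noise before summing.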
20210350563 | Determining the location of a mobile device - A computer-implemented method of determining the location of a mobile device comprising a camera. The method comprises the steps of capturing, using the camera, a sequence of images over a period of time; for pairs of consecutive images from the sequence of images, determining, using a first neural network, features indicative of the motion of the device between the time the first image of the pair of images was captured and the time the second image of the pair of images was captured; for a sequence of consecutive images, determining, using a second neural network, features indicative of the location of the device from the features determined by the first neural network; and for a sequence of consecutive images, determining the location of the device from the features determined by the second neural network. | 2021-11-11 |
20210350564 | DISPLAY SYSTEMS AND METHODS FOR ALIGNING DIFFERENT TRACKING MEANS - A display system including: display apparatus; display-apparatus-tracking means; input device; processor. The processor is configured to: detect input event and identify actionable area of input device; process display-apparatus-tracking data to determine pose of display apparatus in global coordinate space; process first image to identify input device and determine relative pose thereof with respect to display apparatus; determine pose of input device and actionable area in global coordinate space; process second image to identify user's hand and determine relative pose thereof with respect to display apparatus; determine pose of hand in global coordinate space; adjust poses of input device and actionable area and pose of hand such that adjusted poses align with each other; process first image, to generate extended-reality image in which virtual representation of hand is superimposed over virtual representation of actionable area; and render extended-reality image. | 2021-11-11 |
20210350565 | CALIBRATION OF AN EYE TRACKING SYSTEM - There is provided mechanisms for calibration of an eye tracking system. An eye tracking system comprises a pupil centre corneal reflection (PCCR) based eye tracker and a non-PCCR based eye tracker. A method comprises obtaining at least one first eye position of a subject by applying the PCCR based eye tracker on an image set depicting the subject. The method comprises calibrating a head model of the non-PCCR based eye tracker, as applied on the image set, for the subject using the obtained at least one first eye position from the PCCR based eye tracker as ground truth. The head model comprises facial features that include at least one second eye position. The calibrating involves positioning the head model in order for its at least one second eye position to be consistent with the at least one first eye position given by the PCCR based eye tracker. | 2021-11-11 |
20210350566 | DEEP NEURAL NETWORK POSE ESTIMATION SYSTEM - A deep neural network provides real-time pose estimation by combining two custom deep neural networks, a location classifier and an ID classifier, with a pose estimation algorithm to achieve a 6DoF location of a fiducial marker. The locations may be further refined into subpixel coordinates using another deep neural network. The networks may be trained using a combination of auto-labeled videos of the target marker, synthetic subpixel corner data, and/or extreme data augmentation. The deep neural network provides improved pose estimations particularly in challenging low-light, high-motion, and/or high-blur scenarios. | 2021-11-11 |
20210350567 | PREDICTIVE VISUALIZATION OF MEDICAL IMAGING SCANNER COMPONENT MOVEMENT - An augmented reality system is provided for use with a medical imaging scanner. The AR system obtains a digital image from a camera, and identifies a pose of a gantry of the medical imaging scanner based on content of the digital image. The gantry includes a movable C-arm supporting an imaging signal transmitter and a detector panel that are movable along an arc relative to a station. A range of motion of the movable C-arm along the arc is determined based on the pose. A graphical object is generated based on the range of motion and the pose, and is provided to a display device for display as an overlay relative to the medical imaging scanner. | 2021-11-11 |
20210350568 | SYSTEMS AND METHODS TO CHECK-IN SHOPPERS IN A CASHIER-LESS STORE - Systems and techniques are provided for linking subjects in an area of real space with user accounts. The user accounts are linked with client applications executable on mobile computing devices. A plurality of cameras are disposed above the area. The cameras in the plurality of cameras produce respective sequences of images in corresponding fields of view in the real space. A processing system is coupled to the plurality of cameras. The processing system includes logic to determine locations of subjects represented in the images. The processing system further includes logic to match the identified subjects with user accounts by identifying locations of the mobile computing devices executing client applications in the area of real space and matching locations of the mobile computing devices with locations of the subjects. | 2021-11-11 |
20210350569 | APPARATUS FOR DETERMINING ARRANGEMENT OF OBJECTS IN SPACE AND METHOD THEREOF - A method of embedding a vector representing an arrangement state of objects in a 3D space according to an embodiment includes dividing a target space into M×N subspaces extending from a first surface among planes forming the target space having a cuboid shape, calculating, for each of the subspaces, a feature representing an object arrangement state in the subspace, inputting data representing the feature of each of the subspaces into an artificial neural network and obtaining, from the artificial neural network, an embedding vector representing an arrangement state of an object in the target space. | 2021-11-11 |
20210350570 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - The present disclosure relates to an image processing device, an image processing method, and a program that can reduce the amount of processing required for a series of processing from detection to recognition of an object in a high-resolution image. | 2021-11-11 |
20210350571 | 10-20 SYSTEM-BASED POSITION INFORMATION PROVIDING METHOD - Provided is a 10-20 system-based position information providing method performed by a computer. The method comprises the steps of: obtaining a head image of a subject; receiving, from a user, an input of at least four reference points on the basis of the head image; calculating central coordinates in the head image on the basis of the at least four reference points; and providing 10-20 system-based position information on the basis of the central coordinates. | 2021-11-11 |
20210350572 | POSITIONING METHOD, APPARATUS, DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM - This application provides a positioning method performed at a computer device. The method includes: obtaining an image photographed by an image capturing apparatus disposed at a roadside and a photographing orientation of the image; determining pixel coordinates in the image for a target object in the image; determining a relationship between the pixel coordinates corresponding to the photographing orientation and location coordinates in a physical world; and determining location coordinates of the target object in the physical world according to the relationship and the pixel coordinates. According to this application, location information of an object can be accurately determined according to a correspondence between pixel coordinates in an image and location coordinates in a physical world, and a transform relationship between pixel coordinates in different photographing orientations, thereby avoiding the problem that the target object cannot be positioned due to a blind area in a view field. | 2021-11-11 |
20210350573 | SYSTEMS AND METHODS FOR CHARACTERIZING OBJECT POSE DETECTION AND MEASUREMENT SYSTEMS - A method for characterizing a pose estimation system includes: receiving, from a pose estimation system, first poses of an arrangement of objects in a first scene; receiving, from the pose estimation system, second poses of the arrangement of objects in a second scene, the second scene being a rigid transformation of the arrangement of objects of the first scene with respect to the pose estimation system; computing a coarse scene transformation between the first scene and the second scene; matching corresponding poses between the first poses and the second poses; computing a refined scene transformation between the first scene and the second scene based on coarse scene transformation, the first poses, and the second poses; transforming the first poses based on the refined scene transformation to compute transformed first poses; and computing an average rotation error and an average translation error of the pose estimation system based on differences between the transformed first poses and the second poses. | 2021-11-11 |
20210350574 | CONVOLUTION-BASED CAMERA AND DISPLAY CALIBRATION - Techniques for calibrating cameras and displays are disclosed. An image of a target is captured using a camera. The target includes a tessellation having a repeated structure of tiles. The target further includes unique patterns superimposed onto the tessellation. Matrices are formed based on pixel intensities within the captured image. Each of the matrices includes values each corresponding to the pixel intensities within one of the tiles. The matrices are convolved with kernels to generate intensity maps. Each of the kernels is generated based on a corresponding unique pattern of the unique patterns. An extrema value is identified in each of the intensity maps. A location of each of the unique patterns within the image is determined based on the extrema value for each of the intensity maps. A device calibration is performed using the location of each of the unique patterns. | 2021-11-11 |
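The convolve-and-take-extremum step in the calibration entry above amounts to matched filtering of the per-tile intensity matrix. The sketch below uses an explicit sliding-window correlation for clarity; normalisation and the tessellation details are omitted, and the pattern/kernel values are illustrative:

```python
import numpy as np

def locate_pattern(intensity: np.ndarray, kernel: np.ndarray):
    """Slide a pattern kernel over a matrix of per-tile intensities and
    return the (row, col) where the correlation response is maximal,
    i.e. where the unique pattern most likely sits."""
    kh, kw = kernel.shape
    h, w = intensity.shape
    best, best_rc = -np.inf, (0, 0)
    for r in range(h - kh + 1):
        for c in range(w - kw + 1):
            score = float(np.sum(intensity[r:r+kh, c:c+kw] * kernel))
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

grid = np.zeros((6, 6))
grid[2:4, 3:5] = [[1, 0], [0, 1]]     # a diagonal pattern embedded at (2, 3)
print(locate_pattern(grid, np.array([[1.0, 0.0], [0.0, 1.0]])))
```

Running one such correlation per unique pattern, as the entry describes, yields one calibration landmark per kernel.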
20210350575 | THREE-DIMENSIONAL CAMERA POSE DETERMINATION - An example system includes a plurality of cameras to capture images of a target object and a controller to connect to the plurality of cameras. The controller is to control the plurality of cameras to capture the images of the target object. The controller is further to determine a pose of a camera of the plurality of cameras. The system further includes a platform to support the target object. The platform includes a plurality of unique markers arranged in a predetermined layout. The controller is further to determine the pose of the camera based on unique markers of the plurality of unique markers that are detected in an image captured by the camera. | 2021-11-11 |
20210350576 | NON-RIGID STEREO VISION CAMERA SYSTEM - A long-baseline and long depth-range stereo vision system is provided that is suitable for use in non-rigid assemblies where relative motion between two or more cameras of the system does not degrade estimates of a depth map. The stereo vision system may include a processor that tracks camera parameters as a function of time to rectify images from the cameras even during fast and slow perturbations to camera positions. Factory calibration of the system is not needed, and manual calibration during regular operation is not needed, thus simplifying manufacturing of the system. | 2021-11-11 |
20210350577 | IMAGE ANALYSIS DEVICE, IMAGE ANALYSIS METHOD, AND PROGRAM - An image analysis device according to the present invention includes: an image capturing unit that captures a subject; a light emitting unit that emits light to the subject; a control unit that causes the image capturing unit to capture a first image of the subject while causing the light emitting unit to emit light and causes the image capturing unit to capture a second image of the subject while not causing the light emitting unit to emit light; and an estimation unit that estimates color of the subject based on a differential value between color information on the first image and the color information on the second image. | 2021-11-11 |
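The differential step in the image-analysis entry above can be sketched directly: subtracting the ambient-only frame from the flash-lit frame isolates the contribution of the known light source. The values below are illustrative, and the clipping/casting choices are assumptions for the example:

```python
import numpy as np

def flash_differential(lit: np.ndarray, unlit: np.ndarray) -> np.ndarray:
    """Subtract the ambient-only (unlit) frame from the flash-lit frame,
    leaving only the contribution of the controlled light source; dividing
    by the emitter's known colour would then recover subject reflectance."""
    return np.clip(lit.astype(np.int32) - unlit.astype(np.int32), 0, 255).astype(np.uint8)

ambient = np.array([[[40, 50, 60]]], dtype=np.uint8)          # ambient light only
with_flash = np.array([[[140, 120, 90]]], dtype=np.uint8)     # ambient + flash
print(flash_differential(with_flash, ambient))                # flash-only contribution
```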
20210350578 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM - A comparison area detection unit of image processing circuitry detects an image area of a fading determination object on the basis of an image signal generated by an image capturing unit using a color filter as a comparison area. A color information generation unit generates color information from the image signal of the comparison area and uses the color information as comparison target information. A color information comparison unit compares the color information of the fading determination object as fading determination reference information with the comparison target information. A fading information generation unit generates fading information indicating that a fading level of the color filter exceeds a predetermined level or the fading level of the color filter on the basis of comparison information indicating a comparison result between the fading determination reference information and the comparison target information. The fading state of the color filter can be detected with precision. | 2021-11-11 |
20210350579 | Resolution of a Picture - A resolution is determined for a picture ( | 2021-11-11 |
20210350580 | Pattern-Based Image Data Compression - Methods and compression units for compressing a two-dimensional block of image element values. The method includes: dividing the two-dimensional block of image element values into a plurality of sub-blocks of image element values; identifying which pattern of a plurality of patterns is formed by the image element values of a first sub-block of the plurality of sub-blocks; and forming a compressed block of image element values by encoding the first sub-block in the compressed block of image element values with: (i) information identifying the pattern, and (ii) the image element values of the first sub-block forming the pattern. | 2021-11-11 |
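The pattern-identification step in the compression entry above can be illustrated with a toy pattern palette. The palette below (constant, horizontal stripes, vertical stripes) is entirely made up as a stand-in for whatever set an actual encoder defines:

```python
import numpy as np

def classify_subblock(sb: np.ndarray) -> int:
    """Identify which simple pattern a 2x2 sub-block forms, so a
    compressed stream could store a pattern id plus only the distinct
    values instead of all four elements.
    0 = constant, 1 = horizontal stripes, 2 = vertical stripes,
    -1 = no pattern (store raw)."""
    if (sb == sb[0, 0]).all():
        return 0                      # constant: one value suffices
    if (sb[0] == sb[0, 0]).all() and (sb[1] == sb[1, 0]).all():
        return 1                      # two horizontal stripes: two values
    if (sb[:, 0] == sb[0, 0]).all() and (sb[:, 1] == sb[0, 1]).all():
        return 2                      # two vertical stripes: two values
    return -1

print(classify_subblock(np.array([[5, 5], [9, 9]])))
```

Encoding the pattern id alongside only the values forming the pattern is what saves bits relative to storing every image element.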
20210350581 | ENCODING DEVICE, ENCODING METHOD, AND DECODING DEVICE - An encoding device includes: an encoding processing section that performs encoding processing on image data serving as a processing target; and a control unit that controls the encoding processing to make a bit rate in a rate control area higher than a bit rate in an area other than the rate control area, the rate control area being located near a division boundary when the image data serving as the processing target is divided into a plurality of regions. | 2021-11-11 |
20210350582 | POINT CLOUD GLOBAL TETRIS PACKING - A method of mapping 3D point cloud data into 2D surfaces for further efficient temporal coding is described herein. Point cloud global tetris packing utilizes 3D surface patches to represent point clouds and performs temporally consistent global mapping of 3D patch surface data into 2D canvas images. | 2021-11-11 |
20210350583 | Methods and Devices for Binary Entropy Coding of Point Clouds - Methods and devices for encoding a point cloud. A bit sequence signalling an occupancy pattern for sub-volumes of a volume is coded using binary entropy coding. For a given bit in the bit sequence, a context may be based on a sub-volume neighbour configuration for the sub-volume corresponding to that bit. The sub-volume neighbour configuration depends on an occupancy pattern of a group of sub-volumes of neighbouring volumes to the volume, the group of sub-volumes neighbouring the sub-volume corresponding to the given bit. The context may be further based on a partial sequence of previously-coded bits of the bit sequence. | 2021-11-11 |
20210350584 | METHOD AND SYSTEM FOR JOINT OPTIMIZATION OF ISP AND VISION TASKS, MEDIUM AND ELECTRONIC DEVICE - The present disclosure relates to a method and a system for joint optimization of an ISP and vision tasks, a medium and an electronic device, which belong to the field of image processing and can effectively avoid the over-fitting of joint optimization of the ISP and the vision tasks. The method for joint optimization of the ISP and the vision tasks includes the following steps: performing image signal processing on raw image dataset by an ISP to obtain processed image dataset; measuring probability gradient of the processed image dataset in prior distribution of traditional image dataset by a measurement module; and performing vision tasks on the processed image dataset by using a loss function with the probability gradient as a regularization term via a neural network. | 2021-11-11 |
20210350585 | LOW RANK MATRIX COMPRESSION - In an example, an apparatus comprises logic, at least partially including hardware logic, to implement a lossy compression algorithm which utilizes a data transform and quantization process to compress data in a convolutional neural network (CNN) layer. Other embodiments are also disclosed and claimed. | 2021-11-11 |
20210350586 | METHODS AND APPARATUSES FOR PERFORMING ARTIFICIAL INTELLIGENCE ENCODING AND ARTIFICIAL INTELLIGENCE DECODING ON IMAGE - Provided is an artificial intelligence (AI) decoding apparatus including: a memory storing one or more instructions; and a processor configured to execute the one or more instructions stored in the memory, wherein the processor is configured to: obtain AI data related to AI down-scaling an original image to a first image; obtain image data corresponding to an encoding result on the first image; obtain a second image corresponding to the first image by performing a decoding on the image data; obtain deep neural network (DNN) setting information among a plurality of DNN setting information from the AI data; and obtain, by an up-scaling DNN, a third image by performing the AI up-scaling on the second image, the up-scaling DNN being configured with the obtained DNN setting information, wherein the plurality of DNN setting information comprises a parameter used in the up-scaling DNN, the parameter being obtained through joint training of the up-scaling DNN and a down-scaling DNN, and wherein the down-scaling DNN is used to obtain the first image from the original image. | 2021-11-11 |
20210350587 | Device for Marking Image Data - A device including a display panel, a storage circuit, and a processing circuit is provided. The display panel of the device is utilized to display a first image data, and a region of interest can be circled on the first image data. The processing circuit of the device is utilized to receive at least one first mark content and a first region information of the region of interest, and connect the first region information of the region of interest to the at least one first mark content. In this manner, medical professionals can quickly make an initial diagnosis by means of the at least one first mark content of the first image data. | 2021-11-11 |
20210350588 | MECHANICAL FASTENING UNIT MANAGEMENT METHOD USING AUGMENTED REALITY - In an operation management system in the related art, it is necessary to add an identification mark such as an RFID tag to each component to be managed and prepare a fastening tool having an antenna. In contrast, a mechanical fastening unit management method is provided using an augmented reality space generated by superimposing a virtual space on a real space. In the augmented reality space where a real fastening unit (RBn) and a virtual fastening unit (IBn) are in a one-to-one correspondence, information that the real fastening unit is selected as a fastening target is acquired with a camera or the like, and analyzed by an augmented reality server connected to the camera. Accordingly, it is possible to easily record that a predetermined operation is progressing as scheduled, and to provide a high quality mechanical fastening unit management method or system that is low in operation cost. | 2021-11-11 |
20210350589 | SYSTEMS AND METHODS FOR OBTAINING OPINION DATA FROM INDIVIDUALS VIA A WEB WIDGET AND DISPLAYING A GRAPHIC VISUALIZATION OF AGGREGATED OPINION DATA WITH WAVEFORMS THAT MAY BE EMBEDDED INTO THE WEB WIDGET - Opinion data may be obtained from individuals via a web widget and a graphic visualization of aggregated opinion data may be displayed via the web widget. The web widget may be provided for presentation via one or more third-party webpages. The web widget may include an input portion and a graphic visualization portion. A first instance of the web widget may be presented via a first third-party webpage. Via the input portion, input from users may be received on a plurality of aspects of one or more topics. The input may convey users' opinions of the plurality of aspects. The input may be received responsive to the users manipulating the input portion of the web widget. The graphical visualization may be updated in real time to represent input received from a plurality of users from a plurality of third-party web sites, or anywhere where the web widget may be displayed. | 2021-11-11 |
20210350590 | METHOD AND DEVICE FOR IMAGING OF LENSLESS HYPERSPECTRAL IMAGE - Disclosed are a hyperspectral imaging method and an apparatus thereof. A method of reconstructing a hyperspectral image includes receiving an image photographed through a diffractive optical element and reconstructing a hyperspectral image of the received image based on the received image and information about a point spread function for each wavelength of the diffractive optical element. The diffractive optical element may generate an anisotropic shape of the point spread function that varies with a spectrum. | 2021-11-11 |
20210350591 | Inter-Frame Motion Correction in Whole-Body Direct Parametric Image Reconstruction - A method for parametric image reconstruction and motion correction using whole-body motion fields includes receiving a nuclear imaging data set including a set of dynamic frames and generating at least one of a whole-body forward motion field and/or a whole-body inverse motion field for at least one frame in the set of frames. An iterative loop is applied to update at least one parameter used in a direct parametric reconstruction and at least one parametric image is generated based on the at least one parameter updated by the iterative loop. The iterative loop includes calculating a frame emission image for the at least one frame, generating a motion-corrected frame emission image based on the at least one whole-body forward motion field or a whole-body inverse motion field, and updating at least one parameter by applying a fit to the motion-corrected frame emission image. | 2021-11-11 |
20210350592 | RECONSTRUCTION OF MR IMAGES BY MEANS OF WAVE-CAIPI - A method serves for MR-based reconstruction of images of a patient. Whether a value of a movement of the patient in at least one motion direction during an MR scan exceeds a respective threshold value is monitored. If this is not the case, an image reconstruction is performed by a Wave-CAIPI method on the basis of identical calibrated PSF subfunctions for all k-space lines. When this is the case, a number of bins are provided that correspond to sequential value ranges of the patient movement in at least one motion direction, the k-space lines are assigned to the bins based on a movement value determined during their respective acquisition, a calibration of PSF subfunctions is performed for at least two bins on the basis of the k-space lines assigned to said bins, and an image reconstruction is performed by a Wave-CAIPI method in such a way that the PSF subfunctions associated with the assigned bins are used for the respective k-space lines. | 2021-11-11 |
20210350593 | EFFICIENT MOTION-COMPENSATION IN CONE BEAM COMPUTED TOMOGRAPHY BASED ON DATA CONSISTENCY - An image processing system (IPS), comprising an input interface (IN) for receiving a projection image from a plurality of projection images of a movable object (PAT) acquired along different directions by an imaging apparatus (XA), the projection images defined in a projection domain spanned by a radiation sensitive surface of the detector (D). The system includes a motion checker (MC) configured to operate in the projection domain to decide whether the projection image is corrupted by motion of the object during acquisition. | 2021-11-11 |
20210350594 | DIGITAL PROCESSING SYSTEMS AND METHODS FOR CUSTOMIZED CHART GENERATION BASED ON TABLE DATA SELECTION IN COLLABORATIVE WORK SYSTEMS - Systems, methods, and computer-readable media for customizing chart generation based on table data selection are disclosed. The systems and methods may involve at least one processor that is configured to maintain at least one table containing rows, receive a first selection of at least one cell in the at least one table, generate a graphical representation associated with the first selection of at least one other cell, generate a first selection-dependent link between the at least one table and the graphical representation, receive a second selection of at least one cell in the at least one table, alter the graphical representation based on the second selection, and generate a second selection-dependent link between the at least one table and the graphical representation. | 2021-11-11 |
20210350595 | METHODS AND APPARATUS FOR GENERATING POINT CLOUD HISTOGRAMS - The techniques described herein relate to methods, apparatus, and computer readable media configured to generate point cloud histograms. A one-dimensional histogram can be generated by determining a distance to a reference for each 3D point of a 3D point cloud and adding, for each histogram entry, the distances that fall within the entry's range of distances. A two-dimensional histogram can be determined by generating a set of orientations by determining, for each 3D point, an orientation with at least a first value for a first component and a second value for a second component. A two-dimensional histogram can be generated based on the set of orientations. Each bin can be associated with ranges of values for the first and second components. Orientations can be added for each bin that have first and second values within the first and second ranges of values, respectively, of the bin. | 2021-11-11 |
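The one-dimensional histogram step in 20210350595 amounts to binning per-point distances; a minimal NumPy sketch, with all names and the bin layout assumed for illustration:

```python
import numpy as np

def distance_histogram(points, reference, n_bins, max_dist):
    """1-D histogram: each entry counts the 3D points whose distance to the
    reference falls within that entry's range of distances."""
    d = np.linalg.norm(points - reference, axis=1)
    return np.histogram(d, bins=n_bins, range=(0.0, max_dist))

# Four points on the unit sphere are all at distance 1 from the origin,
# so they all land in the same histogram entry.
pts = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0], [-1.0, 0, 0]])
counts, edges = distance_histogram(pts, np.zeros(3), n_bins=4, max_dist=2.0)
```

The two-dimensional variant described in the abstract would bin pairs of orientation components with `np.histogram2d` in the same way.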
20210350596 | AUGMENTED REALITY DIAGNOSTIC TOOL FOR DATA CENTER NODES - An augmented reality (AR) diagnostic tool embodied as a software application on a portable device employs AR infrastructure to enable a user to locate a failed/malfunctioning node of a cluster and, with minimal interaction, diagnose causes and provide recommendations to repair the node. The portable device may be a computer embodied as visualization technology and configured to execute the software application. Once installed, the AR diagnostic (ARD) tool is ready for use by the user, e.g., a customer service technician, to locate and repair one or more failed cluster nodes. In response to a failure/malfunction, the cluster node sends diagnostic and configuration information (i.e., failure/malfunction information) of the failed node to an analytics service. The failure information informs the technician of the cluster failure. The technician may then activate the ARD tool and AR infrastructure to locate and repair the failed node. | 2021-11-11 |
20210350597 | PREDICTIVE VIEWPORT RENDERER AND FOVEATED COLOR COMPRESSOR - An embodiment of a graphics apparatus may include a focus identifier to identify a focus area, and a color compressor to selectively compress color data based on the identified focus area. Another embodiment of a graphics apparatus may include a motion detector to detect motion of a real object, a motion predictor to predict a motion of the real object, and an object placer to place a virtual object relative to the real object based on the predicted motion of the real object. Another embodiment of a graphics apparatus may include a frame divider to divide a frame into viewports, a viewport prioritizer to prioritize the viewports, a renderer to render a viewport of the frame in order in accordance with the viewport priorities, and a viewport transmitter to transmit a completed rendered viewport. Other embodiments are disclosed and claimed. | 2021-11-11 |
20210350598 | SYSTEM AND METHOD FOR DIGITAL MARKUPS OF CUSTOM PRODUCTS - Techniques for generating and using digital markups on digital images are presented. In an embodiment, a method comprises receiving, at an electronic device, a digital layout image that represents a form of a product for manufacturing a reference product; generating a digital markup layout by overlaying the digital markup image over the digital layout image; based on the digital markup layout, generating one or more manufacturing files comprising digital data for manufacturing the reference product; receiving a digital reference image of the reference product manufactured based on the one or more manufacturing files; identifying one or more found markup regions in the digital reference image; based on the found markup regions, generating a geometry map and an interactive asset image; based on, at least in part, the geometry map, generating a customized product image by applying a user pattern to the interactive asset image. | 2021-11-11 |
20210350599 | SEAMLESS REPRESENTATION OF VIDEO AND GEOMETRY - Processes for reviewing and editing a computer-generated animation are provided. In one example process, multiple images representing segments of a computer-generated animation may be displayed. In response to a selection of one or more of the images, geometry data associated with the corresponding segment(s) of computer-generated animation may be accessed. An editable geometric representation of the selected segment(s) of computer-generated animation may be displayed based on the accessed geometry data. In some examples, previously rendered representations and/or geometric representations of the same or other segments of the computer-generated animation may be concurrently displayed adjacent to, overlaid with, or in any other desired manner with the displayed geometric representation of the selected segment(s) of computer-generated animation. | 2021-11-11 |
20210350600 | ANIMATION FILE PROCESSING METHOD AND APPARATUS, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER DEVICE - An animation file processing method and apparatus, a computer-readable storage medium, and a computer device are provided. The method includes: obtaining a bitmap image sequence corresponding to an original animation file; encoding a differential pixel region between a bitmap image in the bitmap image sequence and a corresponding key bitmap image when the bitmap image is a non-key bitmap image, to obtain an encoded picture corresponding to the bitmap image; and generating an animation export file corresponding to the original animation file according to encoded pictures corresponding to bitmap images in the bitmap image sequence. | 2021-11-11 |
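The differential-region encoding in 20210350600 can be sketched as finding the bounding box of pixels that differ from the key bitmap image and encoding only that crop (the names and the single-bounding-box strategy are assumptions for illustration; the filing does not specify this exact shape):

```python
import numpy as np

def diff_region(key_frame, frame):
    """Return the bounding box of pixels differing from the key frame and the
    cropped differential region, or None if the frames are identical."""
    changed = np.any(frame != key_frame, axis=-1)  # per-pixel "differs" mask
    if not changed.any():
        return None  # identical frame: nothing to encode
    ys, xs = np.nonzero(changed)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    # Only the differential region is encoded; the rest is recovered from the key frame.
    return (y0, y1, x0, x1), frame[y0:y1, x0:x1]

key = np.zeros((4, 4, 3), dtype=np.uint8)
cur = key.copy()
cur[1, 2] = [255, 0, 0]  # one changed pixel
box, patch = diff_region(key, cur)
```

Encoding only the crop (e.g., as a small picture) is what keeps non-key frames cheap in the exported animation file.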
20210350601 | ANIMATION RENDERING METHOD AND APPARATUS, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER DEVICE - An animation rendering method is provided. The method includes: obtaining an animation file in a target format; determining, in response to determining that the animation file is decoded, an animation drawing data interval meeting a stationary condition from animation drawing data obtained through decoding; caching initial animation drawing data in the animation drawing data interval; reading, in response to determining that animation drawing data corresponding to a to-be-played frame meets the stationary condition in a playback process of the animation file, the cached initial animation drawing data corresponding to the to-be-played frame; and performing animation rendering according to the read initial animation drawing data. | 2021-11-11 |
20210350602 | DYNAMIC VISION SENSOR FOR VISUAL AUDIO PROCESSING - To track certain difficult facial features during speech, such as the corners of the mouth and the teeth, a camera sensor system generates RGB/IR images and the system also uses light intensity change signals from an event driven sensor (EDS), as well as voice analysis. In this way, the camera sensor system enables improved performance tracking (equivalent to using a very high-speed camera) at lower bandwidth and power consumption. | 2021-11-11 |
20210350603 | SYSTEMS, METHODS, AND DEVICES FOR CREATING A SPLINE-BASED VIDEO ANIMATION SEQUENCE - A spline-based animation process creates an animation sequence. The process receives a plurality of frames that illustrate a figure based on a design template (e.g., which includes a skeleton having segments). The process further identifies a spine segment, generates hip, shoulder, and head segments at respective positions relative to the spine segment, identifies limb and facial feature segments, and converts the segments into respective splines bound between endpoints. The process further determines changes between frames for respective splines and animates movement of the figure over a sequence of frames based on the changes. | 2021-11-11 |
20210350604 | AUDIOVISUAL PRESENCE TRANSITIONS IN A COLLABORATIVE REALITY ENVIRONMENT - Examples of systems and methods to facilitate audiovisual presence transitions of virtual objects such as virtual avatars in a mixed reality collaborative environment are disclosed. The systems and methods may be configured to produce different audiovisual presence transitions such as appearance, disappearance and reappearance of the virtual avatars. The virtual avatar audiovisual transitions may be further indicated by various visual and sound effects of the virtual avatars. The transitions may occur based on various colocation or decolocation scenarios. | 2021-11-11 |
20210350605 | ANIMATION DATA ENCODING/DECODING METHOD AND APPARATUS, STORAGE MEDIUM, AND COMPUTER DEVICE - An animation data encoding method includes obtaining animation data corresponding to an animation tag code from an animation project file. In response to a determination that attribute types exist in an attribute structure table corresponding to the animation tag code, the method further includes determining attribute flag information corresponding to each attribute in the attribute structure table. The method also includes encoding, by processing circuitry of a computer device, an attribute value corresponding to each attribute in the animation data according to the attribute flag information, to obtain attribute content corresponding to each attribute. The method further includes, for each attribute, sequentially storing the corresponding attribute flag information and the corresponding attribute content according to an attribute order of the attribute structure table, to obtain a dynamic attribute data block corresponding to the animation tag code. | 2021-11-11 |
20210350606 | METHOD FOR EFFICIENT CONSTRUCTION OF HIGH RESOLUTION DISPLAY BUFFERS - Graphics processing systems and methods are disclosed which may minimize invocations to a pixel shader in order to improve efficiency in a rendering pipeline. In implementations of the present disclosure, a plurality of samples within a pixel may be covered by a primitive. The plurality of samples may include one or more color samples and a plurality of depth samples. The nature of the samples which were covered by the primitive may be taken into account before invoking a pixel shader to perform shading computations on the pixel. In implementations of the present disclosure, if at least one sample is covered by a primitive, but none of the samples are color samples, an invocation to a pixel shader may be avoided. | 2021-11-11 |
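The coverage test in 20210350606 — skip the pixel shader when the samples covered by a primitive are depth-only — reduces to a set-membership check; a hypothetical sketch (names are illustrative, not from the filing):

```python
def should_invoke_pixel_shader(covered_samples, color_sample_ids):
    """Invoke the pixel shader only if at least one covered sample is a
    color sample; depth-only coverage skips the shading computation."""
    return any(s in color_sample_ids for s in covered_samples)

# Pixel with one color sample (id 0) and three depth-only samples (ids 1-3).
color_ids = {0}
needs_shading = should_invoke_pixel_shader({0, 2}, color_ids)  # color sample covered
depth_only = should_invoke_pixel_shader({1, 3}, color_ids)     # invocation avoided
```

Avoiding the invocation for depth-only coverage is what saves shading work while still recording correct depth for the extra samples.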
20210350607 | TECHNIQUES FOR RAY CONE TRACING AND TEXTURE FILTERING - One embodiment of a method for computing a texture color includes tracing a ray cone through a graphics scene, determining a curvature of a first surface within the graphics scene at a point where the ray cone hits the first surface based on differential barycentric coordinates associated with the point, determining, based on the curvature of the first surface, a width of the ray cone at a subsequent point where the ray cone hits a second surface within the graphics scene, and computing a texture color based on the width of the ray cone. | 2021-11-11 |
20210350608 | TECHNIQUES FOR ANISOTROPIC TEXTURE FILTERING USING RAY CONES - One embodiment of a method for computing a texture color includes tracing a ray cone through a graphics scene, determining at least one axis of an ellipse formed by the ray cone intersecting a plane associated with geometry within the graphics scene at a hit point, computing one or more gradients along the at least one axis of the ellipse, and computing a texture color based on the one or more gradients and a texture. | 2021-11-11 |
20210350609 | APPARATUS AND METHOD FOR HIERARCHICAL BEAM TRACING AND PACKET COMPRESSION IN A RAY TRACING SYSTEM - An apparatus and method for compressing ray tracing data prior to transmission between nodes. For example, one embodiment of an apparatus comprises: a first node comprising a first ray tracing engine, the first node communicatively coupled to a second node comprising a second ray tracing engine; first compression circuitry coupled to the first ray tracing engine, the first compression circuitry to perform compression on ray tracing data of the first ray tracing engine to produce a first compressed stream of ray tracing data; and interface circuitry to transmit the first compressed stream of ray tracing data from the first node to the second node. | 2021-11-11 |
20210350610 | IMAGE DATA PROCESSING METHOD AND APPARATUS - A medical image processing apparatus comprises a buffer; and processing circuitry configured to: obtain a volumetric image data set; determine, from the volumetric image data set, a plurality of intervals along a path through the volumetric image data set, each interval having a respective depth from a reference position of the path; for each of the plurality of intervals, determine respective parameter values of a respective continuous function representative of a transparency of the interval; store the parameter values for the continuous functions to the buffer; and generate a rendered image using the stored parameter values for the continuous functions. | 2021-11-11 |
20210350611 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - The present disclosure relates to an information processing apparatus, an information processing method, and a program that make it possible to ensure visibility when a virtual object is shielded. An information processing apparatus is provided that includes a display control unit that controls a display so as to display a virtual object by using a first display parameter in a first state where it is determined that the virtual object displayed by the display is hidden by at least one real object as viewed from a first user of the display and display the virtual object by using a second display parameter different from the first display parameter in a second state where it is determined that the virtual object is hidden by real objects more than that in the first state as viewed from the first user. It is possible to apply the present disclosure to, for example, a device included in an augmented reality system. | 2021-11-11 |
20210350612 | CONNECTING SPATIAL ANCHORS FOR AUGMENTED REALITY - One example provides a computing device configured to capture, via the camera, first image data imaging a first physical world location, create a first spatial representation of the first physical world location based on the first image data, receive a user input defining a pose of a first virtual spatial anchor point relative to a feature imaged in the first image data, track user movement to a second physical world location, capture second image data imaging the second physical world location, receive a user input defining a pose of a second virtual spatial anchor point relative to a feature imaged in the second image data, and send, to a remote computing device, data representing the first spatial representation, the pose of first spatial anchor point, the second spatial representation, the pose of second spatial anchor point, and a positional relationship between first and second spatial anchor points. | 2021-11-11 |
20210350613 | SYSTEM FOR ACTIVE-FOCUS PREDICTION IN 360 VIDEO - Aspects of the subject disclosure may include, for example, predicting a field of view of a viewer to obtain a predicted field of view based on information about the viewer and a scoring of a point of interest in media content. A line of sight is obtained between the viewer and a presentation of the media content to obtain a viewer line of sight, and the scoring of the point of interest in the media content is updated to obtain an updated scoring based on the viewer line of sight, the predicted field of view being updated according to the updated scoring. Other embodiments are disclosed. | 2021-11-11 |
20210350614 | SPATIALLY-RESOLVED DYNAMIC DIMMING FOR AUGMENTED REALITY DEVICE - Techniques are described for operating an optical system. In some embodiments, light associated with a world object is received at the optical system. Virtual image light is projected onto an eyepiece of the optical system. A portion of a system field of view of the optical system to be at least partially dimmed is determined based on information detected by the optical system. A plurality of spatially-resolved dimming values for the portion of the system field of view may be determined based on the detected information. The detected information may include light information, gaze information, and/or image information. A dimmer of the optical system may be adjusted to reduce an intensity of light associated with the world object in the portion of the system field of view according to the plurality of dimming values. | 2021-11-11 |
20210350615 | METHODS AND APPARATUS FOR EXTRACTING PROFILES FROM THREE-DIMENSIONAL IMAGES - The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud. A 3D region of interest is determined that includes a width along a first axis, a height along a second axis, and a depth along a third axis. The 3D points within the 3D region of interest are represented as a set of 2D points based on coordinate values of the first and second axes. The 2D points are grouped into a plurality of 2D bins arranged along the first axis. For each 2D bin, a representative 2D position is determined based on the associated set of 2D points. Each of the representative 2D positions are connected to neighboring representative 2D positions to generate the 2D profile. | 2021-11-11 |
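A minimal sketch of the binning step in 20210350615, assuming the region of interest is already axis-aligned so the first two coordinates of each 3D point give its 2D projection, and using the bin mean as the representative position (all names and those two choices are illustrative):

```python
import numpy as np

def profile_2d(points_3d, n_bins=4):
    """Project 3-D points to 2-D, group them into bins along the first axis,
    and take each non-empty bin's mean as its representative 2-D position."""
    pts = points_3d[:, :2]  # drop the depth (third-axis) coordinate
    edges = np.linspace(pts[:, 0].min(), pts[:, 0].max(), n_bins + 1)
    profile = []
    for i in range(n_bins):
        hi = edges[i + 1] + (1e-9 if i == n_bins - 1 else 0.0)  # close the last bin
        mask = (pts[:, 0] >= edges[i]) & (pts[:, 0] < hi)
        if mask.any():
            profile.append(pts[mask].mean(axis=0))
    # Connecting consecutive representative positions yields the 2-D profile.
    return np.array(profile)

# Points along the line y = x: every representative position should satisfy y == x.
pts = np.array([[x, x, 0.0] for x in np.linspace(0.0, 1.0, 8)])
profile = profile_2d(pts, n_bins=4)
```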
20210350616 | SYSTEM AND METHOD FOR ESTIMATING DEPTH UNCERTAINTY FOR SELF-SUPERVISED 3D RECONSTRUCTION - A method is presented. The method includes estimating an ego-motion of an agent based on a current image from a sequence of images and at least one previous image from the sequence of images. Each image in the sequence of images may be a two-dimensional (2D) image. The method also includes estimating a depth of the current image based on the at least one previous image. The estimated depth accounts for a depth uncertainty measurement in the current image and the at least one previous image. The method further includes generating a three-dimensional (3D) reconstruction of the current image based on the estimated ego-motion and the estimated depth. The method still further includes controlling an action of the agent based on the three-dimensional reconstruction. | 2021-11-11 |
20210350617 | GENERATING AND VALIDATING A VIRTUAL 3D REPRESENTATION OF A REAL-WORLD STRUCTURE - A computer system maintains structure data indicating geometrical constraints for each structure category of a plurality of structure categories. The computer system generates a virtual 3D representation of a structure based on a set of images depicting the structure. For each image in the set of images, one or more landmarks are identified. Based on the landmarks, a candidate structure category is selected. The virtual 3D representation is generated based on the geometrical constraints of the candidate structure category and the landmarks identified in the set of images. | 2021-11-11 |
20210350618 | SYSTEM AND METHODS FOR IMPROVED AERIAL MAPPING WITH AERIAL VEHICLES - A method for image generation, preferably including: generating a set of mission parameters for a UAV mission of the UAV associated with aerial scanning of a region of interest; controlling the UAV to perform the mission; generating an image subassembly corresponding to the mission; and/or rendering the image subassembly at a display. | 2021-11-11 |
20210350619 | GENERATING CLOTHING PATTERNS OF GARMENT USING BOUNDING VOLUMES OF BODY PARTS - A method and a device for displaying clothing patterns determine attributes of bounding volumes for each body part, in which a body type and an orientation of a 3D avatar are reflected, based on locations of feature points extracted from data of the 3D avatar, determine initial locations of the clothing patterns by placing arrangement points on the bounding volumes depending on the attributes of the bounding volumes for each body part, and drape the clothing patterns to the 3D avatar depending on the initial locations. | 2021-11-11 |
20210350620 | GENERATIVE GEOMETRIC NEURAL NETWORKS FOR 3D SHAPE MODELLING - A method for generating output geometric domain data is disclosed. The geometric decoder method comprises receiving an input comprising at least an input representation and decoding the input to generate an output geometric domain by applying on the input representation at least an intrinsic convolution layer, wherein the intrinsic convolutional layer comprises a consistent local ordering of data points on the geometric domain. | 2021-11-11 |
20210350621 | FAST AND DEEP FACIAL DEFORMATIONS - According to at least one embodiment, a method for generating a mesh deformation of a facial model includes: generating a first plurality of deformation maps by applying a first plurality of neural network-trained models; extracting a first plurality of vertex offsets based on the first plurality of deformation maps; and applying the first plurality of vertex offsets to a neutral mesh of the facial model to generate the mesh deformation of the facial model. | 2021-11-11 |
20210350622 | METHOD AND APPARATUS FOR RECOGNIZING BEHAVIOR AND PROVIDING INFORMATION - A method and apparatus may capture a video of a crowd of people near a first person, transmit the video of the crowd to a second device, and receive, from the second device, an indication that a second person in the crowd appears to be at least one of (a) interested in meeting the first person, and (b) a threat to the first person. A display on the frame may display the received indication to the first person. | 2021-11-11 |
20210350623 | 3D OBJECT RENDERING USING DETECTED FEATURES - An augmented reality display system is configured to use fiducial markers to align 3D content with real objects. The augmented reality display system can optionally include a depth sensor configured to detect a location of a real object. The augmented reality display system can also include a light source configured to illuminate at least a portion of the object with invisible light, and a light sensor configured to form an image using reflected portion of the invisible light. Processing circuitry of the display system can identify a location marker based on the difference between the emitted light and the reflected light and determine an orientation of the real object based on the location of the real object and a location of the location marker. | 2021-11-11 |
20210350624 | SYSTEMS AND METHODS OF CONTROLLING AN OPERATING ROOM DISPLAY USING AN AUGMENTED REALITY HEADSET - Augmented reality (AR) systems and methods involve an interactive head-mounted device (HMD), an external display, and a medical image computer, which is in communication with the HMD and the external display. The external display displays one or more planes of a medical image or a 3D model provided by the medical image computer. A user wearing the HMD may manipulate a medical image or 3D model displayed on the external display by focusing the user's gaze on a control object and/or a portion of a medical image or 3D model displayed on a display of the interactive HMD. | 2021-11-11 |
20210350625 | AUGMENTING LIVE IMAGES OF A SCENE FOR OCCLUSION - An example image processing system augments live images of a scene to reduce or eliminate occlusion of an object of interest. The image processing system can detect an occlusion of an object in a live image of the scene. The image processing system can further access a data store that stores a three-dimensional representation of the scene with the object being present. The image processing system augments the live image to depict the object without at least a portion of the occlusion, using data provided with the three-dimensional representation of the scene. | 2021-11-11 |
20210350626 | METHOD AND SYSTEM FOR AUGMENTED REALITY VISUALISATION - A method for visualizing an image combining an image (Ic) of a real object ( | 2021-11-11 |
20210350627 | EYEWEAR DISPLAY SYSTEM - Provided is an eyewear display system including: a scanner including a measuring unit configured to acquire three-dimensional coordinates, a point cloud data acquiring unit configured to acquire point cloud data, and a communication unit; an eyewear device including a display, a relative position detection sensor, a relative direction detection sensor, and a communication unit; a storage device including CAD design data of an observation site; and a data processing device configured to synchronize a coordinate space of the scanner, a coordinate space of the eyewear device, and a coordinate space of the CAD design data, and convert observation data OD and/or observation data prediction PD into data in the coordinate space of the CAD design data, such that the eyewear device displays the observation data OD or observation data prediction PD on the display by superimposing the observation data OD or observation data prediction PD on an actual landscape. | 2021-11-11 |
20210350628 | PROGRAM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING TERMINAL - There is provided a program for causing an information processing terminal to execute a step of acquiring position information indicating a position of the information processing terminal, a step of transmitting the position information to an information processing apparatus, a step of receiving display data related to one or more products which is associated with the position information from the information processing apparatus, a step of acquiring a first region which satisfies a predetermined condition related to safety of a user, the first region being a region in an image taken by an image pickup unit, and a step of outputting the display data related to the one or more products to the acquired first region, either in the image or in a real space corresponding to the image. | 2021-11-11 |
20210350629 | VISUAL LOCALISATION - In an embodiment of the invention there is provided a method of visual localization, comprising: generating a plurality of virtual views, wherein each of the virtual views is associated with a location; obtaining a query image; determining the location where the query image was obtained on the basis of a comparison of the query image with said virtual views. | 2021-11-11 |
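The comparison step in 20210350629 can be illustrated with a minimal nearest-view search: each virtual view carries the location it was generated for, and the query image is assigned the location of the most similar view. This is only a sketch of the idea; the sum-of-absolute-differences similarity and the flat pixel-list representation below are illustrative assumptions, not details from the application.

```python
def localise(query, virtual_views):
    """Return the location paired with the virtual view most similar to `query`.

    virtual_views: list of (view_pixels, location) tuples, where view_pixels
    is a flat list of intensities the same length as `query`.
    """
    def sad(a, b):
        # sum of absolute differences: smaller means more similar
        return sum(abs(x - y) for x, y in zip(a, b))

    _, best_location = min(virtual_views, key=lambda view: sad(query, view[0]))
    return best_location

# Two toy virtual views with their locations; the query matches the brighter one.
views = [([10, 10, 10], "hall"), ([200, 190, 210], "atrium")]
print(localise([205, 195, 205], views))  # -> atrium
```

A real system would use learned descriptors or local-feature matching rather than raw pixel differences, but the location-by-best-match structure is the same.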
20210350630 | OPTIMIZING HEAD MOUNTED DISPLAYS FOR AUGMENTED REALITY - While many augmented reality systems provide “see-through” transparent or translucent displays upon which to project virtual objects, many virtual reality systems instead employ opaque, enclosed screens. Indeed, eliminating the user's perception of the real-world may be integral to some successful virtual reality experiences. Thus, head mounted displays designed exclusively for virtual reality experiences may not be easily repurposed to capture significant portions of the augmented reality market. Various of the disclosed embodiments facilitate the repurposing of a virtual reality device for augmented reality use. Particularly, by anticipating user head motion, embodiments may facilitate scene renderings better aligned with user expectations than naïve renderings generated within the enclosed field of view. In some embodiments, the system may use procedural mapping methods to generate a virtual model of the environment. The system may then use this model to supplement the anticipatory rendering. | 2021-11-11 |
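The "anticipating user head motion" step in 20210350630 can be approximated, in its simplest form, by constant-velocity extrapolation from the last two pose samples. The function below is a minimal sketch under that assumption; the abstract does not specify the prediction model, and the pose representation here is illustrative.

```python
def predict_pose(prev_pose, curr_pose, dt_between, dt_ahead):
    """Constant-velocity extrapolation of a head pose.

    prev_pose/curr_pose: equal-length lists of pose components (e.g. yaw,
    pitch in degrees) sampled dt_between seconds apart; returns the pose
    expected dt_ahead seconds after curr_pose.
    """
    return [c + (c - p) * dt_ahead / dt_between
            for p, c in zip(prev_pose, curr_pose)]

# A head turning 2 degrees of yaw per sample is predicted to keep turning.
print(predict_pose([0.0, 0.0], [2.0, 0.0], 0.01, 0.01))  # -> [4.0, 0.0]
```

Rendering against the predicted pose rather than the last measured one is what lets the scene stay aligned with where the user's head will be when the frame is displayed.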
20210350631 | WEARABLE AUGMENTED REALITY DEVICES WITH OBJECT DETECTION AND TRACKING - The technology disclosed can provide capabilities to view and/or interact with the real world to the user of a wearable (or portable) device using a sensor configured to capture motion and/or determining the path of an object based on imaging, acoustic or vibrational waves. Implementations can enable improved user experience, greater safety, greater functionality to users of virtual reality for machine control and/or machine communications applications using wearable (or portable) devices, e.g., head mounted devices (HMDs), wearable goggles, watch computers, smartphones, and so forth, or mobile devices, e.g., autonomous and semi-autonomous robots, factory floor material handling systems, autonomous mass-transit vehicles, automobiles (human or machine driven), and so forth, equipped with suitable sensors and processors employing optical, audio or vibrational detection. | 2021-11-11 |
20210350632 | THREE-DIMENSIONAL FOLDING TOOL - Embodiments of the present invention are directed to facilitating folding of virtual objects via a three-dimensional folding tool. In embodiments, a foldable virtual object having a set of one or more fold lines is presented. A user may select a particular fold line. Based on the user selection, a three-dimensional folding tool is presented in association with the selected fold line. The three-dimensional folding tool can include a first handle on a first panel adjacent to the selected fold line and a second handle on a second panel adjacent to the selected fold line. In accordance with detecting movement of the first handle in a direction, the first panel is folded or rotated about the fold line in the direction of the movement of the first handle, while the position of the second panel is maintained. | 2021-11-11 |
20210350633 | AUGMENTED REALITY SYSTEM AND ANCHOR DISPLAY METHOD THEREOF - An augmented reality system and an anchor display method thereof are provided. An environmental image is captured by an image capturing device disposed on a head-mounted device. A reference image block in the environmental image that matches a display image on a display is detected by performing feature matching between the environmental image and the display image. Position information of the reference image block in the environmental image is obtained. Depth information of the display is obtained according to an actual screen size of the display and a block size of the reference image block in the environmental image. At least one virtual object is displayed by the head-mounted device according to the position information and the depth information. The at least one virtual object is displayed as being anchored to at least one screen bezel of the display. | 2021-11-11 |
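The depth step in 20210350633 follows from the pinhole projection relation: an object of known physical width appears with a pixel width inversely proportional to its distance. A minimal sketch, assuming a calibrated focal length in pixels (a quantity the abstract does not state):

```python
def display_depth(focal_length_px, actual_screen_width_m, block_width_px):
    """Estimate the distance to a display of known physical width.

    Pinhole model: block_width_px = focal_length_px * actual_width / depth,
    rearranged for depth. Units: metres in, metres out.
    """
    return focal_length_px * actual_screen_width_m / block_width_px

# A 0.6 m wide display imaged 400 px wide by a 1000 px focal-length camera
print(display_depth(1000, 0.6, 400))  # -> 1.5 (metres)
```

With this depth and the reference block's position in the environmental image, the virtual object can be rendered so it appears anchored at the display's bezel.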
20210350634 | GENERATING PHOTOREALISTIC VIEWABLE IMAGES USING AUGMENTED REALITY TECHNIQUES - Methods, systems, computer-readable media, and apparatuses are presented for generating a photorealistic viewable model using augmented reality (AR). An AR scene is generated by overlaying a virtual object onto a view of a physical environment. When placed into the AR scene, the virtual object can interact with the physical environment by, for example, reflecting or taking on colors, shadows, brightness, and other attributes of the physical environment. To generate the viewable model, the virtual object is manipulated (e.g., moved or rotated) within the AR scene and a plurality of images are generated by capturing the virtual object as the virtual object is being manipulated. The viewable model can be generated based on one or more of the images and can be output in the form of an interactive presentation, for example, a spin image. | 2021-11-11 |
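The capture loop in 20210350634 amounts to sampling the AR composite at evenly spaced rotation steps of the virtual object. A minimal sketch, where the `render_frame` callback stands in for rotating the object in the scene and grabbing the composited image (its signature is an assumption, not from the application):

```python
def capture_spin(render_frame, steps):
    """Capture one frame per rotation step of a full 360-degree spin.

    render_frame: callback taking an angle in degrees and returning a frame.
    Returns the list of frames making up the spin-image presentation.
    """
    return [render_frame(i * 360.0 / steps) for i in range(steps)]

# With an identity renderer the captured "frames" are just the angles sampled.
angles_captured = capture_spin(lambda angle: angle, 4)
print(angles_captured)  # -> [0.0, 90.0, 180.0, 270.0]
```

Because each frame is rendered inside the AR scene, every captured view already carries the environment's lighting and reflections, which is what makes the resulting spin image photorealistic.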
20210350635 | DETERMINING VEHICLE SERVICE TIMEFRAMES BASED ON VEHICLE DATA - A device may receive vehicle data from a vehicle telematics device or a client device. The vehicle data may include information relating to a vehicle, a vehicle component, and a sensor associated with the vehicle. The device may determine a vehicle profile, and one or more of a driving behavior and a driving location based on the vehicle data. The vehicle profile may include information relating to a condition of the vehicle component. The device may determine a wear rate for the vehicle component based on the vehicle profile, and one or more of the driving behavior or the driving location. The device may determine a service timeframe for the vehicle component based on the wear rate, the condition of the vehicle component, and a wear threshold. The device may generate a recommendation based on the service timeframe, and transmit the recommendation to the client device. | 2021-11-11 |
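The timeframe determination in 20210350635 reduces to simple arithmetic once the wear rate is known: the remaining wear budget divided by the rate at which wear accrues. A minimal sketch; the wear-fraction units and the monthly rate are illustrative assumptions.

```python
def service_timeframe(current_wear, wear_threshold, wear_rate):
    """Time until a component reaches its wear threshold.

    current_wear, wear_threshold: wear fractions in [0, 1];
    wear_rate: wear fraction accrued per month under the observed driving
    behavior and driving location.
    """
    return (wear_threshold - current_wear) / wear_rate

# Brake pads at 20% wear, serviced at 80%, wearing 5% per month
print(service_timeframe(0.2, 0.8, 0.05))  # about 12 months
```

A recommendation generated from this value would then be transmitted to the client device, as the abstract describes.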
20210350636 | DEEP LEARNING OF FAULT DETECTION IN ONBOARD AUTOMOBILE SYSTEMS - Methods and systems for vehicle fault detection include collecting operational data from sensors in a vehicle. The sensors are associated with vehicle sub-systems. The operational data is processed with a neural network to generate a fault score, which represents a similarity to fault state training scenarios, and an anomaly score, which represents a dissimilarity to normal state training scenarios. The fault score is determined to be above a fault score threshold and the anomaly score is determined to be above an anomaly score threshold to detect a fault. A corrective action is performed responsive to the fault, based on a sub-system associated with the fault. | 2021-11-11 |
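The detection rule in 20210350636 combines both scores: a fault is flagged only when the reading resembles known fault states *and* departs from known normal states. A minimal sketch of that dual-threshold check; the threshold values are illustrative assumptions, not from the application.

```python
def detect_fault(fault_score, anomaly_score,
                 fault_threshold=0.7, anomaly_threshold=0.5):
    """Flag a fault only when both neural-network scores exceed their thresholds.

    fault_score: similarity to fault-state training scenarios.
    anomaly_score: dissimilarity to normal-state training scenarios.
    """
    return fault_score > fault_threshold and anomaly_score > anomaly_threshold

print(detect_fault(0.9, 0.6))  # resembles faults AND unlike normal -> True
print(detect_fault(0.9, 0.4))  # still resembles normal operation  -> False
```

Requiring both conditions guards against false alarms: a reading that merely looks unusual, or merely looks somewhat fault-like while still matching normal behavior, does not trigger the corrective action.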
20210350637 | DEVICE AND SOFTWARE ENABLING VEHICLE-TO-EVERYTHING CAPABILITIES IN ON-BOARD DIAGNOSTICS BASED VEHICLES AND A PLATFORM ACCESSING AND MANAGING DATA OF NEARBY VEHICLE-TO-EVERYTHING CAPABLE VEHICLES - A device and the software it contains allow a broad range of vehicles to connect to one another and transfer vehicle data between each of the vehicles, regardless of the make and manufacturer of the vehicles. The device comprises computing devices, software and software controls, modules and sensors such as a Wireless Vehicle-to-Infrastructure (C-V2X) Module, an Inertial Measurement Unit, a Global Navigation Satellite System (GNSS)/Global Positioning System (GPS) Module, a Memory Storage Unit, a wireless communication module, an Inertial Module Unit, a wireless triangulation module, and a plurality of antennas and a separate OBD-II port adapter. The graphical user interface of the device displays information from other vehicles collected through the device and accessed through the device's available software Application Programming Interface (API). | 2021-11-11 |
20210350638 | TRAILER DIAGNOSTIC AND MONITORING SYSTEM - A trailer diagnostic and monitoring system includes a system controller having a controller body, and a power unit for engaging a wiring harness of the trailer. A location identification unit and wireless communication device are positioned within the controller body, and a plurality of hub sensor assemblies are positioned along the axle hubs of the trailer. Each of the assemblies includes functionality for monitoring and reporting the temperature of a respective axle hub, and the pressure of the trailer tire secured to the respective axle hub. A process-specific serial forwarder device communicatively links each of the hub sensor assemblies to the system controller. A trailer diagnostic and monitoring application for execution on a remote computing device includes functionality for communicating with the wireless communication device to receive trailer location information, tire pressure readings and axle hub temperatures. | 2021-11-11 |
20210350639 | SYSTEM AND METHOD FOR AUTHENTICATION QUEUING IN ACCESS CONTROL SYSTEMS - A method for authentication queuing in access control systems is provided. The method may include establishing, by an access control system, a first wireless connection with a mobile device. The method further includes authenticating the mobile device over the first wireless connection. The method also includes adding the mobile device to a first authenticated devices queue associated with a first physical access point. The method includes sending a reconnection parameter to the mobile device. The method additionally includes disconnecting the first wireless connection to the mobile device. | 2021-11-11 |
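The flow in 20210350639 is a short protocol: authenticate over a first wireless connection, enqueue the device for a physical access point, hand back a reconnection parameter, then drop the link. The class below is a toy model of that sequence under stated assumptions; the token format and method names are invented placeholders, not from the application.

```python
from collections import deque

class AccessPoint:
    """Toy model of authentication queuing for one physical access point."""

    def __init__(self):
        # first authenticated devices queue for this access point
        self.authenticated_queue = deque()

    def handle_connection(self, device_id, credentials_valid):
        """Authenticate a device over its first wireless connection.

        On success: enqueue the device, return its reconnection parameter
        (the first connection is then torn down). On failure: return None.
        """
        if not credentials_valid:
            return None  # authentication failed; nothing is queued
        self.authenticated_queue.append(device_id)
        reconnection_param = f"reconnect:{device_id}"  # placeholder format
        return reconnection_param

ap = AccessPoint()
print(ap.handle_connection("phone-1", True))   # device queued, token returned
print(list(ap.authenticated_queue))
```

Queuing the authentication ahead of time means the device only needs the cheap reconnection exchange when it actually reaches the physical access point.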
20210350640 | A SYSTEM AND SCANNING DEVICE FOR GRANTING USER ACCESS USING A BLUETOOTH LOW ENERGY (BLE) MESH - The present disclosure relates to the field of lighting and more specifically, to a Bluetooth low energy mesh enabled adaptive light emitting diode (LED) driver with wireless battery powered switches for lighting control. In an aspect, the LED driver can include a rectifier ( | 2021-11-11 |
20210350641 | SYSTEM AND METHOD FOR GRANTING ACCESS TO OR FOR STARTING AN OBJECT - Disclosed is a system comprising a portable electronic device, and a backend device, wherein the portable electronic device is configured to, upon occurrence of a triggering event, send a first request message to the backend device, the backend device, upon receipt of the first request message, is configured to read an internal device clock and send a first response message to the portable electronic device, wherein the first response message comprises information about the present internal device time. The portable electronic device, upon receipt of the first response message, is further configured to at least one of update an internal clock of the portable electronic device with the present internal device time, and send a second response message to an object, wherein the second response message comprises information about the present internal device time. | 2021-11-11 |
20210350642 | MULTIFUNCTION SMART DOOR DEVICE - Multifunction smart door devices may be part of a system of multifunction smart door devices installed within or near stateroom doors of a cruise ship. Each smart door device can control access to a stateroom based on facial recognition or a wireless credential and can perform other functions such as controlling stateroom personalization features, providing an electronic peephole function, allowing controlled access for authorized crew members, accommodating remote unlocking, and providing notifications. Data obtained by the smart door devices can be provided to the cruise operator for service, safety, or security purposes, such as for anonymized foot traffic analysis, hazard detection, and stateroom access auditing. Smart door device functionality may be implemented in part by customers' mobile devices. | 2021-11-11 |
20210350643 | SELF-SERVICE MODULAR DROP SAFES WITH DEPOSIT CREATION CAPABILITY - Novel modular smart management devices in the form of drop safes include the modular components of a chassis, door and technology cabinet. The drop safes enable retailers to make cash deposits quickly and safely within or near their own facilities. Various technology, including RFID readers, RFID tags, and other equipment, allows the drop safes to identify each deposited bag. Employees utilize specialized apps on their mobile devices to facilitate deposit creation and other tasks. Novel methodologies for accessing the drop safes for emptying employ single-use, time-expiration type authorization codes along with other security measures to minimize risk and to provide other benefits. Novel structures along with methodologies for replacing, on-site, modular components with auto-detection of functionality during initialization and re-initialization enable efficient replacement and upgrading of components, including the upgrading of safes to provide additional functionality. | 2021-11-11 |
20210350644 | SELF-SERVICE MODULAR DROP SAFES WITH TECHNOLOGY SHELF REPLACEMENT CAPABILITY - Novel modular smart management devices in the form of drop safes include the modular components of a chassis, door and technology cabinet. The drop safes enable retailers to make cash deposits quickly and safely within or near their own facilities. Various technology, including RFID readers, RFID tags, and other equipment, allows the drop safes to identify each deposited bag. Employees utilize specialized apps on their mobile devices to facilitate deposit creation and other tasks. Novel methodologies for accessing the drop safes for emptying employ single-use, time-expiration type authorization codes along with other security measures to minimize risk and to provide other benefits. Novel structures along with methodologies for replacing, on-site, modular components with auto-detection of functionality during initialization and re-initialization enable efficient replacement and upgrading of components, including the upgrading of safes to provide additional functionality. | 2021-11-11 |