51st week of 2021 patent application highlights part 52 |
Patent application number | Title | Published |
20210397916 | METHODS AND SYSTEMS FOR PROCESSING AN IMAGE - A system performs a method for processing an image of a machine-readable code. The method includes receiving an image of a machine-readable code comprising coded information, where the machine-readable code is at least partially obscured by a substance that has a predominant color; generating an adjusted image by adjusting a color space of the image based on the predominant color; binarizing at least a machine-readable code region of the image, wherein the machine-readable code region of the image depicts the machine-readable code; and decoding the binarized machine-readable code region to determine the coded information. Other apparatus and methods are also described. | 2021-12-23 |
20210397917 | COLOR CHANGING STORAGE DEVICE HOUSING - Systems and methods are disclosed for using a color changing surface to display a status of a storage device. In certain embodiments, a storage device includes a display-less enclosure having a color changing surface with an electrochromic material, non-volatile memory, memory configured to store firmware, and control circuitry. The control circuitry can be configured to determine an available space in the non-volatile memory, determine a first color corresponding to the available space based on a mapping of ranges of available space to corresponding colors, apply a voltage to the electrochromic material to change the color changing surface to the first color, and cease application of the voltage to the electrochromic material, wherein the color changing surface retains the first color after cessation of the voltage. | 2021-12-23 |
20210397918 | INFORMATION CARD OBJECT COUNTER - The apparatus may include a microprocessor. In electronic communication with the microprocessor there may be a memory cell. In electronic communication with the microprocessor there may be a light source circuit. In electronic communication with the microprocessor there may be a camera circuit. In electronic communication with the microprocessor there may be a nano light-emitting diode display circuit. Stored in the memory cell there may be image-processing instructions. Stored in the memory cell there may be light-source control instructions. The memory cell; the light source circuit; the camera circuit; and the nano light-emitting diode display circuit may be embedded in an information card. The instructions may be configured to cause the microprocessor to count objects set in motion by a user. The motion may be a motion of manually flicked objects. | 2021-12-23 |
20210397919 | RFID DEVICE - The present invention discloses a radio-frequency identification (RFID) device. The RFID device comprises a protective body and an RFID circuit unit located inside the protective body. The RFID circuit unit comprises a printed circuit board (PCB), an RFID chip and an antenna disposed thereon. A front surface of the PCB faces a top surface of the protective body; a rear surface of the PCB faces a bottom surface of the protective body. | 2021-12-23 |
20210397920 | DEVICE AND METHOD FOR DETECTING OPENING OF OR AN ATTEMPT TO OPEN A CLOSED CONTAINER - A closed container includes a detection device for detecting opening of or an attempt to open the container. The detection device includes a contactless passive transponder that is configured to communicate with a reader via an antenna using a carrier signal. An integrated circuit of the transponder includes two input terminals connected to the antenna and two output terminals linked by a first electrically conductive wire having a severable part which is severed in the event of an opening of or an attempted opening of the container. A shorting circuit is configured to short-circuit a first output terminal with a first input terminal in the event of a conductive repair of the severed part which forms an electrical connection between the two output terminals. | 2021-12-23 |
20210397921 | CHIP COUNTER FOR SEMICONDUCTOR CHIP-MOUNTED TAPE REEL - The present invention relates to a chip counter, which transmits an X-ray beam through a tape reel around which a tape having a plurality of semiconductor chips mounted in a row therein is wound, acquires an image scattered or diffracted by the semiconductor chips, and processes the acquired image, so as to count the number of the semiconductor chips. | 2021-12-23 |
20210397922 | SYSTEM AND METHODS FOR CREATION OF LEARNING AGENTS IN SIMULATED ENVIRONMENTS - A system and methods for generating and applying learning agents in simulated environments, in which an agent simulation is selected, one or more agent goals are received, and agents are created which are individual instances of the agent simulation with each agent having at least one of the agent goals, wherein the agents are used in the execution of an environment simulation which dynamically changes based on the collective behavior of the agents. | 2021-12-23 |
20210397923 | INFORMATION PROCESSING DEVICE, REGRESSION MODEL GENERATION METHOD, AND REGRESSION MODEL GENERATION PROGRAM PRODUCT - An information processing device: acquires data and prepares a data set with the acquired data; generates a regression model using the data set; calculates an optimum solution from the generated regression model; repetitively acquires the data based on the calculated optimum solution and updates the data set; and repetitively generates the regression model using the updated data set. The acquiring of data and the preparing of the data set are executed based on a different criterion in response to a satisfaction of a predetermined condition, and the generating of the regression model is executed using the data set prepared based on the different criterion. | 2021-12-23 |
20210397924 | WEB CRAWLER DETECTION METHOD, SYSTEM AND DEVICE BASED ON GRAPH NEURAL NETWORK - The present disclosure discloses a web crawler detection method, system and device based on a graph neural network. In some embodiments, the web crawler detection method includes: acquiring a web session sample, the web session sample including a plurality of resources accessed; extracting a resource feature of each of the plurality of resources accessed in the web session sample, the resource feature including one or more of an essential feature embodied by the resource in a website and a session feature of a user accessing the resource; and building a resource graph of the web session sample based on the resource feature, extracting a graph feature of the resource graph by using a preset graph algorithm; training a classification model according to the graph feature to obtain a trained classification model; and using the trained classification model to detect a web crawler. | 2021-12-23 |
20210397925 | CONVOLUTIONAL NEURAL NETWORK OPTIMIZATION MECHANISM - A library of machine learning primitives is provided to optimize a machine learning model to improve the efficiency of inference operations. In one embodiment a trained convolutional neural network (CNN) model is processed into an optimized CNN model via pruning, convolution window optimization, and quantization. | 2021-12-23 |
20210397926 | DATA REPRESENTATIONS AND ARCHITECTURES, SYSTEMS, AND METHODS FOR MULTI-SENSORY FUSION, COMPUTING, AND CROSS-DOMAIN GENERALIZATION - A computer-implemented method, computer system and machine readable medium. The method includes performing a set of parameterizations of a plurality of semantic concepts, each parameterization of the set including: receiving existing data at a computer system on the plurality of semantic concepts, the existing data including processed output data from a plurality of neural network-based computing systems (NNBCSs), the processed output data corresponding to a plurality of distinct data domains associated with respective ones of the NNBCSs; generating a data structure to define a continuous vector space of a digital knowledge graph (DKG) based on the existing data; and storing the data structure in the memory circuitry; and in response to a determination that error rates from a processing of data sets by the plurality of NNBCSs are below respective predetermined thresholds, generating a training model. | 2021-12-23 |
20210397927 | NEURAL NETWORK SYSTEM AND METHOD OF OPERATING THE SAME - A neural network system includes at least one memory and at least one processor. The memory is configured to store a front-end neural network, an encoding neural network, a decoding neural network and a back-end neural network. The processor is configured to execute the front-end neural network, the encoding neural network, the decoding neural network and the back-end neural network in the memory to perform operations including: utilizing the front-end neural network to output feature data; utilizing the encoding neural network to compress the feature data, and output compressed data which correspond to the feature data; utilizing the decoding neural network to decompress the compressed data, and output decompressed data which correspond to the feature data; and utilizing the back-end neural network to perform corresponding operations based on the decompressed data. A method of operating a neural network system is also disclosed herein. | 2021-12-23 |
20210397928 | DEVICE, METHOD AND STORAGE MEDIUM FOR ACCELERATING ACTIVATION FUNCTION - A device, a method and a storage medium for accelerating an activation function in data processing by an artificial neural network provide a register for storing a storage table, a matching unit including a plurality of comparators, a logic unit, and a selection unit. The comparators compare an input variable of the activation function with the variable intervals of the activation function to obtain a comparison output result, and the logic unit performs a logical operation according to the comparison output result to obtain a logic output result and determines a variable interval to be calculated according to the logic output result. The selection unit queries the storage table according to the variable interval to be calculated and obtains parameters of a fitted quadratic function. A calculation unit performs calculations on the input variable according to the parameters. | 2021-12-23 |
20210397929 | Wearable Electronic Device with Built-in Intelligent Monitoring Implemented using Deep Learning Accelerator and Random Access Memory - Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, a wearable electronic device may be configured to execute instructions with matrix operands and configured with: a housing to be worn on a person; a sensor having one or more sensor elements to generate measurements associated with the person; random access memory to store instructions executable by the Deep Learning Accelerator and store matrices of an Artificial Neural Network; a transceiver; and a controller to monitor an output of the Artificial Neural Network, generated using the measurements as an input to the Artificial Neural Network. Based on the output, the controller may control selective storage of measurement data from the sensor, and/or selective communication of data from the wearable electronic device to a separate computer system. | 2021-12-23 |
20210397930 | ACCELERATING BINARY NEURAL NETWORKS WITHIN LATCH STRUCTURE OF NON-VOLATILE MEMORY DEVICES - A non-volatile memory device includes an array of non-volatile memory cells that are configured to store weights of a neural network. Associated with the array is a data latch structure that includes a page buffer, which can store weights for a layer of the neural network that are read out of the array, and a transfer buffer, which can store inputs for the neural network. The memory device can perform multiply and accumulate operations between inputs and weights of the neural network within the latch structure, avoiding the need to transfer data out of the array and associated latch structure for portions of an inference operation. By using binary weights and inputs, multiplication can be performed by bit-wise XNOR operations. The results can then be summed and activation applied, all within the latch structure. | 2021-12-23 |
20210397931 | RECURRENT NEURAL NETWORK INFERENCE ENGINE WITH GATED RECURRENT UNIT CELL AND NON-VOLATILE MEMORY ARRAYS - A non-volatile memory device includes arrays of non-volatile memory cells that are configured to store the weights for a recurrent neural network (RNN) inference engine with a gated recurrent unit (GRU) cell. A set of three non-volatile memory arrays, such as formed of storage class memory, store a corresponding three sets of weights and are used to perform compute-in-memory inferencing. The hidden state of a previous iteration and an external input are applied to the weights of the first and the second of the arrays, with the output of the first array used to generate an input to the third array, which also receives the external input. The hidden state of the current iteration is generated from the outputs of the second and third arrays. | 2021-12-23 |
20210397932 | METHODS OF PERFORMING PROCESSING-IN-MEMORY OPERATIONS, AND RELATED DEVICES AND SYSTEMS - Methods, apparatuses, and systems for in-or near-memory processing are described. Bits of a first number may be stored on a number of memory elements, wherein each memory element of the number of memory elements intersects a bit line and a word line of a number of word lines. A number of signals corresponding to bits of a second number may be driven on the number of word lines to generate a number of output signals. A value equal to a product of the first number and the second number may be generated based on the number of output signals. | 2021-12-23 |
20210397933 | CONVOLUTION ACCELERATION WITH EMBEDDED VECTOR DECOMPRESSION - Techniques and systems are provided for implementing a convolutional neural network. One or more convolution accelerators are provided that each include a feature line buffer memory, a kernel buffer memory, and a plurality of multiply-accumulate (MAC) circuits arranged to multiply and accumulate data. In a first operational mode the convolutional accelerator stores feature data in the feature line buffer memory and stores kernel data in the kernel buffer memory. In a second mode of operation, the convolutional accelerator stores kernel decompression tables in the feature line buffer memory. | 2021-12-23 |
20210397934 | NEURAL NETWORK COMPUTING DEVICE AND CACHE MANAGEMENT METHOD THEREOF - A neural network computing device and a cache management method thereof are provided. The neural network computing device includes a computing circuit, a cache circuit and a main memory. The computing circuit performs a neural network calculation including a first layer calculation and a second layer calculation. After the computing circuit completes the first layer calculation and generates a first calculation result required for the second layer calculation, the cache circuit retains the first calculation result in the cache circuit until the second layer calculation is completed. After the second layer calculation is completed, the cache circuit invalidates the first calculation result retained in the cache circuit to prevent the first calculation result from being written into the main memory. | 2021-12-23 |
20210397935 | METHOD, ACCELERATOR, AND ELECTRONIC DEVICE WITH TENSOR PROCESSING - A processor-implemented tensor processing method includes: receiving a request to process a neural network including a normalization layer by an accelerator; and generating an instruction executable by the accelerator in response to the request, wherein, by executing the instruction, the accelerator is configured to determine an intermediate tensor corresponding to a result of performing a portion of operations included in the normalization layer, by performing, in a channel axis direction, a convolution based on: a target tensor on which the portion of operations is to be performed; and a kernel having a number of input channels and a number of output channels determined based on the target tensor and including elements of scaling values determined based on the target tensor. | 2021-12-23 |
20210397936 | INTEGRATED MEMORY SYSTEM FOR HIGH PERFORMANCE BAYESIAN AND CLASSICAL INFERENCE OF NEURAL NETWORKS - A memory module system for a high-dimensional weight space neural network configured to process machine learning data streams using Bayesian Inference and/or Classical Inference is set forth. The memory module can include embedded high speed random number generators (RNGs). The memory module is configured to compute, store and sample neural network weights by adapting operating precision to optimize the computing effort based on available weight space and application specifications. | 2021-12-23 |
20210397937 | CHARGE-PUMP-BASED CURRENT-MODE NEURON FOR MACHINE LEARNING - A compute-in-memory array is provided in which each neuron includes a capacitor and an output transistor. During an evaluation phase, a filter weight voltage and the binary state of an input bit control whether the output transistor conducts or is switched off to affect a voltage of a read bit line connected to the output transistor. | 2021-12-23 |
20210397938 | DETECTION DEVICE AND DETECTION PROGRAM - A preprocessing unit ( | 2021-12-23 |
20210397939 | Discrete Three-Dimensional Processor - A discrete three-dimensional (3-D) processor comprises first and second dice. The first die comprises 3-D memory (3D-M) arrays and in-die peripheral-circuit components thereof, whereas the second die comprises processing circuits and off-die peripheral-circuit components of the 3D-M arrays. The first and second dice are communicatively coupled by a plurality of inter-die connections. | 2021-12-23 |
20210397940 | BEHAVIOR MODELING USING CLIENT-HOSTED NEURAL NETWORKS - Apparatuses, systems, and techniques to detect abnormal behavior on clients using one or more neural networks on said clients. In at least one embodiment, use of or behavior on one or more clients is analyzed by a first neural network to detect abnormal behavior compared to a baseline of accepted behavior, and said baseline of accepted behavior is revised over time by a second neural network based on behavior observed on said one or more clients. | 2021-12-23 |
20210397941 | TASK-ORIENTED MACHINE LEARNING AND A CONFIGURABLE TOOL THEREOF ON A COMPUTING ENVIRONMENT - Task-based learning using a task-directed prediction network can be provided. Training data can be received. Contextual information associated with a task-based criterion can be received. A machine learning model can be trained using the training data. A loss function computed during training of the machine learning model integrates the task-based criterion, and minimizing the loss function during training iterations includes minimizing the task-based criterion. | 2021-12-23 |
20210397942 | LEARNING TO SEARCH USER EXPERIENCE DESIGNS BASED ON STRUCTURAL SIMILARITY - Embodiments are disclosed for learning structural similarity of user experience (UX) designs using machine learning. In particular, in one or more embodiments, the disclosed systems and methods comprise generating a representation of a layout of a graphical user interface (GUI), the layout including a plurality of control components, each control component including a control type, geometric features, and relationship features to at least one other control component, generating a search embedding for the representation of the layout using a neural network, and querying a repository of layouts in embedding space using the search embedding to obtain a plurality of layouts based on similarity to the layout of the GUI in the embedding space. | 2021-12-23 |
20210397943 | TECHNIQUES FOR CLASSIFICATION WITH NEURAL NETWORKS - Apparatuses, systems, and techniques to train neural networks to perform classification. In at least one embodiment, one or more neural networks are trained to perform classification based on, for example, using one or more compressed representations of one or more class labels, where the one or more compressed representations have fewer bits than a representation of the one or more class labels. | 2021-12-23 |
20210397944 | Automated Structured Textual Content Categorization Accuracy With Neural Networks - To provide automated categorization of structured textual content, individual nodes of textual content, from a document object model encapsulation of the structured textual content, have a multidimensional vector associated with them, where the values of the various dimensions of the multidimensional vector are based on the textual content in the corresponding node, the visual features applied or associated with the textual content of the corresponding node, and positional information of the textual content of the corresponding node. The multidimensional vectors are input to a neighbor-imbuing neural network. The enhanced multidimensional vectors output by the neighbor-imbuing neural network are then provided to a categorization neural network. The resulting output can be in the form of multidimensional vectors whose dimensionality is proportional to the number of categories into which the structured textual content is to be categorized. A weighted merge takes into account multiple nodes that are grouped together. | 2021-12-23 |
20210397945 | DEEP HIERARCHICAL VARIATIONAL AUTOENCODER - One embodiment of the present invention sets forth a technique for performing machine learning. The technique includes inputting a training dataset into a variational autoencoder (VAE) comprising an encoder network, a prior network, and a decoder network. The technique also includes training the VAE by updating one or more parameters of the VAE based on a smoothness of one or more outputs produced by the VAE from the training dataset. The technique further includes producing generative output that reflects a first distribution of the training dataset by applying the decoder network to one or more values sampled from a second distribution of latent variables generated by the prior network. | 2021-12-23 |
20210397946 | METHOD AND APPARATUS WITH NEURAL NETWORK DATA PROCESSING - A processor-implemented neural network data processing method includes: determining a total number of either a first feature value or values less than or equal to the first feature value, in feature data output from a layer of a neural network; determining a quantization parameter based on the determined number; quantizing the feature data based on the determined quantization parameter; and inputting the quantized feature data to another layer of the neural network connected to the layer. | 2021-12-23 |
20210397947 | METHOD AND APPARATUS FOR GENERATING MODEL FOR REPRESENTING HETEROGENEOUS GRAPH NODE - Embodiments of the present disclosure provide a method for generating a model for representing a heterogeneous graph node. A specific implementation includes: acquiring a training data set, wherein the training data set includes node walk path information obtained by sampling a heterogeneous graph according to different meta paths; and training, based on a gradient descent algorithm, an initial heterogeneous graph node representation model with the training data set as an input of the initial heterogeneous graph node representation model, to obtain a heterogeneous graph node representation model. | 2021-12-23 |
20210397948 | LEARNING METHOD AND INFORMATION PROCESSING APPARATUS - A memory holds training data and a model including a plurality of layers with their respective parameters. A processor starts learning processing, which repeatedly calculates an error of an output of the model by using the training data, calculates an error gradient, which indicates a gradient of the error with respect to the parameters, for each of the layers, and updates the parameters based on the error gradients. The processor calculates a difference between a first error gradient calculated in a first iteration in the learning processing and a second error gradient calculated in a second iteration after the first iteration for a first layer among the plurality of layers. In a case where the difference is less than a threshold, the processor skips the calculating of the error gradient and the updating of the parameter for the first layer in a third iteration after the second iteration. | 2021-12-23 |
20210397949 | MATERIALS ARTIFICIAL INTELLIGENCE ROBOTICS-DRIVEN METHODS AND SYSTEMS - A system, computer program product and a method to predict an objective function based on a recipe, and generate, via a machine learning model, a plurality of proposed different recipes of battery materials for optimizing at least one objective function. Instances of the different proposed recipes of battery materials are prepared and deposited into an electrochemical module by a robotic preparation module. A robotic testing module executes a plurality of formulation characteristic tests on each deposited recipe instance and updates the machine learning model with a result of at least one of the formulation characteristic tests. | 2021-12-23 |
20210397950 | ABNORMAL DRIVING STATE DETERMINATION DEVICE AND METHOD USING NEURAL NETWORK MODEL - The present invention relates to an abnormal driving state determination device using a neural network model, and the device can comprise: an abnormal driving state data generation unit for generating abnormal driving state data on the basis of information related to an abnormal driving state; an abnormal driving state learning unit which receives the abnormal driving state data so as to visualize the abnormal driving state data through a visualization algorithm, thereby allowing the abnormal driving state data to be learned by a neural network model; and an abnormal driving state determination unit including the neural network model for determining an abnormal state on the basis of the learned abnormal driving state data. | 2021-12-23 |
20210397951 | DATA PROCESSING APPARATUS, DATA PROCESSING METHOD, AND PROGRAM - A data processing apparatus according to a first aspect of the present invention includes: a first generation section that generates first input data in which first data related to a first phenomenon and second data related to a second phenomenon that is relevant to the first phenomenon are combined with first auxiliary data that is based on a missing data status in at least one of the first data and the second data; and a learning section that learns a model parameter of a prediction model, based on an error according to the first auxiliary data between output data outputted from the prediction model when the first input data is inputted into the prediction model, and each of the first data and the second data. | 2021-12-23 |
20210397952 | SYSTEMS AND METHODS FOR DECODING CODE-MULTIPLEXED COULTER SIGNALS USING MACHINE LEARNING - Systems and methods for decoding code-multiplexed Coulter signals are described herein. An example method can include receiving a code-multiplexed signal detected by a network of Coulter sensors, where the code-multiplexed signal includes a plurality of distinct Coulter signals, and inputting the code-multiplexed signal into a deep-learning network. The method can also include determining information indicative of at least one of a size, a speed, or a location of a particle detected by the network of Coulter sensors by using the deep-learning network to process the code-multiplexed signal. The method can further include storing the information indicative of at least one of the size, the speed, or the location of the particle detected by the network of Coulter sensors. | 2021-12-23 |
20210397953 | DEEP NEURAL NETWORK OPERATION METHOD AND APPARATUS - A deep neural network operation method and apparatus are provided. The method comprises: obtaining an input feature map of a network layer; displacing, according to a preset displacement parameter, each of the channels of the input feature map of the network layer along the axes to obtain a displaced feature map, wherein the preset displacement parameter comprises displacement amounts of each channel along the axes; and performing a convolution operation on the displaced feature map with a 1×1 convolution kernel to obtain an output feature map of the network layer. The operation efficiency of the deep neural network (DNN) can be improved through the above method. | 2021-12-23 |
20210397954 | TRAINING DEVICE AND TRAINING METHOD - A training device or the like according to the present disclosure trains a fusion deep neural network (DNN) model by (i) using training data that includes two or more modal information items and ground truth labels of the two or more modal information items and (ii) performing knowledge distillation that is a technique in which knowledge obtained as a result of a teacher model being trained is used to train a student model. The fusion DNN model includes: two or more DNNs; and a fusion that includes a configuration in which portions of the two or more DNNs are fused and that receives an input of features that are outputs of the two or more DNNs. | 2021-12-23 |
20210397955 | MAKING TIME-SERIES PREDICTIONS OF A COMPUTER-CONTROLLED SYSTEM - A computer-implemented method of training a model for making time-series predictions of a computer-controlled system. The model uses a stochastic differential equation (SDE) comprising a drift component and a diffusion component. The drift component has a predefined part representing domain knowledge, that is received as an input to the training; and a trainable part. When training the model, values of the set of SDE variables at a current time point are predicted based on their values at a previous time point, and based on this, the model is refined. In order to predict the values of the set of SDE variables, the predefined part of the drift component is evaluated to get a first drift, and the first drift is combined with a second drift obtained by evaluating the trainable part of the drift component. | 2021-12-23 |
20210397956 | ACTIVITY LEVEL MEASUREMENT USING DEEP LEARNING AND MACHINE LEARNING - There is provided a method for assessing an activity level of an entity. The method includes (i) receiving source data from a source about a plurality of entities, (ii) analyzing the source data to produce (a) a source data assessment that indicates whether to include the source data in a scored data set, and (b) a calculated accuracy that is a weighted accuracy assessment of the source data, (iii) receiving entity data about an entity of interest, (iv) generating, from the entity data and the calculated accuracy, an entity description that represents attributes of the entity of interest, (v) analyzing the source data assessment and the entity description to produce an activity score that is an estimate of an activity level of the entity of interest, and (vi) issuing a recommendation concerning treatment of the entity of interest based on the activity score. | 2021-12-23 |
20210397957 | MULTI-PROCESSOR TRAINING OF NEURAL NETWORKS - The subject technology provides a framework for multi-processor training of neural networks. Multi-processor training of neural networks can include performing a forward pass of a training iteration using a neural processor, and performing a backward pass of the training iteration using a CPU or a GPU. Additional operations for facilitating the multi-processor training are disclosed. | 2021-12-23 |
20210397958 | METHODS FOR IDENTIFICATION USING DISTRIBUTED TEMPORAL NEURAL NETS - PRACTICAL CROWD SOURCED TEMPORAL NEURAL SYSTEM - A network of contributing and collecting temporal state-machine neurons, the state-machine neurons manifested as a physical machine deployed on a plurality of networked computing devices. The temporal neurons adapt their firing behavior based on the temporal pattern of their received inputs, the firings of contributing neurons. The temporal neurons fire based on inputs received within a defined temporal period. In one example, the temporal positions of the inputs are adjusted based on earlier inputs received within the defined temporal period. In one example, the temporal positions of the inputs, the firings, are progressively adjusted so as to eventually align the firings occurring within the defined temporal period. A cluster of neurons is packaged into a data structure of information on neuron connections, the delay factors used to adjust the temporal positions, the defined temporal period, and firing threshold information, which becomes portable for transmission and use elsewhere. | 2021-12-23 |
20210397959 | TRAINING REINFORCEMENT LEARNING AGENTS TO LEARN EXPERT EXPLORATION BEHAVIORS FROM DEMONSTRATORS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network used to select actions performed by an agent interacting with an environment by performing actions that cause the environment to transition states. One of the methods includes obtaining a transition generated as a result of the reinforcement learning agent interacting with the environment; processing a bonus input using a bonus estimation neural network to generate an exploration bonus estimate that encourages the agent to explore the environment in accordance with an expert exploration strategy that would be adopted by an expert agent; generating a modified reward from the reward included in the transition and the exploration bonus estimate; and determining an update to current parameter values of the neural network to optimize a reinforcement learning objective function that maximizes returns to be received by the agent with respect to the modified reward. | 2021-12-23 |
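The modified-reward step in 20210397959 can be sketched as follows. This is a minimal toy, not the filed method: the count-based novelty function stands in for the bonus estimation neural network, and `bonus_weight` is an illustrative parameter.

```python
visit_counts = {}

def exploration_bonus(state):
    """Toy stand-in for the bonus estimation neural network: states
    visited less often earn a larger exploration bonus."""
    visit_counts[state] = visit_counts.get(state, 0) + 1
    return 1.0 / visit_counts[state]

def modified_reward(env_reward, bonus, bonus_weight=0.1):
    """Modified reward = reward from the transition + weighted bonus."""
    return env_reward + bonus_weight * bonus

r_first = modified_reward(1.0, exploration_bonus("s0"))   # novel state
r_repeat = modified_reward(1.0, exploration_bonus("s0"))  # revisited state
```

The agent's objective is then optimized against `modified_reward` rather than the raw environment reward, steering exploration toward novel states.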
20210397960 | RELIABILITY EVALUATION DEVICE AND RELIABILITY EVALUATION METHOD - A reliability evaluation device includes: a training data storing unit for storing training data constituted by a set of data and a label, the label being information relating to the data and assigned to identify an object to be identified; a learning unit for performing a dropout process on a neural network model to be learned by applying a preset dropout parameter, repeating learning for classifying the label by using the training data, and performing iterative learning until the learning converges; a model reconstructing unit for reconstructing a learned model in accordance with the dropout parameter and generating a plurality of different reconstructed models, the learned model being a neural network model for which the iterative learning has converged; an identification unit for identifying the training data by using the generated reconstructed models, and estimating a label for each of the reconstructed models; and a classification determining unit for evaluating a label of the training data on the basis of the estimated labels, and classifying the label of the training data. | 2021-12-23 |
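The reconstruct-and-vote idea in 20210397960 can be sketched with a linear toy model; this is an illustrative simplification, with the dropout parameter re-applied as a random mask and a label judged by agreement across the reconstructed models (all names are assumptions, not the patent's terminology).

```python
import random

def reconstruct_models(weights, dropout_p, n_models, seed=0):
    """Generate several reconstructed models by re-applying a dropout mask
    (each weight zeroed with probability dropout_p) to the learned model."""
    rng = random.Random(seed)
    return [[0.0 if rng.random() < dropout_p else w for w in weights]
            for _ in range(n_models)]

def predict(weights, x):
    score = sum(w * xi for w, xi in zip(weights, x))
    return 1 if score >= 0 else 0

def evaluate_label(weights, x, label, dropout_p=0.3, n_models=10):
    """Classify a training label as reliable only when every reconstructed
    model reproduces it, and as suspect otherwise."""
    votes = [predict(m, x) for m in
             reconstruct_models(weights, dropout_p, n_models)]
    return "reliable" if all(v == label for v in votes) else "suspect"

verdict = evaluate_label([2.0, 1.5, 0.5], [1.0, 1.0, 1.0], label=1)
```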
20210397961 | METHOD AND SYSTEM FOR TRAINING AUTONOMOUS DRIVING AGENT ON BASIS OF DEEP REINFORCEMENT LEARNING - Disclosed are a method and a system for training an autonomous driving agent on the basis of deep reinforcement learning (DRL). The agent training method according to one embodiment may comprise a step of training an agent through an actor-critic algorithm in a simulation for DRL. The step of training may include inputting first information to an actor network to determine an action of the agent, and inputting second information to a critic to evaluate how helpful the action is to maximizing a reward in the actor-critic algorithm, the second information comprising the first information and additional information. | 2021-12-23 |
20210397962 | EFFECTIVE NETWORK COMPRESSION USING SIMULATION-GUIDED ITERATIVE PRUNING - The effective network compression using simulation-guided iterative pruning according to various embodiments, can be configured so that, by means of an electronic device, a first neural network is pruned on the basis of a threshold value, a second neural network is generated, a gradient for each weighted value of the second neural network is calculated, and a third neural network is acquired by applying the gradient to the first neural network. | 2021-12-23 |
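One iteration of the first-second-third network flow in 20210397962 might look like the sketch below. The gradient function here is a hypothetical squared-error stand-in (the patent does not specify the loss), and the learning rate is illustrative.

```python
def prune(first, threshold):
    """First -> second network: zero weights below the magnitude threshold."""
    return [0.0 if abs(w) < threshold else w for w in first]

def toy_gradients(weights, target):
    """Stand-in gradient, computed on the pruned (second) network:
    gradient of a squared-error loss toward an assumed target vector."""
    return [w - t for w, t in zip(weights, target)]

def pruning_iteration(first, threshold, target, lr=0.5):
    second = prune(first, threshold)
    grads = toy_gradients(second, target)
    # Third network: apply the second network's gradients to the first network.
    return [w - lr * g for w, g in zip(first, grads)]

third = pruning_iteration([0.9, 0.05, -1.2], threshold=0.1,
                          target=[1.0, 0.0, -1.0])
```

Note that the small weight (0.05) receives a zero gradient because it was pruned in the second network, so it survives into the third network unchanged.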
20210397963 | METHOD AND APPARATUS FOR NEURAL NETWORK MODEL COMPRESSION WITH MICRO-STRUCTURED WEIGHT PRUNING AND WEIGHT UNIFICATION - A method of neural network model compression is performed by at least one processor and includes receiving an input neural network and an input mask, and reducing parameters of the input neural network, using a deep neural network that is trained by selecting pruning micro-structure blocks to be pruned, from a plurality of blocks of input weights of the deep neural network that are masked by the input mask, pruning the input weights, based on the selected pruning micro-structure blocks, selecting unification micro-structure blocks to be unified, from the plurality of blocks of the input weights masked by the input mask, and unifying multiple weights in one or more of the plurality of blocks of the pruned input weights, based on the selected unification micro-structure blocks, to obtain pruned and unified input weights of the deep neural network. | 2021-12-23 |
20210397964 | RESILIENCE DETERMINATION AND DAMAGE RECOVERY IN NEURAL NETWORKS - Disclosed herein include systems, devices, computer readable media, and methods for resilience determination and damage recovery in neural networks using a weight space and a metric that together form a manifold (such as a pseudo-Riemannian manifold or a Riemannian manifold). | 2021-12-23 |
20210397965 | Graph Diffusion for Structured Pruning of Neural Networks - An apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to: estimate an importance of parameters of a neural network based on a graph diffusion process over at least one layer of the neural network; determine the parameters of the neural network that are suitable for pruning or sparsification; remove neurons of the neural network to prune or sparsify the neural network; and provide at least one syntax element for signaling the pruned or sparsified neural network over a communication channel, wherein the at least one syntax element comprises at least one neural network representation syntax element. | 2021-12-23 |
20210397966 | SYSTEMS AND METHODS FOR IMAGE SEGMENTATION - Described herein are neural network-based systems, methods and instrumentalities associated with image segmentation that may be implemented using an encoder neural network and a decoder neural network. The encoder network may be configured to receive a medical image comprising a visual representation of an anatomical structure and generate a latent representation of the medical image indicating a plurality of features of the medical image. The latent representation may be used by the decoder network to generate a mask for segmenting the anatomical structure from the medical image. The decoder network may be pre-trained to learn a shape prior associated with the anatomical structure and once trained, the decoder network may be used to constrain an output of the encoder network during training of the encoder network. | 2021-12-23 |
20210397967 | DRIFT REGULARIZATION TO COUNTERACT VARIATION IN DRIFT COEFFICIENTS FOR ANALOG ACCELERATORS - Drift regularization is provided to counteract variation in drift coefficients in analog neural networks. In various embodiments, a method of training an artificial neural network is illustrated. A plurality of weights is randomly initialized. Each of the plurality of weights corresponds to a synapse of an artificial neural network. At least one array of inputs is inputted to the artificial neural network. At least one array of outputs is determined by the artificial neural network based on the at least one array of inputs and the plurality of weights. The at least one array of outputs is compared to ground truth data to determine a first loss. A second loss is determined by adding a drift regularization to the first loss. The drift regularization is positively correlated to variance of the at least one array of outputs. The plurality of weights is updated based on the second loss by backpropagation. | 2021-12-23 |
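The two-loss construction in 20210397967 reduces to a one-liner; the sketch below uses output variance directly as the drift regularization term (the abstract only requires positive correlation with variance, so the exact form and `drift_lambda` are assumptions).

```python
from statistics import pvariance

def drift_regularized_loss(outputs, targets, drift_lambda=0.1):
    """Second loss = first loss (mean squared error against ground truth)
    plus a drift regularization term positively correlated with the
    variance of the output array."""
    n = len(outputs)
    first_loss = sum((o - t) ** 2 for o, t in zip(outputs, targets)) / n
    return first_loss + drift_lambda * pvariance(outputs)

# Two perfect fits: the regularizer prefers the lower-variance outputs.
flat   = drift_regularized_loss([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
spread = drift_regularized_loss([0.0, 0.5, 1.0], [0.0, 0.5, 1.0])
```

Backpropagating through the second loss then penalizes weight configurations whose outputs are sensitive to drift in the analog cells.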
20210397968 | BACKPROPAGATION OF ERRORS IN PULSED FORM IN A PULSED NEURAL NETWORK - A new implementation is provided for an error back-propagation algorithm that is suited to the hardware constraints of a device implementing a spiking neural network. The invention notably uses binary or ternary encoding of the errors calculated in the back-propagation phase to adapt its implementation to the constraints of the network, and thus to avoid having to use floating-point number multiplication operators. More generally, the invention proposes a global adaptation of the back-propagation algorithm to the specific constraints of a spiking neural network. In particular, the invention makes it possible to use the same propagation infrastructure to propagate the data and to back-propagate the errors in the training phase. The invention proposes a generic implementation of a spiking neuron that is suitable for implementing any type of spiking neural network, in particular convolutional networks. | 2021-12-23 |
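The key trick in 20210397968, encoding back-propagated errors so that no floating-point multiplier is needed, can be sketched with ternary encoding; the threshold and learning rate below are illustrative assumptions.

```python
def ternarize_errors(errors, threshold=0.25):
    """Encode back-propagated errors as {-1, 0, +1} pulses: small errors
    are silenced, larger ones keep only their sign."""
    return [0 if abs(e) < threshold else (1 if e > 0 else -1)
            for e in errors]

def apply_pulse(weight, pulse, lr=0.1):
    # With a ternary pulse the weight update reduces to an add, a
    # subtract, or no operation, never a float-by-float multiply.
    return weight - lr * pulse

pulses = ternarize_errors([0.6, -0.1, -0.9])
updated = apply_pulse(1.0, pulses[0])
```

Because the pulses live in the same binary/ternary domain as the forward spikes, the same propagation infrastructure can carry both data and errors, which is the point the abstract emphasizes.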
20210397969 | TRAINING AND/OR UTILIZING AN INTERACTION PREDICTION MODEL TO DETERMINE WHEN TO INTERACT, AND/OR PROMPT FOR INTERACTION, WITH AN APPLICATION ON THE BASIS OF AN ELECTRONIC COMMUNICATION - Training and/or utilizing an interaction prediction model to generate a predicted interaction value that indicates a likelihood of interaction with a corresponding application on the basis of an electronic communication. The application can be in addition to any electronic communication application that is utilized in formulating the electronic communication and/or that is utilized in rendering the electronic communication. The predicted interaction value can be generated based on processing, utilizing the interaction prediction model, of features of the electronic communication and/or of other features. The predicted interaction value can be utilized to determine whether to perform further action(s) that interact with, and/or enable efficient interaction with, the application on the basis of the electronic communication. | 2021-12-23 |
20210397970 | Artificial Intelligence System for Classification of Data Based on Contrastive Learning - An artificial intelligence (AI) system that includes a processor configured to execute modules of the AI system. The modules comprise a feature extractor, an adversarial noise generator, a compressor and a classifier. The feature extractor is trained to process input data to extract features of the input data for classification of the input data. The adversarial noise generator is trained to generate noise data for distribution of features of the input data such that a misclassification rate of corrupted features that include the extracted features corrupted with the generated noise data is greater than a misclassification rate of the extracted features. The compressor is configured to compress the extracted features. The compressed features are closer to the extracted features than to the corrupted features. The classifier is trained to classify the compressed features. | 2021-12-23 |
20210397971 | RECOMMENDATION GENERATION USING ONE OR MORE NEURAL NETWORKS - Apparatuses, systems, and techniques are presented to generate recommendations for players of a game. In at least one embodiment, one or more neural networks are used to generate one or more recommendations for one or more players of a game based, at least in part, upon one or more cumulative changes of state in the game. | 2021-12-23 |
20210397972 | SYSTEMS AND METHODS FOR EXPANDING DATA CLASSIFICATION USING SYNTHETIC DATA GENERATION IN MACHINE LEARNING MODELS - Systems and methods for classifying data are disclosed. For example, a system may include at least one memory storing instructions and at least one processor configured to execute the instructions to perform operations. The operations may include receiving training data comprising a class. The operations may include training a data classification model using the training data to generate a trained data classification model. The operations may include receiving additional data comprising labeled samples of an additional class not contained in the training data. The operations may include creating a synthetic data generator. The operations may include training the synthetic data generator to generate synthetic data corresponding to the additional class. The operations may include generating a synthetic classified dataset comprising the additional class. The operations may include retraining the trained data classification model using the synthetic classified dataset. | 2021-12-23 |
20210397973 | STORAGE MEDIUM, OPTIMUM SOLUTION ACQUISITION METHOD, AND OPTIMUM SOLUTION ACQUISITION APPARATUS - A non-transitory computer-readable storage medium storing a program that causes a computer to execute a process, the process includes learning a variational autoencoder (VAE) by using a plurality of pieces of training data including an objective function; identifying, by inputting the plurality of pieces of training data to the learned VAE, a distribution of the plurality of pieces of training data over a latent space of the learned VAE; determining a search range of an optimum solution of the objective function based on the distribution of the plurality of pieces of training data; and acquiring an optimum solution of a desired objective function by using the pieces of training data included in the search range. | 2021-12-23 |
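The search-range step in 20210397973 can be sketched in one dimension. The list of latent points below stands in for actual VAE encodings of the training data, and the quadratic objective is purely illustrative.

```python
def search_range(latent_points):
    """Determine the search range from the distribution of training data
    over the latent space (here, a 1-D bounding interval)."""
    return min(latent_points), max(latent_points)

def optimum_in_range(candidates, objective, bounds):
    """Acquire the optimum of the objective among candidates that fall
    inside the determined search range."""
    lo, hi = bounds
    feasible = [z for z in candidates if lo <= z <= hi]
    return min(feasible, key=objective)

# Stand-in for VAE encodings of the training data (1-D latent space).
train_latents = [0.2, 0.5, 0.9, 1.1]
bounds = search_range(train_latents)
best = optimum_in_range([z / 10 for z in range(-5, 30)],
                        lambda z: (z - 0.8) ** 2, bounds)
```

Restricting the search to where training data actually lies keeps the optimizer away from latent regions the VAE never learned to decode.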
20210397974 | MULTI-PRECISION DIGITAL COMPUTE-IN-MEMORY DEEP NEURAL NETWORK ENGINE FOR FLEXIBLE AND ENERGY EFFICIENT INFERENCING - A non-volatile memory structure capable of storing weights for layers of a deep neural network (DNN) and performing an inferencing operation within the structure is presented. An in-array multiplication can be performed between multi-bit valued inputs, or activations, for a layer of the DNN and multi-bit valued weights of the layer. Each bit of a weight value is stored in a binary valued memory cell of the memory array and each bit of the input is applied as a binary input to a word line of the array for the multiplication of the input with the weight. To perform a multiply and accumulate operation, the results of the multiplications are accumulated by adders connected to sense amplifiers along the bit lines of the array. The adders can be configured to multiple levels of precision, so that the same structure can accommodate weights and activations of 8-bit, 4-bit, and 2-bit precision. | 2021-12-23 |
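The bit-serial multiply-and-accumulate in 20210397974 has a simple software analogue; this sketch models the arithmetic only (integer activations and weights assumed), not the memory-array circuitry.

```python
def bit_serial_mac(inputs, weights, bits=4):
    """Multiply-accumulate with binary word-line inputs: each bit plane of
    the multi-bit activations is applied in turn, the per-bit partial sums
    are accumulated (the adders behind the sense amplifiers), and each
    plane is weighted by its bit position via a shift."""
    acc = 0
    for b in range(bits):
        plane = sum(((x >> b) & 1) * w for x, w in zip(inputs, weights))
        acc += plane << b
    return acc

result = bit_serial_mac([3, 5], [2, 4], bits=4)  # 3*2 + 5*4
```

Raising or lowering `bits` is the software analogue of reconfiguring the adders for 8-bit, 4-bit, or 2-bit precision.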
20210397975 | PERFORMING HYPERPARAMETER TUNING OF MODELS IN A MASSIVELY PARALLEL DATABASE SYSTEM - Hyperparameter tuning for a machine learning model is performed in a massively parallel database system. A computer system comprised of a plurality of compute units executes a relational database management system (RDBMS), wherein the RDBMS manages a relational database comprised of one or more tables storing data. One or more of the compute units perform the hyperparameter tuning for the machine learning model, wherein the hyperparameters are control parameters used in construction of the model, and the tuning of the hyperparameters is implemented as an operation in the RDBMS that accepts training and scoring data for the model, constructs the model using the hyperparameters and the training data, and generates goodness metrics for the model using the scoring data. | 2021-12-23 |
20210397976 | PREDICTION MANAGEMENT SYSTEM, PREDICTION MANAGEMENT METHOD, DATA STRUCTURE, PREDICTION MANAGEMENT DEVICE AND PREDICTION EXECUTION DEVICE - A prediction management system that makes a prediction regarding a material, includes: a descriptor storage unit that stores descriptors each describing a parameter regarding processing means, a structure, a property, or a performance of the material; a prediction model storage unit that stores prediction models each describing an input and output relationship among the descriptors, wherein each prediction model accepts, as an input, one of at least two of the processing means, the structure, the property, and the performance, and outputs another one thereof; a workflow storage unit that stores workflows each describing that at least two of the prediction models are connected to each other via the descriptor, wherein an output of one prediction model is accepted as an input of another prediction model; an execution unit that gives an input to each of the workflows, executes the workflow, and stores an execution result including an output result of each prediction model included in the workflow; and a processing unit that manages the execution results, the workflows, the prediction models, and the descriptors in four hierarchical layers by assigning a unique identifier. | 2021-12-23 |
20210397977 | KNOWLEDGE MODEL CONSTRUCTION SYSTEM AND KNOWLEDGE MODEL CONSTRUCTION METHOD - The knowledge model construction system includes: CAD data which stores design information including information on configurations of the parts; an input unit which inputs an element knowledge model which includes a plurality of combinations, each of the combinations being a combination of an element knowledge and an establishment condition thereof, the element knowledge representing a causal relationship with respect to an asset; a knowledge model construction unit which extracts a combination of an element knowledge applicable to an object asset and an establishment condition thereof by comparing the CAD data of the object asset with the establishment conditions of the element knowledge model; and an object asset knowledge model which records the extracted combination of the element knowledge and the establishment condition thereof as a knowledge model. Thus, the knowledge model construction system is able to construct a knowledge model of a new asset with less man-hours. | 2021-12-23 |
20210397978 | APPARATUS AND METHOD FOR PROCESSING DATA DISCOVERING NEW DRUG CANDIDATE SUBSTANCE - A method for processing data for discovering a new drug candidate substance by a data processing apparatus, includes receiving a predetermined search word, extracting at least one biological entity related to the predetermined search word from a big data database (DB), extracting a degree of mutual association between the predetermined search word and the at least one biological entity, generating a first knowledge network in which a plurality of nodes including the predetermined search word and the at least one biological entity are connected according to the degree of mutual association, computing a graph theory index of the first knowledge network, and generating a second knowledge network using some nodes of the plurality of nodes of which the graph theory index is equal to or greater than a threshold value. | 2021-12-23 |
20210397979 | GENERATION OF DIGITAL STANDARDS USING MACHINE-LEARNING MODEL - One embodiment provides a method for generating a digital standard utilizing a trained machine-learning model, the method including: receiving an underlying standard; extracting conceptual units from the underlying standard; classifying, using at least one trained machine-learning model, at least a portion of the extracted conceptual units into one of a plurality of classification groups, wherein each of the classification groups identifies a function of the extracted conceptual units, included within a given classification group, within the underlying standard; wherein the classifying comprises classifying conceptual units from the underlying standard based upon sections of a schema corresponding to a digital standard; and storing the classified extracted conceptual units into a data repository based upon the schema. Other aspects are described and claimed. | 2021-12-23 |
20210397980 | INFORMATION RECOMMENDATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM - The present disclosure provides an information recommendation method, which relates to a field of knowledge graph. The method includes: acquiring request information; extracting a request entity word representing an entity from the request information; determining recommendation information based on the request entity word and a pre-constructed knowledge graph; and pushing the recommendation information, wherein the knowledge graph is constructed based on a text, and the knowledge graph indicates a first word representing a source of the text. The present disclosure further provides an information recommendation apparatus, an electronic device and a computer-readable storage medium. | 2021-12-23 |
20210397981 | SYSTEM AND METHOD OF SELECTION OF A MODEL TO DESCRIBE A USER - Disclosed herein are systems and methods for selection of a model to describe a user. In one aspect, an exemplary method comprises creating data on preferences of the user based on previously gathered data on usage of a computing device by the user and a base model that describes the user, wherein the base model is previously selected from a database of models including a plurality of models, determining an accuracy of the data created on the preferences of the user, wherein the determination is based on observed behaviors of the user, when the accuracy of the data is determined as being less than a predetermined threshold value, selecting a correcting model related to the base model, and retraining the base model, and when the accuracy of the data is determined as being greater than or equal to the predetermined threshold value, selecting the base model to describe the user. | 2021-12-23 |
20210397982 | INTELLIGENT ANOMALY IDENTIFICATION AND ALERTING SYSTEM BASED ON SMART RANKING OF ANOMALIES - A method for ranking detected anomalies is disclosed. The method includes generating a graph based on a plurality of rules, wherein the graph comprises nodes representing metrics identified in the rules, edges connecting nodes where metrics associated with connected nodes are identified in a given rule, and edge weights of the edges each representing a severity level assigned to the given rule. The method further includes ranking nodes of the graph based on the edge weights. The method further includes ranking detected anomalies based on the ranking of the nodes corresponding to the metrics associated with the detected anomalies. | 2021-12-23 |
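The graph-based ranking in 20210397982 can be sketched directly from the abstract; ranking nodes by the sum of incident edge weights is one plausible reading of "ranking nodes based on the edge weights" (the rule set and metric names below are invented for illustration).

```python
def rank_metrics(rules):
    """Build the graph implied by the rules: nodes are metrics, and each
    rule contributes an edge whose weight is the rule's severity level.
    Nodes are ranked by the total weight of their incident edges."""
    node_weight = {}
    for metric_a, metric_b, severity in rules:
        for node in (metric_a, metric_b):
            node_weight[node] = node_weight.get(node, 0) + severity
    return sorted(node_weight, key=node_weight.get, reverse=True)

def rank_anomalies(anomalies, metric_ranking):
    """Order detected anomalies by the rank of their associated metric."""
    order = {m: i for i, m in enumerate(metric_ranking)}
    return sorted(anomalies, key=lambda a: order[a["metric"]])

rules = [("cpu", "latency", 5), ("cpu", "errors", 3), ("disk", "latency", 1)]
ranking = rank_metrics(rules)
ordered = rank_anomalies([{"id": 1, "metric": "disk"},
                          {"id": 2, "metric": "cpu"}], ranking)
```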
20210397983 | PREDICTING AN OUTCOME OF A USER JOURNEY - The present disclosure is directed to systems and methods for predicting an outcome of a user journey. For example, a method may include: monitoring interactions of a user with a plurality of touchpoints; predicting, based on a machine learning model, whether the interactions will result in a first outcome or a second outcome different than the first outcome, the machine learning model being trained using a dataset based on historical interaction data, the dataset comprising a first plurality of patterns resulting in the first outcome and a second plurality of patterns resulting in the second outcome, wherein the predicting is based on a minimum number of interactions with the plurality of touchpoints; and providing an alternative interaction with the plurality of touchpoints to increase a probability that the interactions will result in the second outcome. | 2021-12-23 |
20210397984 | OPTIMIZING USER EXPERIENCES USING A FEEDBACK LOOP - The present disclosure is directed to systems and methods for predicting an outcome of a user journey. For example, a method may include: identifying a plurality of patterns based on a plurality of user interactions of a plurality of users with a plurality of touchpoints; applying a parameter to filter the plurality of patterns; evaluating the filtered plurality of patterns based on an evaluation criterion; and applying a feedback loop based on the evaluation of the filtered patterns to modify the parameter or adjust a user experience. | 2021-12-23 |
20210397985 | PATTERN-LEVEL SENTIMENT PREDICTION - Based on interaction data and response data, an interaction monitoring platform may determine a first known sentiment and a second known sentiment, identify a first pattern and a second pattern in the interaction data, and generate a first pattern-level sentiment and a second pattern-level sentiment based on the known sentiments and the identified patterns. A binary indicator may indicate which identified patterns are exhibited in a subset of the interaction data. The platform may train a gradient boosting model using known sentiment as a target variable and using binary indicators and pattern-level sentiments as input data. The platform may predict a sentiment corresponding to a subset of interaction data with unknown sentiment that exhibits one or more of the first pattern or the second pattern based on a binary indicator and the trained gradient boosting model. | 2021-12-23 |
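The pattern-level sentiment features in 20210397985 can be sketched as below. Note the final prediction step here is a simple averaging stand-in for the trained gradient boosting model, and the pattern names and scores are invented for illustration.

```python
def pattern_level_sentiment(history, pattern):
    """Average the known sentiment over interactions whose binary
    indicator shows the pattern was exhibited."""
    scores = [h["sentiment"] for h in history if pattern in h["patterns"]]
    return sum(scores) / len(scores)

def predict_sentiment(exhibited, pattern_sentiments):
    """Stand-in for the trained gradient boosting model: average the
    pattern-level sentiments of the exhibited patterns."""
    values = [pattern_sentiments[p] for p in exhibited]
    return sum(values) / len(values)

history = [
    {"patterns": {"long_hold"}, "sentiment": -1.0},
    {"patterns": {"long_hold", "quick_resolution"}, "sentiment": 0.0},
    {"patterns": {"quick_resolution"}, "sentiment": 1.0},
]
sentiments = {p: pattern_level_sentiment(history, p)
              for p in ("long_hold", "quick_resolution")}
prediction = predict_sentiment({"quick_resolution"}, sentiments)
```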
20210397986 | FORM STRUCTURE EXTRACTION BY PREDICTING ASSOCIATIONS - Techniques described herein extract form structures from a static form to facilitate making that static form reflowable. A method described herein includes accessing low-level form elements extracted from a static form. The method includes determining, using a first set of prediction models, second-level form elements based on the low-level form elements. Each second-level form element includes a respective one or more low-level form elements. The method further includes determining, using a second set of prediction models, high-level form elements based on the second-level form elements and the low-level form elements. Each high-level form element includes a respective one or more second-level form elements or low-level form elements. The method further includes generating a reflowable form based on the static form by, for each high-level form element, linking together the respective one or more second-level form elements or low-level form elements. | 2021-12-23 |
20210397987 | ADVANCES IN DATA PROVISIONING INCLUDING BULK PROVISIONING TO AID MANAGEMENT OF DOMAIN-SPECIFIC DATA VIA SOFTWARE DATA PLATFORM - The present disclosure relates to processing operations configured to improve data provisioning for management of access to and usage of domain-specific data through a software data platform. Processing described herein provides technical advantages, provided through a software data platform, that enable a user (e.g., an administrator) of an organization to more easily integrate and manage domain-specific data within a software data platform. For instance, a graphical user interface (GUI) of a software data platform is configured to provide a user (e.g., administrative user) with control over provisioning of their data (e.g., education data) including bulk provisioning options to manage utilization and sharing of data via a software data platform. Provisioning management may comprise control over sharing permissions of domain-specific data with vendors (e.g., ISVs integrating within a software data platform), user accounts associated with a tenant configuration and applications/services provided by the software data platform. | 2021-12-23 |
20210397988 | DEPTH-CONSTRAINED KNOWLEDGE DISTILLATION FOR INFERENCE ON ENCRYPTED DATA - This disclosure provides a method, apparatus and computer program product to create a fully homomorphic encryption (FHE)-friendly machine learning model. The approach herein leverages a knowledge distillation framework wherein the FHE-friendly (student) ML model closely mimics the predictions of a more complex (teacher) model, wherein the teacher model is one that, relative to the student model, is more complex and that is pre-trained on large datasets. In the approach herein, the distillation framework uses the more complex teacher model to facilitate training of the FHE-friendly model, but using synthetically-generated training data in lieu of the original datasets used to train the teacher. | 2021-12-23 |
20210397989 | ARTIFICIAL INTELLIGENCE BASED SYSTEM FOR PATTERN RECOGNITION IN A SEMI-SECURE ENVIRONMENT - Embodiments of the present invention provide a system for monitoring and identifying real-time indicators and patterns to predict the occurrence of real-time events. The system is configured for continuously gathering and monitoring real-time data from one or more monitoring devices associated with the entity, identifying that at least one user device is within a predetermined distance and automatically transmitting a prompt to the user device to connect to a beacon, identifying that at least one user associated with the at least one user device has connected to the beacon, continuously monitoring the at least one user device in real-time, identifying occurrence of a potential real-time event based on continuously monitoring the at least one user device and the real-time data from the one or more monitoring devices, via an artificial intelligence engine. | 2021-12-23 |
20210397990 | Predictability-Driven Compression of Training Data Sets - Techniques for performing predictability-driven compression of training data sets used for machine learning (ML) are provided. In one set of embodiments, a computer system can receive a training data set comprising a plurality of data instances and can train an ML model using the plurality of data instances, the training resulting in a trained version of the ML model. The computer system can further generate prediction metadata for each data instance in the plurality of data instances using the trained version of the ML model and can compute a predictability measure for each data instance based on the prediction metadata, the predictability measure indicating a training value of the data instance. The computer system can then filter one or more data instances from the plurality of data instances based on the computed predictability measures, the filtering resulting in a compressed version of the training data set. | 2021-12-23 |
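The filtering step in 20210397990 can be sketched as follows; here the predictability measure is taken to be the probability the trained model assigns to an instance's true label, and the toy model and threshold are assumptions for illustration.

```python
def compress_training_set(instances, predict_fn, keep_threshold=0.9):
    """Keep only instances the trained model does not already predict
    with high confidence: low predictability = high training value."""
    kept = []
    for features, label in instances:
        predictability = predict_fn(features)[label]
        if predictability < keep_threshold:
            kept.append((features, label))
    return kept

def toy_predict(x):
    """Stand-in for the trained model: class-1 probability tracks the
    (single, scalar) feature value."""
    p1 = min(max(x, 0.0), 1.0)
    return {0: 1.0 - p1, 1: p1}

data = [(0.95, 1), (0.55, 1), (0.05, 0), (0.6, 0)]
compressed = compress_training_set(data, toy_predict)
```

The easy instances (0.95 labeled 1, 0.05 labeled 0) are dropped, leaving a compressed training set of the instances the model still finds hard.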
20210397991 | PREDICTIVELY SETTING INFORMATION HANDLING SYSTEM (IHS) PARAMETERS USING LEARNED REMOTE MEETING ATTRIBUTES - Systems and methods for predictively setting Information Handling System (IHS) parameters using learned remote meeting attributes are described. In some embodiments, an Information Handling System (IHS) may include: a processor and a memory coupled to the processor, the memory having program instructions stored thereon that, upon execution by the processor, cause the IHS to: determine, based upon context information collected by the IHS, that a user of the IHS is likely to serve as a host of a remote meeting; and in response to the determination, apply one or more settings to the IHS. | 2021-12-23 |
20210397992 | INFERENCE APPARATUS, INFORMATION PROCESSING APPARATUS, INFERENCE METHOD, PROGRAM AND RECORDING MEDIUM - An inference apparatus makes an inference with respect to a phenomenon, the inference apparatus includes: a question acquirer configured to acquire a question related to the phenomenon; a question classifier configured to classify whether the question is a qualitative question or a quantitative question; a sensor classifier configured to classify whether sensor data is acquirable or not when the question is the quantitative question; a determiner configured to determine the sensor data as data to be used for the inference when the sensor data is acquirable and configured to determine input data by a user as the data to be used for the inference when the sensor data is unacquirable; and an inferrer configured to make the inference corresponding to the phenomenon using the data determined by the determiner. | 2021-12-23 |
20210397993 | GENERALIZED MACHINE LEARNING APPLICATION TO ESTIMATE WHOLESALE REFINED PRODUCT PRICE SEMI-ELASTICITIES - Certain aspects of the present disclosure provide techniques for combining multiple machine learning applications in order to train a model of a decision support system to determine an optimal semi-elasticity or elasticity coefficient for a commodity in a highly competitive market structure (e.g., unbranded, wholesale fuels market). Data is obtained from sources and clustered using a plurality of clustering combinations. Once data clusters are generated, the relevant features from each cluster are identified. A correlation coefficient range is established, and for each cluster at each iteration of the correlation coefficient range, a set of regressions is implemented and statistical tests are conducted in order to determine an optimal coefficient for each cluster. The set of regressions is also implemented on the selected optimal correlation coefficient, and the correlation coefficient and corresponding metric are recorded, from which one correlation coefficient is distributed to a computing device associated with the decision support system. | 2021-12-23 |
20210397994 | EVENT MODEL TRAINING USING IN SITU DATA - A method of identifying events within a wellbore comprises obtaining a first set of measurements of a first signal within a wellbore, identifying one or more events within the wellbore using the first set of measurements, obtaining a second set of measurements of a second signal within the wellbore, wherein the first signal and the second signal represent different physical measurements, training one or more event models using the second set of measurements and the identification of the one or more events as inputs, and using the one or more event models to identify at least one additional event within the wellbore. | 2021-12-23 |
20210397995 | SYSTEMS AND METHODS RELATING TO NETWORK-BASED BIOMARKER SIGNATURES - Systems and methods are provided herein for generating a classifier for phenotypic prediction. A computational causal network model representing a biological system includes a plurality of nodes and a plurality of edges connecting pairs of nodes. A first set of data corresponding to activities of a first subset of biological entities obtained under a first set of conditions is received, and a second set of data corresponding to activities of the first subset of biological entities obtained under a second set of conditions is received. A set of activity measures representing a difference between the first and second sets of data for a first subset of nodes is calculated. A set of activity values for a second subset of nodes, which are unmeasured, is generated. A classifier is generated for the phenotypes based on the set of activity measures, the set of activity values, or both. | 2021-12-23 |
20210397996 | METHODS AND SYSTEMS FOR CLASSIFICATION USING EXPERT DATA - A system for classification using expert data includes at least a processor, an expert submission processing module operating on the at least a processor configured to receive at least an expert submission relating constitutional data to ameliorative recommendation data, a model generator operating on the at least a processor configured to convert the at least an expert submission into training data, and an expert learner operating on the at least a processor configured to generate, using a machine learning process, a plurality of ameliorative outputs as a function of the training data and a constitutional inquiry, receive a significant category, calculate a plurality of significance scores as a function of each of the plurality of ameliorative outputs and the significant category, and rank each of the plurality of ameliorative outputs as a function of each of the plurality of significance scores. | 2021-12-23 |
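The scoring-and-ranking step at the end of this abstract can be sketched as below. The filing does not specify the significance-score function; a category-overlap count is used here purely as a hypothetical stand-in, and the field names are illustrative.

```python
def rank_ameliorative_outputs(outputs, significant_category):
    """Rank outputs by a significance score, highest first.

    Each output is a dict with a "name" and a set of "categories";
    the score is a hypothetical overlap count between the output's
    categories and the received significant category.
    """
    scored = [(len(out["categories"] & significant_category), out["name"])
              for out in outputs]
    scored.sort(reverse=True)  # highest significance score first
    return [name for _, name in scored]
```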
20210397997 | Partitioning Sensor Based Data to Generate Driving Pattern Map - Telematics and external data relating to the real-time driving of a population of drivers' vehicles may be collected and used to calculate a driving pattern map. The driving pattern map is used to determine a driving quotient for individual drivers, wherein the driving quotient is a relative score. The driving quotient may be displayed to the driver. | 2021-12-23 |
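One plausible reading of the "relative score" in this abstract is a percentile rank of a driver's metric within the population; the filing does not give the exact formula, so the sketch below is only an assumption.

```python
def driving_quotient(driver_metric, population_metrics):
    """Relative score: the percentage of the population whose
    metric falls below this driver's metric (a hypothetical
    percentile-style reading of the abstract's 'relative score')."""
    below = sum(1 for m in population_metrics if m < driver_metric)
    return 100.0 * below / len(population_metrics)
```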
20210397998 | SYSTEM TO ENSURE SAFE ARTIFICIAL GENERAL INTELLIGENCE VIA DISTRIBUTED LEDGER TECHNOLOGY - In an artificial general intelligence system that is safe for humans, distributed ledger technology (DLT, ‘blockchain’) is integral to the methods to reduce the probability of hacking, provide an audit trail to cheaply detect and correct errors or identify components causing vulnerability or failure and replace them or shut them down remotely and/or automatically. Smart contracts based on DLT are necessary to address evolution of AI that will be too fast for human monitoring and intervention. Proposed methods of a safe AGI system: 1) Access to technology by market license. 2) Transparent ethics embodied in DLT. 3) Morality encrypted via DLT. 4) Behavior control structure with values (ethics) at roots. 5) Individual bar-code identification of all critical components. 6) Configuration Item (from business continuity/disaster recovery planning). 7) Identity verification secured via multi-factor authentication and DLT. 8) ‘Smart’ automated contracts based on DLT. 9) Decentralized applications—AI software code modules encrypted via DLT. 10) Audit trail of component usage stored via DLT. 11) Social ostracism (denial of societal resources) augmented by DLT petitions. | 2021-12-23 |
20210397999 | METHODS AND APPARATUS TO OFFLOAD EXECUTION OF A PORTION OF A MACHINE LEARNING MODEL - Methods, apparatus, systems and articles of manufacture to offload execution of a portion of a machine learning model are disclosed. An example apparatus includes processor circuitry to instantiate offload controller circuitry to select a first portion of layers of the machine learning model for execution at a first node and a second portion of the layers for remote execution for execution at a second node, model executor circuitry to execute the first portion of the layers, serialization circuitry to serialize the output of the execution of the first portion of the layers, and a network interface to transmit a request for execution of the machine learning model to the second node, the request including the serialized output of the execution of the first portion of the layers of the machine learning model and a layer identifier identifying the second portion of the layers of the machine learning model. | 2021-12-23 |
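The local-execute/serialize/request flow this abstract describes can be sketched as below. Layers are modeled as plain callables and JSON stands in for the wire format; the actual serialization and request structure are not specified in the filing.

```python
import json

def offload_request(layers, layer_split, inputs):
    """Execute the first portion of layers locally, then build a
    request for remote execution of the remaining layers.

    layers: ordered list of callables standing in for model layers.
    layer_split: index separating the local portion from the
    remotely executed portion (the abstract's layer identifier).
    """
    x = inputs
    for layer in layers[:layer_split]:  # local execution of first portion
        x = layer(x)
    return json.dumps({
        "payload": x,              # serialized intermediate output
        "layer_id": layer_split,   # where the second node resumes
    })
```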
20210398000 | EXPLAINING RESULTS PROVIDED BY AUTOMATED DECISIONS SYSTEMS - In general, the disclosure describes various aspects of techniques for explaining results provided by automated decision systems. A device comprising a memory and a computation engine executing on one or more processors may be configured to perform the techniques. The memory may store an automated reasoning engine. The computation engine may execute the automated reasoning engine to obtain a query, obtain, from a knowledge base, and responsive to the query, a knowledge base entity representative of an explicit fact or a rule, and determine, based on the knowledge base entity, the query result that provides a decision to the query. The automated reasoning engine may also obtain provenance information that explains a history for the knowledge base entity, determine, based on the provenance information, an explanation that explains a difference between the query result and a previous query result provided with respect to the query, and output the explanation. | 2021-12-23 |
20210398001 | CYBERSECURITY INCIDENT RESPONSE AND SECURITY OPERATION SYSTEM EMPLOYING PLAYBOOK GENERATION AND PARENT MATCHING THROUGH CUSTOM MACHINE LEARNING - A cybersecurity incident is registered at a security incident response platform. At a playbook generation system, details are received of the cybersecurity incident from the security incident response platform. At least some of the details correspond to a set of features of the cybersecurity incident. A set or subset of nearest neighbors of the cybersecurity incident is localized in a feature space. The nearest neighbors of the cybersecurity incident are other cybersecurity incidents having a distance from the cybersecurity incident within the feature space that is defined by differences in features of the nearest neighbors with respect to the set of features of the cybersecurity incident. A playbook is created for responding to the cybersecurity incident having prescriptive procedures based on occurrences of prescriptive procedures previously employed in response to the nearest neighbor cybersecurity incidents. The differences in features of the nearest neighbors with respect to the set of features of the cybersecurity incident are calculated, for at least one feature, using a present-or-equal metric, and for at least one other feature, using a symmetric difference metric. The playbook generation system is also a parent recommendation system, which identifies a parent for the cybersecurity incident, based on distances of the nearest neighbors of the cybersecurity incident in the feature space. The parent recommendation system adjusts, based on the recommended parent or the parent other than the recommended parent being selected, weights of features upon which distances in the feature space are based. | 2021-12-23 |
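The two per-feature metrics named in this abstract can be combined into a weighted incident distance roughly as follows. The feature names and weights are illustrative, and the exact per-metric definitions are an assumption: present-or-equal is taken as 0 when the feature is present in both incidents and equal, else 1; symmetric difference is taken as the size of the symmetric difference of two label sets, normalized by their union.

```python
def incident_distance(a, b, weights):
    """Weighted distance between two cybersecurity incidents,
    mixing the abstract's present-or-equal and symmetric-difference
    metrics over illustrative features."""
    d = 0.0
    # present-or-equal metric on a scalar feature
    same = (a.get("attack_type") == b.get("attack_type")
            and a.get("attack_type") is not None)
    d += weights["attack_type"] * (0 if same else 1)
    # symmetric-difference metric on a set-valued feature
    sa, sb = a.get("tags", set()), b.get("tags", set())
    union = sa | sb
    d += weights["tags"] * (len(sa ^ sb) / len(union) if union else 0)
    return d
```

Nearest neighbors would then be the previously registered incidents minimizing this distance, with the per-feature weights adjusted as parents are confirmed or overridden.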
20210398002 | PARALLEL PROXY MODEL BASED MACHINE LEARNING METHOD FOR OIL RESERVOIR PRODUCTION - The present disclosure relates to a parallel proxy model based machine learning method for oil reservoir production. With the proposed method, multiple optimized candidate solutions can be obtained within an iteration, and then a matrix laboratory (e.g., MATLAB) is used to call numerical simulation software Eclipse in parallel to conduct actual evaluation on the candidate solutions simultaneously, so that optimization time of complex problems can be greatly reduced. With the method of the present disclosure, the solution of an oilfield production optimization problem can be sped up to a greater extent than in the art, and the final optimization effect can be improved. Moreover, the method of the present disclosure may further be used for well pattern optimization, history matching, and so on, apart from adjusting schedules of the producers and injectors in the oilfield. | 2021-12-23 |
20210398003 | METHOD AND SYSTEM FOR FORECASTING A FAILURE OF A VENTILATOR GROUP, AND CORRESPONDING VENTILATOR GROUP - A method is provided for predicting a failure of a fan group with N fans, of which n fans are redundant, wherein 1≤n&lt;N. | 2021-12-23 |
20210398004 | METHOD AND APPARATUS FOR ONLINE BAYESIAN FEW-SHOT LEARNING - Provided are a method and apparatus for online Bayesian few-shot learning. The present invention provides a method and apparatus for online Bayesian few-shot learning in which multi-domain-based online learning and few-shot learning are integrated when domains of tasks having data are sequentially given. | 2021-12-23 |
20210398005 | SYSTEM FOR IMPROVING USER SENTIMENT DETERMINATION FROM SOCIAL MEDIA AND WEB SITE USAGE DATA - A method and system for improving analysis of social media and other usage data to determine user sentiments are disclosed. Social media posts are identified as relevant to determining user sentiments regarding a service provider. Posts are analyzed by machine learning algorithms to determine user general sentiments and specific sentiments. User interaction metrics indicating user interaction with service provider web site or application may also be analyzed. Sentiment and interaction determinations may be used with other data to predict likelihood of user attrition for services of the service provider. Sentiment determinations associated with social media posts may further be used to determine priority levels for the posts, including response urgency levels. Determined priority levels may then be used to implement appropriate actions in a timely manner based upon the post urgency. | 2021-12-23 |
20210398006 | SYSTEM AND METHOD FOR SEMANTICS BASED PROBABILISTIC FAULT DIAGNOSIS - A system for generating a statistical model for fault diagnosis comprising at least one hardware processor, adapted to: extract a plurality of structured values, each associated with at least one of a plurality of semantic entities of a semantic model or at least one of a plurality of semantic relationships of the semantic model, from structured historical information organized in an identified structure and related to at least some of a plurality of historical events, the semantic model represents an ontology of an identified diagnosis domain, each of the plurality of semantic entities relates to at least one of a plurality of domain entities existing in the identified diagnosis domain, and each of the plurality of semantic relationships connects two of the plurality of semantic entities and represents a parent-child relationship therebetween; extract a plurality of unstructured values, each associated with at least one of the plurality of semantic entities. | 2021-12-23 |
20210398007 | QUANTUM PROCESSING SYSTEM - A method, apparatus, system, and computer program product for quantum processing. A target quantum programming for a process for a quantum computer is identified. A universal gate set is selected based on a computer type. Any operation possible for a particular quantum computer can be performed using the universal gate set. Instructions for the process in a source quantum programming language are sent to a source quantum language translator which outputs a digital model representation of quantum computer components that are arranged to perform the process using the instructions. The digital model representation of the quantum computer components and the universal gate set are sent to a target quantum language translator, which outputs the instructions for operations for the process in a target quantum programming language using the digital model representation of the quantum computer components and the universal gate set for the computer type for the quantum computer. | 2021-12-23 |
20210398008 | PERFORMING QUANTUM FILE PATTERN SEARCHING - Performing quantum file pattern searching is disclosed herein. In one example, a quantum search service executing on a quantum computing device receives, from a requestor, a search request including a search pattern. Upon receiving the search request, the quantum search service accesses a quantum file registry of a quantum file that includes a plurality of qubits. Based on the quantum file registry record, the quantum search service identifies the plurality of qubits, as well as the locations of each qubit of the plurality of qubits. The quantum search service then accesses a plurality of data values stored by the plurality of qubits, and compares the data values to the search pattern. If the quantum search service determines that one or more data values of the plurality of data values correspond to the search pattern, the quantum search service sends to the requestor a search response indicating a match. | 2021-12-23 |
20210398009 | All-to-All Coupled Quantum Computing with Atomic Ensembles in an Optical Cavity - A quantum computer uses interactions between atomic ensembles mediated by an optical cavity mode to perform quantum computations and simulations. Using the cavity mode as a bus enables all-to-all coupling and execution of non-local gates between any pair of qubits. Encoding logical qubits as collective excitations in ensembles of atoms enhances the coupling to the cavity mode and reduces the experimental difficulty of initial trap loading. By using dark-state transfers via the cavity mode to enact gates between pairs of qubits, the gates become insensitive to the number of atoms within each collective excitation, making it possible to prepare an array of qubits through Poissonian loading without feedback. | 2021-12-23 |
20210398010 | SYSTEM AND METHOD FOR DETERMINING A PERTURBATION ENERGY OF A QUANTUM STATE OF A MANY-BODY SYSTEM - A method for determining a perturbation energy of a quantum state of a many-body system includes constructing a wave function that approximates the quantum state by adjusting parameters of the wave function to minimize an expectation value of a zeroth-order Hamiltonian. The zeroth-order Hamiltonian explicitly depends on a finite mass of each of a plurality of interacting quantum particles that form the many-body system, the quantum state has a non-zero total angular momentum, the wave function is a linear combination of explicitly correlated Gaussian basis functions, and each of the explicitly correlated Gaussian basis functions includes a preexponential angular factor. The perturbation energy is calculated from the wave function and a perturbation Hamiltonian that explicitly depends on the finite mass of each of the plurality of interacting quantum particles. The perturbation energy may be added to the minimized expectation value to obtain a total energy of the quantum state. | 2021-12-23 |
20210398011 | INTERACTIVE AND DYNAMIC MAPPING ENGINE (IDME) - Methods, systems, and apparatuses, among other things, may provide for an interactive dynamic mapping engine (iDME) for business intelligence, which may interactively obtain information for user devices from sources and schema unknown to the user devices. | 2021-12-23 |
20210398012 | METHOD AND SYSTEM FOR PERFORMING DATA PRE-PROCESSING OPERATIONS DURING DATA PREPARATION OF A MACHINE LEARNING LIFECYCLE - In accordance with an embodiment of the invention, a method is provided for performing data pre-processing operations during data preparation of a machine learning lifecycle. The method includes defining one or more data pre-processing functions for applying to data stored in a dataset, executing one or more learn functions for learning the data, and executing one or more transform functions for transforming the data. Each of the one or more learn functions generates a first Structured Query Language (SQL) statement representing a definition of corresponding learn function for corresponding defined data pre-processing function. Each of the one or more transform functions generates a second SQL statement representing a definition of corresponding transform function for corresponding defined data pre-processing function. The dataset is stored in a database. | 2021-12-23 |
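The learn/transform pair that each emits a SQL statement, as this abstract describes, can be sketched for a min-max scaling pre-processing function. The SQL templates, column, and table names below are illustrative; the filing does not give its actual statement definitions.

```python
def learn_min_max(column, table):
    """'Learn' step: emit the SQL that computes the statistics the
    transform will need (here, the column's min and max)."""
    return (f"SELECT MIN({column}) AS lo, MAX({column}) AS hi "
            f"FROM {table}")

def transform_min_max(column, table, lo, hi):
    """'Transform' step: emit the SQL that rescales the column to
    [0, 1] using the learned statistics."""
    return (f"SELECT ({column} - {lo}) / ({hi} - {lo}) "
            f"AS {column}_scaled FROM {table}")
```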
20210398013 | METHOD AND SYSTEM FOR PERFORMANCE TUNING AND PERFORMANCE TUNING DEVICE - A method for performance tuning in Automated Machine Learning (Auto ML) includes obtaining a preset application program interface and system resources of the automatic machine learning system. Performance index measurement values are obtained according to the preset application program interface when the system pre-trains deep learning training model candidates. A distribution strategy and a resource allocation strategy are determined according to the performance index measurement values and the system resources, and computing resources of the system are allocated according to the distribution strategy and the resource allocation strategy. The disclosure also provides an electronic device and a non-transitory storage medium. | 2021-12-23 |
20210398014 | REINFORCEMENT LEARNING BASED CONTROL OF IMITATIVE POLICIES FOR AUTONOMOUS DRIVING - A method for controlling an ego agent includes periodically receiving policy information comprising a spatial environment observation and a current state of the ego agent. The method also includes selecting, for each received policy information, a low-level policy from a number of low-level policies. The low-level policy may be selected based on a high-level policy. The method further includes controlling an action of the ego agent based on the selected low-level policy. | 2021-12-23 |
20210398015 | MACHINE LEARNING MODEL COMPILER - The subject technology provides a framework for executable machine learning models that are executable in a zero-runtime operating environment. This allows the machine learning models to be deployed in limited memory environments such as embedded domains. A machine learning compiler is provided to generate the executable machine learning models. | 2021-12-23 |