15th week of 2021 patent application highlights part 44 |
Patent application number | Title | Published |
20210110235 | ACCELERATING SPARSE MATRIX MULTIPLICATION IN STORAGE CLASS MEMORY-BASED CONVOLUTIONAL NEURAL NETWORK INFERENCE - Techniques are presented for accelerating in-memory matrix multiplication operations for a convolutional neural network (CNN) inference in which the weights of a filter are stored in the memory of a storage class memory device, such as a ReRAM or phase change memory based device. To improve performance for inference operations when filters exhibit sparsity, a zero column index and a zero row index are introduced to account for columns and rows having all zero weight values. These indices can be saved in a register on the memory device and when performing a column/row-oriented matrix multiplication, if the zero row/column index indicates that the column/row contains all zero weights, the access of the corresponding bit/word line is skipped as the result will be zero regardless of the input. | 2021-04-15 |
20210110236 | INFERENTIAL DEVICE, CONVOLUTIONAL OPERATION METHOD, AND PROGRAM - An inferential device, including a quantization part that quantizes a result of a convolutional operation in a convolutional neural network using input data and weights; a convolutional operation part that performs a convolutional operation using the quantized operation result as input data; and an input data conversion part that converts the input data to a first layer to enable the convolutional operation part to process both the input data to the first layer and the input data that is quantized by the quantization part in a same way. | 2021-04-15 |
20210110237 | Computer Operations and Architecture for Artificial Intelligence Networks and Wave Form Transistor - The nodes of an artificial intelligence neural network may be wave form transistors which signal one another using functions as well as real numbers. The nodes then perform a wide variety of functions on the functions which they have received as input and then output results (functions) which become the signal to the next nodes in the net. The system utilizes multi-dimensional multi-variable functions. In addition to using functions as signals, the present invention teaches that the edges (connections) themselves may have function inputs influencing them, such that the function which is put into a connection (dendrite, synapse, edge, etc.) may be altered during transmission in a way beyond merely being weighted or run through a function in the connection. The electronic version of the wave form transistor features multiple input leads which are under the influence of electromagnets. | 2021-04-15 |
20210110238 | APPARATUS AND METHOD FOR DETECTING BROADCAST SIGNAL - Disclosed herein are an apparatus and method for detecting a broadcast signal. The apparatus includes a bootstrap detection unit for detecting whether a bootstrap signal is included in a received broadcast signal based on a preset first bootstrap window and a machine-learning method, a bootstrap offset estimation unit for searching the broadcast signal for the start point of the bootstrap signal using a preset second bootstrap window, estimating a bootstrap offset based on the machine-learning method, and estimating bootstrap symbols from the bootstrap offset, and a demodulation unit for demodulating information included in the broadcast signal from the bootstrap symbols based on the machine-learning method. | 2021-04-15 |
20210110239 | CIRCUIT AND A METHOD FOR DETERMINING AN ATTITUDE OF A MAGNET, AND JOYSTICK - An exemplary embodiment of a circuit for determining information about the position, attitude, or orientation of a magnet comprises an input interface configured to receive components of a magnetic field produced by the magnet. An evaluation logic unit corresponds to at least one trained neural network and is configured to determine the information about the position, attitude, or orientation of the magnet on the basis of the received components. | 2021-04-15 |
20210110240 | NEURAL NETWORK FOR CHEMICAL COMPOUNDS - A computer implemented method for training a neural network to capture a structural feature specific to a set of chemical compounds is disclosed. In the method, the computer system reads an expression describing a structure of the chemical compound for each chemical compound in the set and enumerates one or more combinations of a position and a type of a structural element appearing in the expression for each chemical compound in the set. The computer system also generates training data based on the one or more enumerated combinations for each chemical compound in the set. The training data includes one or more values with a length, each of which indicates whether or not a corresponding type of the structural element appears at a corresponding position for each combination. Furthermore, the computer system trains the neural network based on the training data for the set of the chemical compounds. | 2021-04-15 |
20210110241 | NEURAL NETWORKS FOR DECODING - Methods and apparatus for training a Neural Network to recover a codeword of a Forward Error Correction code are provided. Trainable parameters of the Neural Network are optimised to minimise a loss function. The loss function is calculated by representing an estimated value of the message bit output from the Neural Network as a probability of the value of the bit in a predetermined real number domain and multiplying the representation of the estimated value of the message bit by a representation of a target value of the message bit. Training of the Neural Network may thus be implemented via this loss function. | 2021-04-15 |
20210110242 | METHOD AND DEVICE FOR COMPLETING SOCIAL NETWORK USING ARTIFICIAL NEURAL NETWORK - A method and device for completing a social network using an artificial neural network are disclosed. The disclosed device includes: a neural network unit configured to receive a target network having unrevealed missing nodes as input, infer the connections of the missing nodes with a neural network, and output multiple candidate complete networks according to various node sequences; and a selection unit configured to select one of the candidate complete networks outputted by the neural network unit, where the neural network unit outputs the candidate complete networks by using weights of a graph-generating neural network that has learned graph structures of reference networks having attributes similar to those of the target network, and the selection unit uses connection probability vectors obtained from the learned graph-generating neural network to probabilistically select the candidate complete network having a structure closest to that of the target network based on the connection probability vectors. | 2021-04-15 |
20210110243 | DEEP LEARNING ACCELERATOR SYSTEM INTERFACE - Systems and methods are provided for implementing a deep learning accelerator system interface (DLASI). The DLASI connects an accelerator having a plurality of inference computation units to a memory of the host computer system during an inference operation. The DLASI allows interoperability between a main memory of a host computer, which uses 64 B cache lines, for example, and inference computation units, such as tiles, which are designed with smaller on-die memory using 16-bit words. The DLASI can include several components that function collectively to provide the interface between the server memory and a plurality of tiles. For example, the DLASI can include: a switch connected to the plurality of tiles; a host interface; a bridge connected to the switch and the host interface; and a deep learning accelerator fabric protocol. The fabric protocol can also implement a pipelining scheme which optimizes throughput of the multiple tiles of the accelerator. | 2021-04-15 |
20210110244 | REALIZATION OF NEURAL NETWORKS WITH TERNARY INPUTS AND TERNARY WEIGHTS IN NAND MEMORY ARRAYS - Use of a NAND array architecture to realize a binary neural network (BNN) allows for matrix multiplication and accumulation to be performed within the memory array. A unit synapse for storing a weight of a BNN is stored in a pair of series connected memory cells. A binary input is applied on a pair of word lines connected to the unit synapse to perform the multiplication of the input with the weight. The results of such multiplications are determined by a sense amplifier, with the results accumulated by a counter. The arrangement extends to ternary inputs to realize a ternary-binary network (TBN) by adding a circuit to detect 0 input values and adjust the accumulated count accordingly. The arrangement further extends to a ternary-ternary network (TTN) by allowing 0 weight values in a unit synapse, maintaining the number of 0 weights in a register, and adjusting the count accordingly. | 2021-04-15 |
20210110245 | MULTI-MODE LOW-PRECISION INNER-PRODUCT COMPUTATION CIRCUITS FOR MASSIVELY PARALLEL NEURAL INFERENCE ENGINE - Neural inference chips for computing neural activations are provided. In various embodiments, the neural inference chip is adapted to: receive an input activation tensor comprising a plurality of input activations; receive a weight tensor comprising a plurality of weights; Booth recode each of the plurality of weights into a plurality of Booth-coded weights, each Booth-coded value having an order; multiply the input activation tensor by the Booth-coded weights, yielding a plurality of results for each input activation, each of the plurality of results corresponding to the orders of the Booth-coded weights; for each order of the Booth-coded weights, sum the corresponding results, yielding a plurality of partial sums, one for each order; and compute a neural activation from a sum of the plurality of partial sums. | 2021-04-15 |
20210110246 | PROGRESSIVE MODELING OF OPTICAL SENSOR DATA TRANSFORMATION NEURAL NETWORKS FOR DOWNHOLE FLUID ANALYSIS - Disclosed herein are example embodiments of a progressive modeling scheme to enhance optical sensor transformation networks using both in-field sensor measurements and simulation data. In one aspect, a method includes receiving optical sensor measurements generated by one or more downhole optical sensors in a wellbore; determining synthetic data for fluid characterization using an adaptive model and the optical sensor measurements; and applying the synthetic data to determine one or more physical properties of a fluid in the wellbore for which the optical sensor measurements are received. | 2021-04-15 |
20210110247 | HYBRID DATA-MODEL PARALLELISM FOR EFFICIENT DEEP LEARNING - The embodiments herein describe hybrid parallelism techniques where a mix of data and model parallelism techniques are used to split the workload of a layer across an array of processors. When configuring the array, the bandwidth of the processors in one direction may be greater than the bandwidth in the other direction. Each layer is characterized according to whether they are more feature heavy or weight heavy. Depending on this characterization, the workload of an NN layer can be assigned to the array using a hybrid parallelism technique rather than using solely the data parallelism technique or solely the model parallelism technique. For example, if an NN layer is more weight heavy than feature heavy, data parallelism is used in the direction with the greater bandwidth (to minimize the negative impact of weight reduction) while model parallelism is used in the direction with the smaller bandwidth. | 2021-04-15 |
20210110248 | IDENTIFYING AND OPTIMIZING SKILL SCARCITY MACHINE LEARNING ALGORITHMS - A data set including at least skills data, recruiting data, compensation data, and organization structure data can be received. A first training set can be created by cleaning and integrating the data set. A first machine learning model can be trained to predict skill scarcity associated with a skill, geography and organization using the first training set. A second training set can be created by selecting a subset of the first training set based on a local subject matter expert's input with respect to the trained first machine learning model's performance. The first machine learning model can be refined by retraining the first machine learning model using the second training set, the machine learning model refined to predict the skill scarcity associated with the skill, geography and organization within a locality associated with the local subject matter expert. | 2021-04-15 |
20210110249 | MEMORY COMPONENT WITH INTERNAL LOGIC TO PERFORM A MACHINE LEARNING OPERATION - A memory component includes a first region of memory cells to store a machine learning model and a second region of the memory cells to store input data and output data of a machine learning operation. The memory component can further include in-memory logic coupled to the first region of the memory cells and the second region of the memory cells via one or more internal buses to perform the machine learning operation by applying the machine learning model to the input data to generate the output data. | 2021-04-15 |
20210110250 | MEMORY COMPONENT WITH A BUS TO TRANSMIT DATA FOR A MACHINE LEARNING OPERATION AND ANOTHER BUS TO TRANSMIT HOST DATA - Memory cells can include a memory region to store a machine learning model and input data and another memory region to store host data from a host system. An in-memory logic can be coupled to the plurality of memory cells and can perform a machine learning operation by applying the machine learning model to the input data to generate an output data. A bus can receive additional host data from the host system and can provide the additional host data to the memory component for the other memory region of the plurality of memory cells. An additional bus can receive machine learning data from the host system and can provide the machine learning data to the memory component for the in-memory logic that is to perform the machine learning operation. | 2021-04-15 |
20210110251 | MEMORY SUB-SYSTEM WITH INTERNAL LOGIC TO PERFORM A MACHINE LEARNING OPERATION - A memory component can include memory cells where a first region of the memory cells is to store a machine learning model and a second region of the memory cells is to store input data and output data of a machine learning operation. A controller can be coupled to the memory component with one or more internal buses to perform the machine learning operation by applying the machine learning model to the input data to generate the output data. | 2021-04-15 |
20210110252 | MEMORY SUB-SYSTEM WITH A BUS TO TRANSMIT DATA FOR A MACHINE LEARNING OPERATION AND ANOTHER BUS TO TRANSMIT HOST DATA - A system includes a memory component to store host data from a host system and to store a machine learning model and input data. A controller includes an in-memory logic to perform a machine learning operation by applying the machine learning model to the input data to generate an output data. A bus can receive additional host data from the host system and provide the additional host data to the memory component. An additional bus can receive machine learning data from the host system and provide the machine learning data to the in-memory logic that is to perform the machine learning operation. | 2021-04-15 |
20210110253 | TRUSTWORTHY PREDICTIONS USING DEEP NEURAL NETWORKS BASED ON ADVERSARIAL CALIBRATION - The disclosed relates to a computer-implemented method of training a Neural Network as well as a corresponding computer program, computer-readable medium and data processing system. In addition to a categorical cross-entropy loss L | 2021-04-15 |
20210110254 | METHOD AND SYSTEM FOR TRAINING NEURAL SEQUENCE-TO-SEQUENCE MODELS BY INCORPORATING GLOBAL FEATURES - Methods for training a neural sequence-to-sequence (seq2seq) model. A processor receives the model and training data comprising a plurality of training source sequences and corresponding training target sequences, and generates corresponding predicted target sequences. Model parameters are updated based on a comparison of predicted target sequences to training target sequences to reduce or minimize both a local loss in the predicted target sequences and an expected loss of one or more global or semantic features or constraints between the predicted target sequences and the training target sequences given the training source sequences. Expected loss is based on global or semantic features or constraints of general target sequences given general source sequences. | 2021-04-15 |
20210110255 | GENERATING ATTRIBUTE-BASED SAMPLES - A computer-implemented method according to one aspect includes training a latent variable model (LVM), utilizing labeled data and unlabeled data within a data set; training a classifier, utilizing the labeled data and associated labels within the data set; and generating new data having a predetermined set of labels, utilizing the trained LVM and the trained classifier. | 2021-04-15 |
20210110256 | ARTIFICIAL INTELLIGENCE LAYER-BASED PROCESS EXTRACTION FOR ROBOTIC PROCESS AUTOMATION - Artificial intelligence (AI) layer-based process extraction for robotic process automation (RPA) is disclosed. Data collected by RPA robots and/or other sources may be analyzed to identify patterns that can be used to suggest or automatically generate RPA workflows. These AI layers may be used to recognize patterns of user or business system processes contained therein. Each AI layer may “sense” different characteristics in the data and be used individually or in concert with other AI layers to suggest RPA workflows. | 2021-04-15 |
20210110257 | METHOD AND APPARATUS FOR CONTROLLING MASSAGE CHAIR - A method and apparatus for controlling a massage chair have been disclosed. The method for controlling a massage chair comprises extracting multimedia information, detecting an action item, and controlling an action item. According to the present disclosure, the timing of the multimedia effect and the timing of the action item may be synchronized based on analysis using a deep neural network model through a | 2021-04-15 |
20210110258 | METHOD AND APPARATUS WITH MODEL TRAINING AND/OR SEQUENCE RECOGNITION - A processor-implemented method includes: using an encoder, determining, for each of a plurality of tokens included in an input sequence, a self-attention weight based on a token and one or more tokens that precede the token in the input sequence; using the encoder, determining context information corresponding to the input sequence based on the determined self-attention weights; and using a decoder, determining an output sequence corresponding to the input sequence based on the determined context information. | 2021-04-15 |
20210110259 | METHOD AND APPARATUS FOR DETERMINING OUTPUT TOKEN - A method for determining an output token includes predicting a first probability of each of candidate output tokens of a first model, predicting a second probability of each of the candidate output tokens of a second model interworking with the first model, adjusting the second probability of each of the candidate output tokens based on the first probability, and determining the output token among the candidate output tokens based on the first probability and the adjusted second probability. | 2021-04-15 |
20210110260 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - To reduce the processing load of an operation and to realize learning with higher accuracy, there is provided an information processing device including a learning unit that optimizes parameters that determine a dynamic range by error back propagation and stochastic gradient descent in a quantization function of a neural network in which the parameters that determine the dynamic range are arguments. There is also provided an information processing method, performed by a processor, including optimizing parameters that determine a dynamic range by error back propagation and stochastic gradient descent in a quantization function of a neural network in which the parameters that determine the dynamic range are arguments. | 2021-04-15 |
20210110261 | METHOD AND APPARATUS FOR TRANSCEIVING SIGNAL USING ARTIFICIAL INTELLIGENCE IN WIRELESS COMMUNICATION SYSTEM - The present disclosure relates to a 5G communication system or a 6G communication system for supporting higher data rates beyond a 4G communication system such as long term evolution (LTE). A method of transmitting or receiving a signal by a user equipment (UE) in a mobile communication system is provided. The method may include: identifying a neural network model for transmitting first information to a base station (BS); learning a connection weight of the neural network model using the first information; transmitting, to the base station, second information for updating a weight of a second partial neural network corresponding to the base station based on a result of the learning; and updating a weight of a first partial neural network corresponding to the UE based on the result of the learning. | 2021-04-15 |
20210110262 | METHOD AND SYSTEM FOR SEMI-SUPERVISED DEEP ANOMALY DETECTION FOR LARGE-SCALE INDUSTRIAL MONITORING SYSTEMS BASED ON TIME-SERIES DATA UTILIZING DIGITAL TWIN SIMULATION DATA - A computer-implemented method for detecting an anomalous operating status of a technical system. A training phase obtains a first set of time-series values generated by a digital twin simulation of the technical system for a regular operating status and a second set of time-series values measured by sensors in an anomalous operating status, and adjusts parameters of a machine learning model for detecting the regular operating status and for discriminating data samples of the regular operating status from data samples of the anomalous operating status to generate a trained machine learning model. A monitoring phase obtains a set of multivariate time-series values measured by the sensors, calculates an anomaly score value for determining whether the technical system is in an anomalous operating status based on the obtained set of multivariate time-series values and the trained machine learning model, and outputs a signal including information on the determined anomalous operating status. | 2021-04-15 |
20210110263 | ANONYMIZED TIME-SERIES GENERATION FROM RECURRENT NEURAL NETWORKS - An output time-series of a cell of a neural network is captured. A subset of a set of data points of the output time-series is consolidated into a singular data point. The singular data point is fitted in a data representation to form a quantified aggregated data point. The quantified aggregated data point is included in an intermediate time-series. Using the intermediate time-series as an input at an intermediate layer of the neural network, an anonymized output time-series is produced from the neural network. | 2021-04-15 |
20210110264 | METHODS AND APPARATUS TO FACILITATE EFFICIENT KNOWLEDGE SHARING AMONG NEURAL NETWORKS - Methods, apparatus, systems and articles of manufacture are disclosed to facilitate knowledge sharing among neural networks. An example apparatus includes a trainer to train, at a first computing system, a first Bayesian Neural Network (BNN) on a first subset of training data to generate a first weight distribution, and train, at a second computing system, a second BNN on a second subset of the training data to generate a second weight distribution, the second subset of the training data different from the first subset of training data. The example apparatus includes a knowledge sharing controller to generate a third BNN based on the first weight distribution and the second weight distribution. | 2021-04-15 |
20210110265 | METHODS AND APPARATUS TO COMPRESS WEIGHTS OF AN ARTIFICIAL INTELLIGENCE MODEL - Methods, apparatus, systems, and articles of manufacture to compress weights of an artificial intelligence model are disclosed. An example apparatus includes a channel manipulator to manipulate weights of a channel of a trained model to generate a manipulated channel; a comparator to determine a similarity between (a) at least one of the channel or the manipulated channel and (b) a reference channel; and a data packet generator to, when the similarity satisfies a similarity threshold, generate a compressed data packet based on a difference between (a) the at least one of the channel or the manipulated channel and (b) the reference channel. | 2021-04-15 |
20210110266 | CONTEXT-AWARE CONVERSATION THREAD DETECTION FOR COMMUNICATION SESSIONS - A computer system identifies threads in a communication session. A feature vector is generated for a message in a communication session, wherein the feature vector includes elements for features and contextual information of the message. The message feature vector and feature vectors for a plurality of threads are processed using machine learning models each associated with a corresponding thread to determine a set of probability values for classifying the message into at least one thread, wherein the threads include one or more pre-existing threads and a new thread. A classification of the message into at least one of the threads is indicated based on the set of probability values. Classification of one or more prior messages is adjusted based on the message's classification. Embodiments of the present invention further include a method and program product for identifying threads in a communication session in substantially the same manner described above. | 2021-04-15 |
20210110267 | CONFIGURABLE MAC FOR NEURAL NETWORK APPLICATIONS - Certain aspects of the present disclosure are directed to methods and apparatus for configuring a multiply-accumulate (MAC) block in an artificial neural network. A method generally includes receiving, at a neural processing unit comprising one or more logic elements, at least one input associated with a use-case of the neural processing unit; obtaining a set of weights associated with the at least one input; selecting a precision for the set of weights; modifying the set of weights based on the selected precision; and generating an output based, at least in part, on the at least one input, the modified set of weights, and an activation function. | 2021-04-15 |
20210110268 | LEARNED THRESHOLD PRUNING FOR DEEP NEURAL NETWORKS - A method for pruning weights of an artificial neural network based on a learned threshold includes determining a pruning threshold for pruning a first set of pre-trained weights of multiple pre-trained weights based on a function of a classification loss and a regularization loss. The first set of pre-trained weights is pruned in response to a first value of each pre-trained weight in the first set of pre-trained weights being greater than the pruning threshold. A second set of pre-trained weights of the multiple pre-trained weights is fine-tuned or adjusted in response to a second value of each pre-trained weight in the second set of pre-trained weights being greater than the pruning threshold. | 2021-04-15 |
20210110269 | NEURAL NETWORK DENSE LAYER SPARSIFICATION AND MATRIX COMPRESSION - Neural network dense layer sparsification and matrix compression is disclosed. An example of an apparatus includes one or more processors; a memory to store data for processing, including data for processing of a deep neural network (DNN) including one or more layers, each layer including a plurality of neurons, the one or more processors to perform one or both of sparsification of one or more layers of the DNN, including selecting a subset of the plurality of neurons of a first layer of the DNN for activation based at least in part on locality sensitive hashing of inputs to the first layer; or compression of a weight or activation matrix of one or more layers of the DNN, including detection of sparsity patterns in a matrix of the first layer of the DNN based at least in part on locality sensitive hashing of patterns in the matrix. | 2021-04-15 |
20210110270 | METHOD AND APPARATUS WITH NEURAL NETWORK DATA QUANTIZING - A neural network data quantizing method includes: obtaining local quantization data by firstly quantizing, based on a local maximum value for each output channel of a current layer of a neural network, global recovery data obtained by recovering output data of an operation of the current layer based on a global maximum value corresponding to a previous layer of the neural network; storing the local quantization data in a memory to perform an operation of a next layer of the neural network; obtaining global quantization data by secondarily quantizing, based on a global maximum value corresponding to the current layer, local recovery data obtained by recovering the local quantization data based on the local maximum value for each output channel of the current layer; and providing the global quantization data as input data for the operation of the next layer. | 2021-04-15 |
20210110271 | TRAINING ACTION SELECTION NEURAL NETWORKS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a policy neural network. The policy neural network is used to select actions to be performed by an agent that interacts with an environment by receiving an observation characterizing a state of the environment and performing an action from a set of actions in response to the received observation. A trajectory is obtained from a replay memory, and a final update to current values of the policy network parameters is determined for each training observation in the trajectory. The final updates to the current values of the policy network parameters are determined from selected action updates and leave-one-out updates. | 2021-04-15 |
20210110272 | CROSS BATCH NORMALIZATION - Techniques for training a machine learning model are described herein. For example, the techniques may include implementing a cross batch normalization layer that generates a cross batch normalization layer output based on a first layer output during training of the neural network. The training may be based on a local batch of training examples of a global batch including the local batch and at least one remote batch of training examples. The cross batch normalization layer output may include normalized components of the first layer output determined based on global normalization statistics for the global batch. Such techniques may be used to train a neural network over distributed machines by synchronizing batches between such machines. | 2021-04-15 |
20210110273 | APPARATUS AND METHOD WITH MODEL TRAINING - A processor-implemented model training method and apparatus are provided. The method calculates an entropy of each of a plurality of previously trained models based on training data, selects a previously trained model from the plurality of previously trained models based on the calculated entropy, and trains a target model, distinguished from the plurality of previously trained models, based on the training data and the selected previously trained model. | 2021-04-15 |
20210110274 | TRAINING A NEURAL NETWORK USING PERIODIC SAMPLING OVER MODEL WEIGHTS - A computer-implemented method includes: initializing model parameters for training a neural network; performing a forward pass and backpropagation for a first minibatch of training data; determining a new weight value for each of a plurality of nodes of the neural network using a gradient descent of the first minibatch; for each determined new weight value, determining whether to update a running mean corresponding to a weight of a particular node; based on a determination to update the running mean, calculating a new mean weight value for the particular node using the determined new weight value; updating the weight parameters for all nodes based on the calculated new mean weight values corresponding to each node; assigning the running mean as the weight for the particular node when training on the first minibatch is completed; and reinitializing running means for all nodes at a start of training a second minibatch. | 2021-04-15 |
20210110275 | SYSTEM AND METHOD OF MACHINE LEARNING USING EMBEDDING NETWORKS - Systems and methods of generating interpretive data associated with data sets. Embodiments of systems may be for adapting Grad-CAM methods for embedding networks. The system includes a processor and a memory. The memory stores processor-executable instructions that, when executed, configure the processor to: obtain a subject data set; generate a feature embedding based on the subject data set; determine an embedding gradient weight based on a prior-trained embedding network and the feature embedding associated with the subject data set, the prior-trained embedding network defined based on a plurality of embedding gradient weights respectively corresponding to a feature map generated based on a plurality of training samples, and wherein the embedding gradient weight is determined based on querying a feature space for the feature embedding associated with the subject data set; and generate signals for communicating interpretive data associated with the embedding gradient weight. | 2021-04-15 |
20210110276 | SEARCH METHOD, DEVICE AND STORAGE MEDIUM FOR NEURAL NETWORK MODEL STRUCTURE - A search method for a neural network model structure, includes: generating an initial generation population of network model structure based on multi-objective optimization hyper parameters, as a current generation population of network model structure; performing selection and crossover on the current generation population of network model structure; generating a part of network model structure based on reinforcement learning mutation, and generating a remaining part of network model structure based on random mutation on the selected and crossed network model structure; generating a new population of network model structure based on the part of network model structure generated by reinforcement learning mutation and the remaining part of network model structure generated by random mutation; and searching a next generation population of network model structure based on the current generation population of network model structure and the new population of network model structure. | 2021-04-15 |
20210110277 | TEXTUAL ENTAILMENT - Examples of a textual entailment generation system are provided. The system obtains a query from a user and implements an artificial intelligence component to identify a premise, a word index, and a premise index associated with the query. The system may implement a first cognitive learning operation to determine a plurality of hypotheses and a hypothesis index corresponding to the premise. The system may generate a confidence index for each of the plurality of hypotheses based on a comparison of the hypothesis index with the premise index. The system may determine an entailment value, a contradiction value, and a neutral entailment value based on the confidence index for each of the plurality of hypotheses. The system may generate an entailment result relevant for resolving the query comprising the plurality of hypotheses along with the corresponding entailed output index. | 2021-04-15 |
20210110278 | ENTERPRISE KNOWLEDGE GRAPH - Examples described herein generally relate to a computer system for generating a knowledge graph storing a plurality of entities and to displaying a topic page for an entity in the knowledge graph. The computer system performs a mining of source documents within an enterprise intranet to determine a plurality of entity names. The computer system generates an entity record within the knowledge graph for a mined entity name based on an entity schema and the source documents. The entity record includes attributes aggregated from the source documents. The computer system receives a curation action on the entity record from a first user. The computer system updates the entity record based on the curation action. The computer system displays an entity page including at least a portion of the attributes to a second user based on permissions of the second user to view the source documents. | 2021-04-15 |
20210110279 | ANALYTICS GATHERING OF DISCOVERED AND RESEARCHED INSIGHTS - A computer-implemented method for optimizing research of an abstracted issue with a plurality of analytics engines is described. The method includes receiving a problem report at an analytics engine controller. The problem report includes symptoms of a problem in a computing system. The analytics engine forwards the problem report to a research optimization engine that abstracts one or more issues associated with the problem based on the symptoms of the problem. The research optimization engine then obtains anomaly research data for one or more of diagnosing the problem and fixing the problem. The anomaly research data is based on the one or more abstracted issues. The research optimization engine associates the abstracted issues with corresponding portions of the anomaly research data, then assigns the abstracted issues and corresponding portions of the anomaly research data to at least one of the plurality of analytics engines. | 2021-04-15 |
20210110280 | Integrating Geoscience Data to Predict Formation Properties - A method includes receiving well log data for a plurality of wells. A flag is generated based at least partially on the well log data. The wells are sorted into groups based at least partially on the well log data, the flag, or both. A model is built for each of the wells based at least partially on the well log data, the flag, and the groups. | 2021-04-15 |
20210110281 | GENETIC ALGORITHM WITH DETERMINISTIC LOGIC - In a method for applying deterministic logic to select resources for resource genomes in a genetic algorithm, a logic engine identifies resources associated with an objective and an overall task population to be completed by one or more of the identified resources. The logic engine then selects a deterministic logical framework from one or more deterministic logical frameworks based on the objective. Following the selection of a deterministic logical framework, the logic engine selects one or more resources from the one or more identified resources based on the selected deterministic logical framework. The logic engine compiles the one or more selected resources into a resource genome, assigns one or more tasks from the task population to the one or more selected resources, and sends instructions to the one or more selected resources to execute the one or more tasks. The logic engine determines a value score for the resource genome. | 2021-04-15 |
20210110282 | DYNAMIC CONFIGURATION OF ANOMALY DETECTION - The disclosed embodiments generate a plurality of anomaly detector configurations and compare results generated by these anomaly detectors to a reference result set. The reference result set is generated by a trained model. A correlation between each result generated by the anomaly detectors and the result set is compared to select an anomaly detector configuration that provides results most similar to those of the trained model. In some embodiments, data defining the selected configuration is then communicated to a product installation. The product installation instantiates the defined anomaly detector and analyzes local events using the instantiated detector. In some other embodiments, the defined anomaly detector is instantiated by the same system that selects the anomaly detector, and thus in these embodiments, the anomaly detector configuration is not transmitted from one system to another. | 2021-04-15 |
20210110283 | FARM DATA ANNOTATION AND BIAS DETECTION - One embodiment provides a method, including: obtaining information related to farming activities of a farmer; predicting an annotation category for the information, wherein the annotation category identifies a topic of the information; selecting an annotator for annotating the information based upon the annotation category, wherein the selecting comprises utilizing (i) a social proximity constraint identifying a social connection between the farmer and another farmer and (ii) a farm signature constraint identifying a similarity of the farmer to another farmer; assigning the annotator to annotate the obtained information; and receiving annotations for the information. | 2021-04-15 |
20210110284 | METHOD AND SYSTEM FOR AUTOMATIC ERROR DIAGNOSIS IN A TEST ENVIRONMENT - A method for automatic error diagnosis in a test environment is provided. The method comprises the step of providing a plurality of test logs associated with known types of failures, each comprising a set of files. The method further comprises the step of arranging the plurality of test logs in a defect database. Moreover, the method comprises the step of transforming the set of files of the plurality of test logs into vectors adapted to be fed into a machine learning model. | 2021-04-15 |
20210110285 | FEATURE SELECTION USING SOBOLEV INDEPENDENCE CRITERION - A machine learning system that implements Sobolev Independence Criterion (SIC) for feature selection is provided. The system receives a dataset including pairings of stimuli and responses. Each stimulus includes multiple features. The system generates a correctly paired sample of stimuli and responses from the dataset by pairing stimuli and responses according to the pairings of stimuli and responses in the dataset. The system generates an alternatively paired sample of stimuli and responses from the dataset by pairing stimuli and responses differently than the pairings of stimuli and responses in the dataset. The system determines a witness function and a feature importance distribution across the features that optimizes a cost function that is evaluated based on the correctly paired and alternatively paired samples of the dataset. The system selects one or more features based on the computed feature importance distribution. | 2021-04-15 |
20210110286 | DETECTING AND IMPROVING CONTENT RELEVANCY IN LARGE CONTENT MANAGEMENT SYSTEMS - A method, a computer system, and a computer program product for managing content relevancy is provided. Embodiments of the present invention may include collecting and analyzing a plurality of data, wherein the plurality of data includes document data, document access data and user data. Embodiments of the present invention may include retrieving topic model content based on the plurality of data. Embodiments of the present invention may include building a machine learning (ML) model to determine one or more topics contained in the topic model content. Embodiments of the present invention may include generating a heatmap based on the user data. Embodiments of the present invention may include building a content relevancy model (CRM) based on the ML model and the heatmap. Embodiments of the present invention may include determining an action state for the document data. Embodiments of the present invention may include storing the CRM. | 2021-04-15 |
20210110287 | Causal Reasoning and Counterfactual Probabilistic Programming Framework Using Approximate Inference - A computer implemented method of performing inference on a generative model, | 2021-04-15 |
20210110288 | ADAPTIVE MODEL INSIGHTS VISUALIZATION ENGINE FOR COMPLEX MACHINE LEARNING MODELS - The present disclosure relates to systems, methods, and non-transitory computer-readable media for utilizing a parameterized notebook to adaptively generate visualizations regarding machine-learning models. In particular, the disclosed systems can generate a parameterized notebook based on a user-defined visualization recipe and provide parameter values that correspond to the machine-learning model to the parameterized notebook. Upon execution of the user-defined visualization recipe via the parameterized notebook, the disclosed systems can extract visualization data corresponding to the machine-learning model from the parameterized notebook. In addition, the disclosed systems can generate visualizations based on the visualization data and provide the generated visualizations for display in a graphical user interface. | 2021-04-15 |
20210110289 | GRAPHICAL INTERACTIVE MODEL SPECIFICATION GUIDELINES FOR STRUCTURAL EQUATION MODELING DESIGNS - The computing device receives a first user input request to modify a structural equation model (SEM) in a graphical user interface. The modification of the SEM includes modifying one or more SEM path diagram elements. The computing device detects whether a first SEM path diagram element is modified responsive to the received first user input request. Based on the detection, the computing device determines whether the modification violates a first set of SEM rules, a second set of SEM rules, or one or more launch conditions prior to initiating execution of the SEM. Based on determining a violation of the SEM rules or the launch conditions or that there was not a violation, the computing device displays a graphical indicator for indicating a fatal error for the SEM modification, a warning error for the SEM modification, or a valid SEM modification. | 2021-04-15 |
20210110290 | SUPERCONDUCTING CIRCUIT STRUCTURE, SUPERCONDUCTING QUANTUM CHIP AND SUPERCONDUCTING QUANTUM COMPUTER - A superconducting circuit structure, a superconducting quantum chip, and a superconducting quantum computer are provided, which relate to the field of quantum computing. The superconducting circuit structure includes: at least two qubits; a connector, coupled with the two qubits respectively, to realize transversal coupling with each of the two qubits; and a coupler, coupled with the two qubits respectively, to realize longitudinal coupling with each of the two qubits. Therefore, the σ | 2021-04-15 |
20210110291 | CAPACITIVELY-SHUNTED ASYMMETRIC DC-SQUID FOR QUBIT READOUT AND RESET - A tunable resonator is formed by shunting a set of asymmetric DC-SQUIDs with a capacitive device. An asymmetric DC-SQUID includes a first Josephson junction and a second Josephson junction, where the critical currents of the first and second Josephson junctions are different. A coupling is formed between the tunable resonator and a qubit such that the capacitively-shunted asymmetric DC-SQUIDs can dispersively read a quantum state of the qubit. An external magnetic flux is set to a first value and applied to the tunable resonator. A first value of the external magnetic flux causes the tunable resonator to tune to a first frequency within a first frequency difference from a resonance frequency of the qubit, the tunable resonator tuning to the first frequency causes active reset of the qubit. | 2021-04-15 |
20210110292 | Method for learning from a compression/decompression context and corresponding device, system and computer program product - The invention relates to a method for learning a compression/decompression context comprising at least one compression/decompression rule intended to be used by a compressor device and a decompressor device to compress and decompress a first data stream transmitted via at least one first communication link, respectively. In such a method at least one device, among the compressor device, the decompressor device and a third device, carries out the following steps: | 2021-04-15 |
20210110293 | AUTOMATED PROCESS EXECUTION BASED ON EVALUATION OF MACHINE LEARNING MODELS - The present disclosure relates to computer-implemented methods, software, and systems for utilizing tools and techniques for identifying process rules for automated execution of instances of a process workflow. One example method includes extracting rules from a machine learning model for prediction of execution results of process workflow instances. Metrics defining coverage and accuracy of the rules are calculated. The rules are evaluated according to the metrics and are reduced to a first set of rules that are provided for further evaluation. A first rule from the first set of rules is determined to be incorporated into process rules defined for the process workflow at a process execution engine. The process rules associated with execution of the process workflow are updated to include the first rule and to generate a process result automatically according to the first rule when the instance complies with prerequisites defined at the first rule. | 2021-04-15 |
20210110294 | SYSTEMS AND METHODS FOR KEY FEATURE DETECTION IN MACHINE LEARNING MODEL APPLICATIONS USING LOGISTIC MODELS - Systems and methods are disclosed related to the identification of key features among features input to a complex predictive model. Logistic models may be created for each of a number of defined clusters of training data used to train the complex predictive model. Coefficients of each logistic model may be analyzed to identify key features that contribute to predictions made by the logistic models. Performance of the logistic models may be compared to that of the complex model to validate the logistic models. When a prediction is made for a given student by the complex predictive model, the student may be assigned to a cluster by identifying the cluster center having the shortest Euclidean distance to the feature data associated with the student. Key features associated with the assigned cluster may be used as a basis for generating a recommendation for reducing a risk level of the student. | 2021-04-15 |
20210110295 | AUTO-TUNING OF COMPARISON FUNCTIONS - A method for relating different types of records. The method may include providing comparison functions, wherein each comparison function corresponds to a semantical class, and wherein a computational cost is associated with each comparison function. The method may include determining one or more attribute pairs between the different types of records. The method may include sorting the comparison functions according to a determined accuracy. The method may include selecting a set of comparison functions associated with semantical classes according to a predefined rule. The method may include determining a total computational cost based on the computational cost of the selected set of comparison functions. The method may include determining whether two or more records are related using the selected set of comparison functions. The method may include relating the two or more records. The method may include determining a rate of false negative records. | 2021-04-15 |
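The accuracy-sorted selection in 20210110295 can be sketched greedily; the "predefined rule" is not spelled out in the abstract, so a total cost budget is assumed here, and the dict-based comparator representation is illustrative:

```python
def select_comparators(comparators, cost_budget):
    """Sort candidate comparison functions by measured accuracy
    (descending) and keep adding them while the summed computational
    cost stays within the budget (assumed selection rule)."""
    chosen, total = [], 0.0
    for c in sorted(comparators, key=lambda c: c["accuracy"], reverse=True):
        if total + c["cost"] <= cost_budget:
            chosen.append(c)
            total += c["cost"]
    return chosen, total
```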
20210110296 | DATA ACCESS CONTROL AND WORKLOAD MANAGEMENT FRAMEWORK FOR DEVELOPMENT OF MACHINE LEARNING (ML) MODELS - Methods, systems, and computer-readable storage media for providing a software system to each customer in a set of customers, each customer being associated with a customer system in a set of customer systems, the software system including a set of views in a data science pool, each of the views in the set of views providing a data set based on production data of respective customers; for each customer system: accessing at least one data set within the customer system through a released view provided in a DMZ within the customer system and corresponding to a respective view in the set of views, and triggering training of an ML model in the DMZ to provide results; and selectively publishing the ML model for consumption by each of the customers in the set of customers based on a set of results comprising the results from each customer system. | 2021-04-15 |
20210110297 | MEMORY SUB-SYSTEM WITH A VIRTUALIZED BUS AND INTERNAL LOGIC TO PERFORM A MACHINE LEARNING OPERATION - A memory component includes a memory region to store a machine learning model and input data and another memory region to store host data from a host system. A controller can be coupled to the memory component and can include in-memory logic to perform a machine learning operation by applying the machine learning model to the input data to generate an output data. A bus can receive additional data from the host system and a decoder can receive the additional data from the bus and can transmit the additional data to the other memory region or the in-memory logic of the controller based on a characteristic of the additional data. | 2021-04-15 |
20210110298 | INTERACTIVE MACHINE LEARNING - A computer-implemented method of interactive machine learning in which a user is provided with predicted results from a trained machine learning model. The user can take the predicted results and either: i) adjust the predicted results and input the adjusted results as new data; or ii) adjust the predicted data to retrain the model. | 2021-04-15 |
20210110299 | INTERACTIVE MACHINE LEARNING - A computer-implemented method of interactive machine learning in which a user is provided with predicted results from a trained machine learning model. The user can take the predicted results and adjust the predicted data to retrain the model. | 2021-04-15 |
20210110300 | REINFORCEMENT LEARNING IN ROBOTIC PROCESS AUTOMATION - Reinforcement learning may be used to train machine learning (ML) models for robotic process automation (RPA) that are implemented by robots. A policy network may be employed, which learns to achieve a definite output by providing a particular input. In other words, the policy network informs the system whether it is getting closer to the winning state. The policy network may be refined by the robots automatically or with the periodic assistance of a human in order to reach the winning state, or to achieve a more optimal winning state. Robots may also create other robots that utilize reinforcement learning. | 2021-04-15 |
20210110301 | RECONFIGURABLE WORKBENCH PIPELINE FOR ROBOTIC PROCESS AUTOMATION WORKFLOWS - A reconfigurable workbench pipeline for robotic process automation (RPA) workflows is disclosed. Different workbench pipelines may be built for different users. For instance, a global workflow (e.g., a receipt extractor) may be built and used initially, but this workflow may not work optimally or at all for a certain user or a certain task. A machine learning (ML) model may be employed, potentially with a human-in-the-loop, to specialize the global workflow for a given task. | 2021-04-15 |
20210110302 | RESOURCE-AWARE AUTOMATIC MACHINE LEARNING SYSTEM - The present disclosure relates to a system, a method, and a product for optimizing hyper-parameters for generation and execution of a machine-learning model under constraints. The system includes a memory storing instructions and a processor in communication with the memory. When executed by the processor, the instructions cause the processor to obtain input data and an initial hyper-parameter set; for an iteration, to build a machine learning model based on the hyper-parameter set, evaluate the machine learning model based on the target data to obtain a performance metrics set, and determine whether the performance metrics set satisfies the stopping criteria set. If yes, the instructions cause the processor to perform an exploitation process to obtain an optimal hyper-parameter set, and exit the iteration; if no, perform an exploration process to obtain a next hyper-parameter set, and perform a next iteration with using the next hyper-parameter set as the hyper-parameter set. | 2021-04-15 |
20210110303 | METHOD, APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM FOR TRAINING USER CLICK MODEL - A method, an apparatus, an electronic device and a storage medium for training a user click model, which relate to the artificial intelligence field, are disclosed. The method may include: collecting a plurality of pieces of behavior data from a log database of users on a network, each piece of behavior data including a user's feedback information regarding resources in the network within a preset time period; generating a plurality of resource access features respectively corresponding to the plurality of pieces of behavior data, based on a pre-created header resource library and the plurality of pieces of behavior data; and training the user click model with the plurality of resource access features. The technical solution provides a lossless modeling manner which, compared to the existing modeling manners, may effectively optimize the precision and improve the accuracy of the user click model. | 2021-04-15 |
20210110304 | OPERATIONAL SUPPORT SYSTEM AND METHOD - A system performs operation monitoring in which a learning model in operation is monitored. In the operation monitoring, the system performs a first certainty factor comparison to determine, each time the learning model in operation to which input data is input outputs output data, whether or not a certainty factor of the learning model is below a first threshold. In a case where a result of the first certainty factor comparison is true, the system replaces the learning model in operation with any of candidate learning models having a certainty factor higher than the certainty factor of the learning model in operation in which the result of true is obtained among one or more candidate learning models (one or more learning models each having a version different from a version of the learning model in operation), as a learning model of an operation target. | 2021-04-15 |
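The certainty-factor comparison and model replacement of 20210110304 can be sketched as below; the callable interface returning an `(output, certainty)` pair and the single-score certainty factor are assumptions for the sake of a compact example:

```python
def monitor_and_swap(active, candidates, inputs, threshold):
    """Whenever the in-operation model's certainty factor drops below the
    threshold, replace it with the candidate model achieving the highest
    certainty on the same input (if any candidate beats the active one)."""
    outputs = []
    for x in inputs:
        y, certainty = active(x)
        if certainty < threshold:                      # first certainty comparison
            better = [c for c in candidates if c(x)[1] > certainty]
            if better:
                active = max(better, key=lambda c: c(x)[1])
                y, certainty = active(x)               # re-run with new model
        outputs.append(y)
    return outputs, active
```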
20210110305 | DEVICE MONITORING SYSTEM AND METHOD - The present invention relates to a computer-implemented method of monitoring the performance of a computing device. The method comprises determining an actual device usage for processing a plurality of requests; obtaining a predicted device usage for processing the plurality of requests by inputting a request volume of the plurality of requests into a model of the operation of the computing device; comparing the actual device usage and the predicted device usage; selecting a margin of error for the predicted device usage; and raising an alert if the actual device usage is greater than the predicted device usage and the actual device usage is not within the margin of error. | 2021-04-15 |
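The alert condition in 20210110305 reduces to a small predicate; the abstract does not say whether the margin of error is absolute or relative, so a relative margin is assumed here:

```python
def should_alert(actual, predicted, margin):
    """Raise an alert only when actual device usage exceeds the
    prediction AND falls outside the selected (relative) margin of
    error around the prediction."""
    return actual > predicted and actual > predicted * (1.0 + margin)
```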
20210110306 | META-TRANSFER LEARNING VIA CONTEXTUAL INVARIANTS FOR CROSS-DOMAIN RECOMMENDATION - Systems, apparatuses, methods, and computer-readable media are provided to alleviate data sparsity in cross-domain recommendation systems. In particular, some embodiments are directed to a recommendation framework that addresses data sparsity and data scalability challenges seamlessly by meta-transfer learning contextual invariances cross domain, e.g., from dense source domain to sparse target domain. Other embodiments may be described and/or claimed. | 2021-04-15 |
20210110307 | ELECTRON DENSITY ESTIMATION METHOD, ELECTRON DENSITY ESTIMATION APPARATUS, AND RECORDING MEDIUM - An electron density estimation method includes calculating a first electron density by inputting first input data to an electron density estimator, performing a numerical simulation using the first input data and the first electron density to calculate a second electron density, the numerical simulation being processing in which the first input data and the first electron density are set as initial values and in which electron density update processing using a density functional method is performed one or more times, the second electron density being not a convergence value obtained by the density functional method, performing learning for the estimator by calculating a parameter for the estimator, the parameter minimizing a difference between the first and second electron densities, obtaining, from a database, and inputting second input data to the estimator, in which the parameter is set, to estimate a third electron density, and outputting the third electron density. | 2021-04-15 |
20210110308 | METHODS AND APPARATUS TO FIND OPTIMIZATION OPPORTUNITIES IN MACHINE-READABLE INSTRUCTIONS - Methods, apparatus, systems and articles of manufacture are disclosed for finding optimization opportunities in machine-readable instructions, the apparatus comprising, a cluster creator to utilize a semantic similarity model to create a first cluster of semantically similar machine-readable instruction snippets selected from a set of machine-readable instructions, a combination generator to identify a first combination of a subset of the semantically similar machine-readable instruction snippets from the first cluster of semantically similar machine-readable instruction snippets, and a snippet analyzer to utilize a syntactic similarity model to determine a syntactic similarity of the first combination of a subset of the semantically similar machine-readable instruction snippets from the first cluster of semantically similar machine-readable instruction snippets. | 2021-04-15 |
20210110309 | Data Processing System with Machine Learning Engine to Provide Output Generating Functions - Systems, methods, computer-readable media, and apparatuses for identifying and executing one or more interactive condition evaluation tests to generate an output are provided. In some examples, user information may be received by a system and one or more interactive condition evaluation tests may be identified. An instruction may be transmitted to a computing device of a user and executed on the computing device to enable functionality of one or more sensors that may be used in the identified tests. A user interface may be generated including instructions for executing the identified tests. Upon initiating a test, data may be collected from one or more sensors in the computing device. The data collected may be transmitted to the system and may be processed using one or more machine learning datasets to generate an output. | 2021-04-15 |
20210110310 | METHODS AND APPARATUS TO VERIFY TRAINED MODELS IN AN EDGE ENVIRONMENT - Methods and apparatus to verify trained models in edge environments are disclosed. An example apparatus to validate a trained model in an edge environment includes an attestation verifier to determine an attestation score of the model received at a first appliance, the attestation score calculated at a second appliance different from the first appliance, a comparator to compare the attestation score to a threshold, a validator to validate the model based on the comparison, and an executor to at least one of execute or deploy the model based on the validation. | 2021-04-15 |
20210110311 | WATERMARK UNIT FOR A DATA PROCESSING ACCELERATOR - In one embodiment, a computer-implemented method performed by a data processing (DP) accelerator, the method includes receiving, at the DP accelerator, first data representing a set of training data from a host processor and performing training of an artificial intelligence (AI) model based on the set of training data within the DP accelerator. The method further includes implanting, by the DP accelerator, a watermark within the trained AI model and transmitting second data representing the trained AI model having the watermark implanted therein to the host processor. In an embodiment, the method further includes receiving a pre-trained machine learning model; and performing training for the pre-trained AI model based on the set of training data within the DP accelerator. | 2021-04-15 |
20210110312 | METHOD AND SYSTEM FOR ARTIFICIAL INTELLIGENCE MODEL TRAINING USING A WATERMARK-ENABLED KERNEL FOR A DATA PROCESSING ACCELERATOR - In one embodiment, a computer-implemented method performed by a data processing (DP) accelerator, includes receiving, at the DP accelerator, first data representing a set of training data from a host processor; receiving, at the DP accelerator, a watermark kernel from the host processor; and executing the watermark kernel within the DP accelerator on an artificial intelligence (AI) model. The watermark kernel, when executed, is configured to: generate a watermark, train the AI model using the set of training data, and implant the watermark within the AI model during training of the AI model. The DP accelerator then transmits second data representing the trained AI model having the watermark implanted therein to the host processor. In an embodiment, the method further includes receiving a pre-trained AI model and the training is performed for the pre-trained AI model. | 2021-04-15 |
20210110313 | COMPUTER-BASED SYSTEMS, COMPUTING COMPONENTS AND COMPUTING OBJECTS CONFIGURED TO IMPLEMENT DYNAMIC OUTLIER BIAS REDUCTION IN MACHINE LEARNING MODELS - Systems and methods include processors for receiving training data for a user activity; receiving bias criteria; determining a set of model parameters for a machine learning model including: (1) applying the machine learning model to the training data; (2) generating model prediction errors; (3) generating a data selection vector to identify non-outlier target variables based on the model prediction errors; (4) utilizing the data selection vector to generate a non-outlier data set; (5) determining updated model parameters based on the non-outlier data set; and (6) repeating steps (1)-(5) until a censoring performance termination criterion is satisfied; training classifier model parameters for an outlier classifier machine learning model; applying the outlier classifier machine learning model to activity-related data to determine non-outlier activity-related data; and applying the machine learning model to the non-outlier activity-related data to predict future activity-related attributes for the user activity. | 2021-04-15 |
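The iterative censoring loop described in the abstract above can be sketched roughly as follows. The linear model, the percentile-based bias criterion (`keep_frac`), and all names are illustrative assumptions for this sketch, not the patent's actual formulation:

```python
import numpy as np

def censored_fit(X, y, keep_frac=0.9, max_iter=20):
    """Iteratively fit a model, censoring the largest-error points each
    pass, until the non-outlier selection stabilizes (steps 1-6)."""
    keep = np.ones(len(y), dtype=bool)            # data selection vector
    w = np.zeros(X.shape[1])
    for _ in range(max_iter):
        # (1) apply the model to the currently selected training data
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        # (2) generate model prediction errors over all points
        err = np.abs(X @ w - y)
        # (3) bias criterion: keep points below an error percentile
        cutoff = np.quantile(err, keep_frac)
        new_keep = err <= cutoff
        # (6) terminate when the censoring stops changing
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep                           # (4)-(5) update selection
    return w, keep
```

On data with a few gross outliers, the selection vector converges quickly and the refit parameters track the inlier trend.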
20210110314 | BROADCAST IDENTIFIER BASED NOTIFICATION - A computer implemented method includes receiving a broadcast identifier at a reservation system that was broadcast from a user mobile device via a wireless transmission, generating a notification that a user associated with the user mobile device has arrived, and providing the notification to a user interface to indicate that the user has arrived. The broadcast identifier may be a MAC address received via WiFi by a router and forwarded to the reservation system. | 2021-04-15 |
20210110315 | COMPATIBILITY OF RIDE HAILING PASSENGERS - Disclosed are embodiments for determining compatibility between ride hailing passengers. Depending on a compatibility between two passengers, the two passengers are conditionally assigned to a common vehicle. The compatibility is determined by comparing a set of largest probability states of a first of the two passengers to corresponding states of the second passenger. A compatibility score is then determined based on probabilities that the second passenger shares the set of states with the first passenger. | 2021-04-15 |
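The compatibility computation described above — comparing a first passenger's largest-probability states against the second passenger's probabilities for those same states — might look like the following sketch. Averaging the second passenger's probabilities is an assumed scoring rule; the function and parameter names are hypothetical:

```python
def compatibility(p1_states, p2_states, top_k=3):
    """Take passenger 1's top-k most probable states and score
    compatibility as the mean of passenger 2's probabilities for those
    same states (the mean is an assumed aggregation, not the patent's)."""
    top = sorted(p1_states, key=p1_states.get, reverse=True)[:top_k]
    return sum(p2_states.get(s, 0.0) for s in top) / len(top)
```

A dispatcher could then assign two passengers to a common vehicle only when this score clears a threshold.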
20210110316 | PERSONAL AND/OR TEAM LOGISTICAL SUPPORT APPARATUS, SYSTEM, AND METHOD - Disclosed are a method, a device, a system and/or a manufacture of personal and/or team logistical support. In one embodiment, a system for geospatial reminder and documentation includes a server communicatively coupled to a device (e.g., a mobile device, a wearable device). A spatial documentation routine of the server receives a documentation placement request, including documentation content data and first location data from the device, and generates spatial documentation data. A documentation awareness routine of the server receives second location data from the device and determines that a second coordinate of the second location data is within a threshold distance of a first coordinate of the first location data. The server transmits a first indication instruction to trigger an awareness indicator on the device, such as a vibration, to alert the user to the documentation in context. A documentation retrieval routine may then respond to requests for the documentation. | 2021-04-15 |
20210110317 | SUMMARIZING BUSINESS PROCESS MODELS - One embodiment provides a method, including: obtaining a business process model representing a process flow having a plurality of steps for performing a business process, the business process model being a graphical representation of the process flow and including geometrical shapes representing activities of the process flow and edges representing a temporal ordering of the activities of the process flow; identifying important activities of the business process model; and generating a summary business process model from the business process model, wherein the summary business process model comprises nodes representing the important activities and excludes other nodes included within the business process model. | 2021-04-15 |
20210110318 | AUTOMATIC ANALYSIS, PRIORITIZATION, AND ROBOT GENERATION FOR ROBOTIC PROCESS AUTOMATION - Systems and methods for analyzing, prioritizing, and potentially automatically generating robots implementing processes and/or process flows for robotic process automation (RPA) are disclosed. Artificial intelligence (AI) may be used to analyze business processes and/or process flows and look for possible candidates for automation or improvement of existing automations. Listeners (e.g., robots, separate software applications, operating system extensions, etc.) may be employed to listen in the background on user computing systems to mine data pertaining to workflow effectiveness and/or to identify new processes and/or process flows that may improve return on investment (ROI) for RPA. For example, when automations are placed into production via robots implementing RPA workflows on user computing systems, listeners may be added to ensure that the process(es) and/or process flow(s) are correctly and accurately performing what they are intended for and/or provide data for automation of new processes and/or process flows. | 2021-04-15 |
20210110319 | FRAMEWORK TO QUANTIFY CYBERSECURITY RISKS AND CONSEQUENCES FOR CRITICAL INFRASTRUCTURE - Methods can include accessing an organizational framework describing an organization, wherein the organizational framework comprises one or more relational matrices defining matrixed interdependencies between business functions, business processes, engineering applications, assets, responsible entities, and facilities of the organization, and using the relational matrices to compute a criticality of an asset, engineering application, or business process, and using a computed criticality to compute a value at risk or a value of a consequence to the organization. | 2021-04-15 |
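The matrixed-interdependency computation above can be illustrated as a chain of matrix products: weights over business functions propagate through function-to-process and process-to-asset matrices to yield per-asset criticality, which then scales asset value into a value at risk. The matrices, weights, and values below are invented toy data, and the simple product chain is an assumed reading of the abstract:

```python
import numpy as np

# Hypothetical relational matrices: entry (i, j) = 1 when row item i
# depends on column item j.
F2P = np.array([[1, 0], [1, 1]])    # business functions -> processes
P2A = np.array([[1, 1], [0, 1]])    # processes -> assets
func_weight = np.array([0.7, 0.3])  # importance of each business function

# Criticality propagates down the interdependency chain.
proc_crit = func_weight @ F2P       # per-process criticality
asset_crit = proc_crit @ P2A        # per-asset criticality
asset_value = np.array([100.0, 250.0])
value_at_risk = asset_crit * asset_value
```

Here the second asset ends up more critical (1.3 vs. 1.0) because both processes depend on it, so its value at risk is amplified accordingly.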
20210110320 | SYSTEM AND METHOD FOR MANAGING ORGANIZATIONS - A system and method for managing business organizations. A database management system establishes a database including a plurality of database records, each record associated with one of a plurality of separate business organizations and including values, at a given point in time and for the respective business organization, of some or all of the quantitative data associated with financial risk. The records are applied to a financial risk prediction model to assess financial risk for the business organization. | 2021-04-15 |
20210110321 | DISPLAY DEVICE AND METHOD FOR CONTROLLING DISPLAY DEVICE - A display device includes a display and a processor. The processor estimates, based on prediction information enabling a prediction of a visiting state and identification information enabling unique identification of a person, the visiting state of the person identified by the identification information, and transmits, at a time when the person is estimated to visit a location based on the estimated visiting state of the person, the identification information to a terminal, wherein the identification information comprises authentication data for personal authentication of the person identified by the identification information. | 2021-04-15 |
20210110322 | Computer Implemented Method for Detecting Peers of a Client Entity - The present disclosure relates to the field of data analytics using machine learning techniques and discloses a system and a computer-implemented method for detecting peers of a client entity in real time. A peer analyzing system retrieves and shortlists target entities based on transaction data related to the target entities and input data received from the client entity. Further, the peer analyzing system may generate a plurality of clusters of the shortlisted entities by applying a predefined cluster compliance rule. Furthermore, a query point of a plurality of parameters of transaction data for each of the plurality of clusters may be determined based on normalized values of the corresponding plurality of parameters determined for the client entity. Further, the peer analyzing system may determine peers of the client entity based on a relevance score and a proximity score determined for each of the plurality of clusters based on the query point and the normalized values of the plurality of parameters. | 2021-04-15 |
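The proximity part of the peer-detection abstract above can be sketched as nearest-centroid matching over normalized parameters. Euclidean distance is an assumed stand-in for the patent's proximity score, and all names are illustrative:

```python
import numpy as np

def nearest_cluster(query, centroids):
    """Given the client entity's normalized query point and one centroid
    per cluster of shortlisted entities, return the index of the closest
    cluster plus all distances (Euclidean distance is an assumed metric)."""
    d = np.linalg.norm(np.asarray(centroids) - np.asarray(query), axis=1)
    return int(np.argmin(d)), d
```

The entities in the winning cluster would then be the candidate peers, possibly re-ranked by a separate relevance score.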
20210110323 | OPTIMIZING CHARGING, FUELING, AND PARKING OVERHEADS OF FLEET VEHICLES IN A MAAS ARCHITECTURE - A system for managing a fleet of vehicles in a MaaS network includes a scheduling subsystem configured to retrieve vehicle parameters associated with the fleet of vehicles, the vehicle parameters including a range of travel estimate for each of the vehicles in the fleet. The subsystem retrieves infrastructure resource availability information associated with at least one infrastructure resource used by the fleet of vehicles, and historical usage information associated with the at least one infrastructure resource. The subsystem applies a machine learning technique using the vehicle parameters, the infrastructure resource availability information, and the historical usage information to generate a scheduling instruction. The scheduling instruction is communicated to the fleet of vehicles, the scheduling instruction for scheduling usage of the at least one infrastructure resource by one or more of the vehicles in the fleet. | 2021-04-15 |
20210110324 | SALON SUSTAINABILITY SYSTEM - In one embodiment, a salon sustainability resource management system is described. The salon sustainability resource management system includes one or more backwash stations, a sustainability status unit communicatively coupled to each of the one or more backwash stations. The sustainability status unit is configured to generate a virtual display including one or more instances of a current sustainability status associated with a utilization of at least one resource. An enticement unit is configured to activate one or more appraisals associated with the resource utilization at the backwash station. An integrated verdant unit is configured to predict a wholistic sustainability status associated with the current sustainability status at the sustainability status unit associated with each backwash station. | 2021-04-15 |
20210110325 | ON-DEMAND DYNAMIC PROVISIONING OF PARKING SPACES - Dynamic provisioning of parking spaces on-demand can include receiving search requests from first user devices, searching a register based on user types and locations associated with the search requests, and causing search results to be displayed on graphical user interfaces (“GUI”) of the first user devices. Linked user options can be determined with a server based on space assets selected from the search results, user registration profiles, or the search requests. Correspondence can be sent between the first user devices and second user devices associated with users that are linked to users of the first user devices or control access to selected space assets. Linked user options can be displayed on GUIs of first and second user devices based on the correspondence and space assets selected. Space assets can be provisioned by the server based on provisioning agreements facilitated through the first and second user devices. | 2021-04-15 |
20210110326 | ROUTE-BASED DIGITAL SERVICE MANAGEMENT - Various systems and methods for route-based digital service management are described herein, comprising receiving a request for service, such as a user request for service from a MaaS or digital service provider, estimating digital service usage with respect to the request, determining a MaaS route using the request and the estimated digital service usage, selecting an orchestration strategy comprising a server type using the estimated digital service usage, and scheduling a MaaS vehicle for the determined MaaS route in response to the request using the selected orchestration strategy and the determined candidate MaaS route. | 2021-04-15 |
20210110327 | KEEPING TRACK OF IMPORTANT TASKS - A data processing system including a processor and machine-readable media including instructions for the processor. When executed by the processor, the instructions cause the processor to monitor events in a plurality of communications channels associated with a user, identify monitored events that are determined to be pertinent to the user, sort the identified events by priority to create a prioritized list of events, monitor interactions with the data processing system by the user for a task initiation signal, and, in response to detecting the task initiation signal, cause display of the prioritized list of events to the user on a display device. | 2021-04-15 |
20210110328 | TECHNIQUES FOR CONFIGURING WORKFLOW EVENT PROCESSING AND IDENTIFIER FEDERATION - Event processing techniques for updating a database in real time based on events in a continuous event stream are disclosed. The techniques can update the database to incorporate information from thousands of received events per second. The events can include metrics measuring milestones for an organizational process defined by a user. Moreover, multiple streams can include metrics from many tenants concurrently. The techniques include receiving, from a first user device, information identifying a group identifier for a first action object. The techniques then include assigning the identifier to the first action object and to at least one other action object. The techniques then include transmitting, to a service provider, data identifying the assignment of the group identifier, receiving second information identifying events processed by the service provider, identifying events corresponding to the identifier, and generating a user interface configured to present elements corresponding to the identified events. | 2021-04-15 |
20210110329 | METHOD AND SYSTEM FOR IMPROVEMENT PROFILE GENERATION IN A SKILLS MANAGEMENT PLATFORM - A system and method are presented for improvement profile generation in a skills management platform, using past data and a set of KPIs. Variance is calculated with a basic variance formula, and these values are used to generate a strand. A strand may be defined as a collection of KPIs, each weighted to show the importance of that KPI for that agent type. KPIs can be selected for strand generation, and the strand is generated from those KPIs, considering the normalized variance of each KPI. An agent's improvement possibilities may also be determined using the generated strand. | 2021-04-15 |
20210110330 | SKILL-SET SCORE BASED INTELLIGENT CASE ASSIGNMENT SYSTEM - Described embodiments provide systems and methods for routing cases using a skill score. A system receives a self-evaluation skill score of a user for each feature of one or more products for which the user provides support. The system identifies a number of support cases handled by the user. The system determines a case-based skill score of the user for each feature. The system determines a skill score of the user for each feature based on at least the self-evaluation skill score of the user for each feature and the case-based skill score of the user for each feature. The system selects, responsive to receiving a request for support, the user from a plurality of users to support the request based at least on the skill score of the user for the feature. The system routes the request to the user. | 2021-04-15 |
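The score combination and routing described above can be sketched in a few lines. The weighted average is an assumed blending rule, and the function names, `alpha`, and the feature keys are all hypothetical:

```python
def combined_skill(self_eval, case_based, alpha=0.5):
    """Blend per-feature self-evaluation and case-history skill scores
    (a weighted average is an assumed rule; the patent only says the
    final score is based on both)."""
    return {f: alpha * self_eval[f] + (1 - alpha) * case_based[f]
            for f in self_eval}

def route(feature, user_scores):
    """Route a support request to the user with the highest combined
    skill score for the requested feature."""
    return max(user_scores, key=lambda u: user_scores[u][feature])
```

A request tagged with a product feature is then sent to whichever supporting user scores highest on that feature.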
20210110331 | Method and System for Quantifying Workforce Transformation of an Organization - A system for quantifying workforce transformation of an organization includes a server that determines current skills of each individual of the organization. The server determines a first score for the organization based on the current skills to classify the organization into a first level of digital awareness of a plurality of levels of digital awareness. The server predicts based on the first level of digital awareness and the current skills, a skill gap, a learning rate, and future skills for each individual, and recommends a training plan for each individual to acquire the future skills. The server assesses the training plan periodically, to determine a second score for the organization, and classifies based on the second score, the organization into a second level of digital awareness of the plurality of levels of digital awareness, thereby quantifying the workforce transformation of the organization. | 2021-04-15 |
20210110332 | COGNITIVE ACCOUNT STAFFING PLANNER - Staffing is allocated by expressing a risk of violating a service level agreement for a given service line as a function of a number of full-time equivalents allocated to the given service line and a number of service tickets received at the given service line per unit of time per ticket severity level. The number of service tickets are processed by the number of full-time equivalents allocated to the given service line. Risks are summed across a plurality of distinct service lines to generate a total risk. Total risk is minimized by adjusting the number of full time equivalents allocated to each given service line across all services lines subject to a pre-determined reduction in total cost to process all service tickets by the number of full-time equivalents across all service lines and a pre-determined range of a permissible number of full-time equivalents for each service line. | 2021-04-15 |
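The constrained minimization above — total risk summed over service lines, adjusted by moving full-time equivalents (FTEs) within per-line bounds and a total budget — can be approximated with a greedy sketch. The risk function's shape, the greedy strategy, and every name here are assumptions; the patent does not specify its optimizer:

```python
def total_risk(ftes, rates, risk_fn):
    """Sum SLA-violation risk across service lines; per-line risk is a
    function of allocated FTEs and the ticket arrival rate."""
    return sum(risk_fn(f, r) for f, r in zip(ftes, rates))

def allocate(budget, rates, bounds, risk_fn):
    """Greedy allocation under a total-FTE budget: repeatedly give the
    next FTE to the line where it reduces risk the most, respecting
    per-line (min, max) bounds. A sketch, not the patent's method."""
    ftes = [lo for lo, _ in bounds]          # start at each line's minimum
    spent = sum(ftes)
    while spent < budget:
        best, gain = None, 0.0
        for i, (lo, hi) in enumerate(bounds):
            if ftes[i] < hi:
                g = risk_fn(ftes[i], rates[i]) - risk_fn(ftes[i] + 1, rates[i])
                if g > gain:
                    best, gain = i, g
        if best is None:                     # no line can absorb more FTEs
            break
        ftes[best] += 1
        spent += 1
    return ftes
```

With a risk function that decays in FTEs, the busier line soaks up most of the budget, as expected.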
20210110333 | EX-WAREHOUSING METHOD AND DEVICE - An ex-warehousing method and device relate to automated logistics technology and improve ex-warehousing efficiency. The ex-warehousing method includes receiving an order for to-be-ex-warehoused goods during an ex-warehousing process… | 2021-04-15 |
20210110334 | WAREHOUSE ORDER PICKING OPTIMIZATION SYSTEM AND METHOD - Exemplary system and method embodiments described and shown herein are directed to optimizing warehouse picking operations. Exemplary system and method embodiments employ order allocation optimization and/or order grouping optimization that are individually or collectively usable to allocate and group warehouse orders in a manner that minimizes picker travel and maximizes labor productivity. | 2021-04-15 |