18th week of 2020 patent application highlights part 58
Patent application number | Title | Published |
20200134439 | MACHINE-LEARNING TECHNIQUES FOR MONOTONIC NEURAL NETWORKS - In some aspects, a computing system can generate and optimize a neural network for risk assessment. The neural network can be trained to enforce a monotonic relationship between each of the input predictor variables and an output risk indicator. The training of the neural network can involve solving an optimization problem under a monotonic constraint. This constrained optimization problem can be converted to an unconstrained problem by introducing a Lagrangian expression and by introducing a term approximating the monotonic constraint. Additional regularization terms can also be introduced into the optimization problem. The optimized neural network can be used both for accurately determining risk indicators for target entities using predictor variables and determining explanation codes for the predictor variables. Further, the risk indicators can be utilized to control the access by a target entity to an interactive computing environment for accessing services provided by one or more institutions. | 2020-04-30 |
20200134440 | MACHINE-LEARNING-BASED ETHICS COMPLIANCE EVALUATION PLATFORM - Methods, systems, and computer-readable storage media for receiving digital content, receiving a set of locales, generating a set of ethics ratings by processing the digital content through a plurality of machine-learning (ML) models to provide a set of ethics ratings, each ML model in the plurality of ML models being specific to a locale of the set of locales, each ethics rating in the set of ethics ratings being specific to a locale of the set of locales, and providing the set of ethics ratings for the digital content for the selected locales to the user. | 2020-04-30 |
20200134441 | MULTI-DOMAIN SERVICE ASSURANCE USING REAL-TIME ADAPTIVE THRESHOLDS - Techniques for adaptive thresholding are provided. First and second data points are received. A plurality of data points are identified, where the plurality of data points corresponds to timestamps associated with the first and second data points. At least one cluster is generated for the plurality of data points based on a predefined cluster radius. Upon determining that the first data point is outside of the cluster, the first data point is labeled as anomalous. A predicted value is generated for the second data point, based on processing data points in the cluster using a machine learning model, and a deviation between the predicted value and an actual value for the second data point is computed. Upon determining that the deviation exceeds a threshold, the second data point is labeled as anomalous. Finally, computing resources are reallocated, based on at least one of the anomalous data points. | 2020-04-30 |
20200134442 | TASK DETECTION IN COMMUNICATIONS USING DOMAIN ADAPTATION - Generally discussed herein are devices, systems, and methods for task classification. A method can include modifying a representation of a source sentence of a source sample from a source corpus to more closely resemble a representation of target sentences of target samples from a target corpus, operating, using a machine learning model trained using the modified representation of the source sentence, on the target sample to generate a task label, the task label indicating whether the target sample includes a task, and causing a personal information manager (PIM) to generate a reminder, based on whether the target sample includes the task. | 2020-04-30 |
20200134443 | STORING NEURAL NETWORKS AND WEIGHTS FOR NEURAL NETWORKS - Systems and methods are disclosed for storing neural networks and weights for neural networks. In some implementations, a method is provided. The method includes storing a plurality of weights of a neural network comprising a plurality of nodes and a plurality of connections between the plurality of nodes. Each weight of the plurality of weights is associated with a connection of the plurality of connections. The neural network comprises a binarized neural network. The method also includes receiving input data to be processed by the neural network. The method further includes determining whether a set of weights of the plurality of weights comprises one or more errors. The method further includes refraining from using the set of weights to process the input data using the neural network in response to determining that the set of weights comprises the one or more errors. | 2020-04-30 |
20200134444 | SYSTEMS AND METHODS FOR DOMAIN ADAPTATION IN NEURAL NETWORKS - A domain adaptation module is used to optimize a first domain derived from a second domain using respective outputs from respective parallel hidden layers of the domains. | 2020-04-30 |
20200134445 | ARCHITECTURE FOR DEEP Q LEARNING - The deep Q learning technique trains weights of an artificial neural network using a number of unique features, including separate target and prediction networks, random experience replay to avoid issues with temporally correlated training samples, and others. A hardware architecture is described that is tuned to perform deep Q learning. Inference cores use a prediction network to determine an action to apply to an environment. A replay memory stores the results of the action. Training cores use a loss function derived from outputs from both the target and prediction networks to update weights of the prediction neural network. A high speed copy engine periodically copies weights from the prediction neural network to the target neural network. | 2020-04-30 |
20200134446 | SCALABLE ARTIFICIAL INTELLIGENCE MODEL GENERATION SYSTEMS AND METHODS FOR HEALTHCARE - Systems and methods to generate artificial intelligence models with synthetic data are disclosed. An example system includes a deep neural network (DNN) generator to generate a first DNN model using first real data. The example system includes a synthetic data generator to generate first synthetic data from the first real data, the first synthetic data to be used by the DNN generator to generate a second DNN model. The example system includes an evaluator to evaluate performance of the first and second DNN models to determine whether to generate second synthetic data. The example system includes a synthetic data aggregator to aggregate third synthetic data and fourth synthetic data from a plurality of sites to form a synthetic data set. The example system includes an artificial intelligence model deployment processor to deploy an artificial intelligence model trained and tested using the synthetic data set. | 2020-04-30 |
20200134447 | SYNCHRONIZED INPUT FEEDBACK FOR MACHINE LEARNING - A method and system for providing synchronized input feedback, comprising receiving an input event, encoding the input event in an output stream, wherein the encoding of the input event is synchronized to a specific event, and reproducing the output stream through an output device, whereby the encoded input event in the reproduced output stream is imperceptible to the user. | 2020-04-30 |
20200134448 | QUANTIZING NEURAL NETWORKS WITH BATCH NORMALIZATION - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training a neural network that has one or more batch normalized neural network layers for use by a quantized inference system. One of the methods includes receiving a first batch of training data; determining batch normalization statistics for the first batch of training data; determining a correction factor from the batch normalization statistics for the first batch of training data and the long-term moving averages of the batch normalization statistics; generating batch normalized weights from the floating point weights for the batch normalized first neural network layer, comprising applying the correction factor to the floating point weights of the batch normalized first neural network layer; quantizing the batch normalized weights; determining a gradient of an objective function; and updating the floating point weights using the gradient. | 2020-04-30 |
20200134449 | TRAINING OF MACHINE READING AND COMPREHENSION SYSTEMS - A method of using a first neural network includes: by the first neural network, receiving a text; by the first neural network, receiving a question concerning the text; and by the first neural network, determining an answer to the question using the text, where the first neural network is trained to answer the question about the text adversarially by a second neural network that is trained to maximize a likelihood of failure of the first neural network to correctly answer questions. | 2020-04-30 |
20200134450 | PREDICTING STORAGE NEED IN A DISTRIBUTED NETWORK - Systems and methods are disclosed for predicting storage need for an acquisition system within a distributed data storage network. An example method may include receiving, from a distributed storage network, node data associated with multiple nodes on the distributed storage network; receiving, from a first node in the distributed storage network, first user data associated with one or more acquisitions at a non-mobile platform; receiving, from a second node in the distributed storage network, second user data associated with one or more acquisitions at a mobile platform; determining, using the node data, first user data, and second user data, an estimated future storage need for each of the multiple nodes; generating a data transition scheme based on the estimated future storage needs; and implementing the data transition scheme into the distributed storage network. | 2020-04-30 |
20200134451 | DATA SPLITTING BY GRADIENT DIRECTION FOR NEURAL NETWORKS - Systems and methods improve the performance of a network that has converged such that the gradient of the network and all the partial derivatives are zero (or close to zero) by splitting the training data such that, on each subset of the split training data, some nodes or arcs (i.e., connections between a node and previous or subsequent layers of the network) have individual partial derivative values that are different from zero on the split subsets of the data, although their partial derivatives averaged over the whole set of training data are close to zero. The present system and method can create a new network by splitting the candidate nodes or arcs that diverge from zero and then train the resulting network with each selected node trained on the corresponding cluster of the data. Because the direction of the gradient is different for each of the nodes or arcs that are split, the nodes and their arcs in the new network will train to be different. Therefore, the new network is not at a stationary point. | 2020-04-30 |
20200134452 | CONTROL DEVICE - A control device mounted in a vehicle in which at least one controlled part is controlled based on an output parameter obtained by inputting input parameters to a learned model using a neural network, provided with a parked period predicting part predicting future parked periods of the vehicle and a learning plan preparing part preparing a learning plan for performing relearning of the learned model during the future parked periods based on results of prediction of the future parked periods. | 2020-04-30 |
20200134453 | LEARNING CURVE PREDICTION APPARATUS, LEARNING CURVE PREDICTION METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM - A device for shortening time for learning curve prediction includes a sampler, a learning curve predictor, a learning executor, and a learning curve calculator. The sampler samples a weight parameter of a parameter model which outputs a parameter of a learning curve model of a neural network (NNW) on the basis of a set value of a hyperparameter of the NNW. The learning curve predictor calculates a prediction learning curve of the NNW on the basis of the sampled weight parameter and an actual learning curve of the NNW. The learning executor advances learning in the NNW. The learning curve calculator calculates an actual learning curve resulting from the advance of the learning in the NNW. The learning curve predictor updates the prediction learning curve of the NNW on the basis of the weight parameter sampled before the learning advances and the actual learning curve calculated after the learning advances. | 2020-04-30 |
20200134454 | APPARATUS AND METHOD FOR TRAINING DEEP LEARNING MODEL - An apparatus and a method for training a deep learning model are disclosed. According to the disclosed embodiments, performance of deep learning can be enhanced by performing bidirectional training of learning information on the target dataset on the basis of the source dataset and learning information on the source dataset on the basis of the target dataset. | 2020-04-30 |
20200134455 | APPARATUS AND METHOD FOR TRAINING DEEP LEARNING MODEL - An apparatus and method for training a deep learning model are provided. According to the disclosed embodiments, a deep learning model may be trained using learning data regarding problems of various fields so that there is an ample amount of data on which the model is trained and the performance of the trained model can be improved. | 2020-04-30 |
20200134456 | VIDEO DATA PROCESSING METHOD AND APPARATUS, AND READABLE STORAGE MEDIUM - The present disclosure provides a video data processing method and device, and a readable storage medium. The technique includes: processing input video data to be processed, according to a preset trained deep learning algorithm model, to obtain a label vector of the video data; determining, according to a label vector of each music data in a music library and a preset recommendation algorithm, a recommendation score of the label vector of each music data with respect to the label vector of the video data; and taking, according to each recommendation score, the music data matching the video data as background music. By combining a deep learning algorithm model with a recommendation algorithm in this way, the efficiency of finding background music for video data in the music library is effectively improved, and labor cost is reduced. | 2020-04-30 |
20200134457 | METHOD FOR DETERMINING AT LEAST ONE INDICATION OF AT LEAST ONE CHANGE - Provided is a method for determining at least one indication of at least one change, having the steps of receiving at least one input data record having the at least one change and associated data, and determining the at least one indication of the at least one change by applying a learning-based approach to the at least one received input data record. The invention is also directed to a determination unit and a computer program product. | 2020-04-30 |
20200134458 | METHODS, SYSTEMS, AND ARTICLES OF MANUFACTURE TO AUTONOMOUSLY SELECT DATA STRUCTURES - Methods, systems, and articles of manufacture to autonomously select data structures are disclosed. An example apparatus includes an ordinal assigner to assign training code operations to respective first ordered values, and assign candidate data structure types to respective second ordered values, a filter generator to, for a first instruction of the training code operations, generate a Bloom filter bit vector pattern based on (a) one of the first ordered values, (b) one of the second ordered values corresponding to a first one of the candidate data structure types, and (c) a size of the first instruction, a label generator to generate a first model training input feature vector based on the Bloom filter bit vector pattern, data corresponding to the first instruction, and a performance metric of the first one of the candidate data structure types, and a neural network manager to train the data structure selection model with the first model training input feature vector. | 2020-04-30 |
20200134459 | ACTIVATION ZERO-BYPASS AND WEIGHT PRUNING IN NEURAL NETWORKS FOR VEHICLE PERCEPTION SYSTEMS - In one example implementation according to aspects of the present disclosure, a computer-implemented method includes capturing a plurality of images at a camera associated with a vehicle and storing image data associated with the plurality of images to a memory. The method further includes dispatching vehicle perception tasks to a plurality of processing elements of an accelerator in communication with the memory. The method further includes performing, by at least one of the plurality of processing elements, the vehicle perception tasks for the vehicle perception using a neural network, wherein performing the vehicle perception tasks comprises performing an activation bypass for values below a first threshold, and performing weight pruning of synapses and neurons of the neural network based at least in part on a second threshold. The method further includes controlling the vehicle based at least in part on a result of performing the vehicle perception tasks. | 2020-04-30 |
20200134460 | PROCESSING METHOD AND ACCELERATING DEVICE - The present disclosure provides a processing device including: a coarse-grained pruning unit configured to perform coarse-grained pruning on a weight of a neural network to obtain a pruned weight, an operation unit configured to train the neural network according to the pruned weight. The coarse-grained pruning unit is specifically configured to select M weights from the weights of the neural network through a sliding window, and when the M weights meet a preset condition, all or part of the M weights may be set to 0. The processing device can reduce the memory access while reducing the amount of computation, thereby obtaining an acceleration ratio and reducing energy consumption. | 2020-04-30 |
20200134461 | DYNAMIC ADAPTATION OF DEEP NEURAL NETWORKS - Techniques are disclosed for training a deep neural network (DNN) for reduced computational resource requirements. A computing system includes a memory for storing a set of weights of the DNN. The DNN includes a plurality of layers. For each layer of the plurality of layers, the set of weights includes weights of the layer and a set of bit precision values includes a bit precision value of the layer. The weights of the layer are represented in the memory using values having bit precisions equal to the bit precision value of the layer. The weights of the layer are associated with inputs to neurons of the layer. Additionally, the computing system includes processing circuitry for executing a machine learning system configured to train the DNN. Training the DNN comprises optimizing the set of weights and the set of bit precision values. | 2020-04-30 |
20200134462 | PERFORM DESTAGES OF TRACKS WITH HOLES IN A STORAGE SYSTEM BY TRAINING A MACHINE LEARNING MODULE - A machine learning module receives inputs comprising attributes of a storage controller, where the attributes affect performance parameters for performing stages and destages in the storage controller. In response to an event, the machine learning module generates, via forward propagation, an output value that indicates whether to fill holes in a track of a cache by staging data to the cache prior to destage of the track. A margin of error is calculated based on comparing the generated output value to an expected output value, where the expected output value is generated from an indication of whether it is correct to fill holes in a track of the cache by staging data to the cache prior to destage of the track. An adjustment is made of weights of links that interconnect nodes of the plurality of layers via back propagation to reduce the margin of error. | 2020-04-30 |
20200134463 | Latent Space and Text-Based Generative Adversarial Networks (LATEXT-GANs) for Text Generation - According to embodiments, an encoder neural network receives a one-hot representation of a real text. The encoder neural network outputs a latent representation of the real text. A decoder neural network receives random noise data or artificial code generated by a generator neural network from random noise data. The decoder neural network outputs a softmax representation of artificial text. The decoder neural network receives the latent representation of the real text. The decoder neural network outputs a reconstructed softmax representation of the real text. A hybrid discriminator neural network receives a first combination of the reconstructed softmax representation (the soft-text) and the latent representation of the real text and a second combination of the softmax representation of artificial text and the artificial code. The hybrid discriminator neural network outputs a probability indicating whether the second combination is similar to the first combination. Additional embodiments for utilizing latent representation are also disclosed. | 2020-04-30 |
20200134464 | DYNAMIC BOLTZMANN MACHINE FOR ESTIMATING TIME-VARYING SECOND MOMENT - A computer-implemented method includes employing a dynamic Boltzmann machine (DyBM) to predict a higher-order moment of time-series datasets. The method further includes acquiring the time-series datasets transmitted from a source node to a destination node of a neural network including a plurality of nodes, learning, by the processor, a time-series generative model based on the DyBM with eligibility traces, and obtaining, by the processor, parameters of a generalized auto-regressive heteroscedasticity (GARCH) model to predict a time-varying second-order moment of the time-series datasets. | 2020-04-30 |
20200134465 | METHOD AND APPARATUS FOR RECONSTRUCTING 3D MICROSTRUCTURE USING NEURAL NETWORK - A method of generating a 3D microstructure using a neural network includes configuring an initial 3D microstructure; obtaining a plurality of cross-sectional images by disassembling the initial 3D microstructure in at least one direction of the initial 3D microstructure; obtaining first output feature maps with respect to at least one layer that constitutes the neural network by inputting each of the cross-sectional images to the neural network; obtaining second output feature maps with respect to at least one layer by inputting a 2D original image to the neural network; generating a 3D gradient by applying a loss value to a back-propagation algorithm after calculating the loss value by comparing the first output feature maps with the second output feature maps; and generating a final 3D microstructure based on the 2D original image by applying the 3D gradient to the initial 3D microstructure. | 2020-04-30 |
20200134466 | Exponential Modeling with Deep Learning Features - Aspects of the present disclosure enable humanly-specified relationships to contribute to a mapping that enables compression of the output structure of a machine-learned model. An exponential model such as a maximum entropy model can leverage a machine-learned embedding and the mapping to produce a classification output. In such fashion, the feature discovery capabilities of machine-learned models (e.g., deep networks) can be synergistically combined with relationships developed based on human understanding of the structural nature of the problem to be solved, thereby enabling compression of model output structures without significant loss of accuracy. These compressed models provide improved applicability to “on device” or other resource-constrained scenarios. | 2020-04-30 |
20200134467 | Sharing preprocessing, computations, and hardware resources between multiple neural networks - A method for training a Neural Network (NN) includes receiving a plurality of NN training tasks, each training task including (i) a respective preprocessing phase that preprocesses data to be provided as input data to the NN, and (ii) a respective computation phase that trains the NN using the preprocessed data. The plurality of NN training tasks is executed, including: (a) a commonality is identified between the input data required by computation phases of two or more of the training tasks, and (b) in response to identifying the commonality, one or more preprocessing phases are executed that produce the input data jointly for the two or more training tasks. | 2020-04-30 |
20200134468 | SYSTEM AND METHOD FOR MAX-MARGIN ADVERSARIAL TRAINING - A system for generating an adversarial example in respect of a neural network, the adversarial example generated to improve a margin defined as a distance from a data example to a neural network decision boundary. The system includes a data receiver configured to receive one or more data sets including at least one data set representing a benign training example (x); an adversarial generator engine configured to: generate, using the neural network, a first adversarial example (Adv1) having a perturbation length epsilon1 against x; conduct a search in a direction (Adv1-x) using the neural network; and to generate, using the neural network, a second adversarial example (Adv2) having a perturbation length epsilon2 based at least on an output of a search in the direction (Adv1-x). | 2020-04-30 |
20200134469 | METHOD AND APPARATUS FOR DETERMINING A BASE MODEL FOR TRANSFER LEARNING - Methods and apparatuses for accurately determining a model, which is to be the basis of transfer learning, among a plurality of source models, are provided. According to an embodiment, an apparatus for determining a base model to be used for transfer learning to a target domain is provided. The apparatus comprises a memory which comprises one or more instructions and a processor which executes the instructions to construct a neural network model for measuring suitability of a plurality of pre-trained source models, measure the suitability of each of the source models by inputting data of the target domain to the neural network model, and determine the base model to be used for the transfer learning among the source models based on the suitability. | 2020-04-30 |
20200134470 | COMPRESSED RECURRENT NEURAL NETWORK MODELS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for implementing long-short term memory layers with compressed gating functions. One of the systems includes a first long short-term memory (LSTM) layer, wherein the first LSTM layer is configured to, for each of the plurality of time steps, generate a new layer state and a new layer output by applying a plurality of gates to a current layer input, a current layer state, and a current layer output, each of the plurality of gates being configured to, for each of the plurality of time steps, generate a respective intermediate gate output vector by multiplying a gate input vector and a gate parameter matrix. The gate parameter matrix for at least one of the plurality of gates is a structured matrix or is defined by a compressed parameter matrix and a projection matrix. | 2020-04-30 |
20200134471 | Method for Generating Neural Network and Electronic Device - Disclosed are a method for generating a neural network, an apparatus thereof, and an electronic device. The method includes: obtaining an optimal neural network and a worst neural network from a neural network framework by using an evolutionary algorithm; obtaining an optimized neural network from the optimal neural network by using a reinforcement learning algorithm; updating the neural network framework by adding the optimized neural network into the neural network framework and deleting the worst neural network from the neural network framework; and determining an ultimately generated neural network from the updated neural network framework. In this way, a neural network is optimized and updated from a neural network framework by combining the evolutionary algorithm and the reinforcement learning algorithm, thereby automatically generating a neural network structure rapidly and stably. | 2020-04-30 |
20200134472 | SYSTEM AND METHOD FOR OPTIMIZATION OF DEEP LEARNING MODEL - Provided are an optimization system and method of a deep learning model. According to example embodiments, by optimizing the structure of the deep learning model appropriately for a target dataset without fixing the structure of the deep learning model, it is possible to generate a model structure capable of having high performance on the target dataset and also saving resources. | 2020-04-30 |
20200134473 | DATA DISCRIMINATOR TRAINING METHOD, DATA DISCRIMINATOR TRAINING APPARATUS, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND TRAINING METHOD - A model generation method includes updating, by at least one processor, a weight matrix of a first neural network model at least based on a first inference result obtained by inputting, to the first neural network model which discriminates between first data and second data generated by using a second neural network model, the first data, a second inference result obtained by inputting the second data to the first neural network model, and a singular value based on the weight matrix of the first neural network model. The model generation method also includes at least based on the second inference result, updating a parameter of the second neural network model. | 2020-04-30 |
20200134474 | Efficient Value Lookup In A Set Of Scalar Intervals - In one aspect, a computer implemented method for efficient value lookup in a set of scalar intervals is provided. The method includes determining, in response to a query for a scalar value, that the scalar value is located in a set of scalar intervals, wherein each of the scalar intervals comprises a left bound and a right bound. The method further includes sorting the scalar intervals based on left bounds. The method further includes comparing, in response to the sorting, a pair of scalar intervals to determine if the pair of scalar intervals overlaps. The method further includes identifying, based on the comparing indicating that the pair overlaps, a method of processing the scalar intervals. | 2020-04-30 |
20200134475 | CONSTRAINING FUNCTION APPROXIMATION HARDWARE INTEGRATED WITH FIXED-POINT TO FLOATING-POINT CONVERSION - A method of constraining data represented in a deep neural network is described. The method includes determining an initial shifting specified to convert a fixed-point input value to a floating-point output value. The method also includes determining an additional shifting specified to constrain a dynamic range during converting of the fixed-point input value to the floating-point output value. The method further includes performing both the initial shifting and the additional shifting together to form a dynamic-range-constrained, normalized floating-point output value. | 2020-04-30 |
20200134476 | GENERATING CODE PERFORMANCE HINTS USING SOURCE CODE COVERAGE ANALYTICS, INSPECTION, AND UNSTRUCTURED PROGRAMMING DOCUMENTS - An illustrative embodiment includes a method for improving performance of a computer. The method includes: automatically identifying an algorithm supplied by a user for execution on the computer; searching a database of algorithms for at least one algorithm similar to the user-supplied algorithm; determining whether the at least one similar algorithm will improve performance of the computer relative to the user-supplied algorithm; and if the at least one similar algorithm will improve performance of the computer relative to the user-supplied algorithm, modifying the user-supplied algorithm to incorporate at least in part the at least one similar algorithm. | 2020-04-30 |
20200134477 | SYSTEM AND METHOD OF INTEGRATING DATABASES BASED ON KNOWLEDGE GRAPH - An artificial intelligence (AI) system that utilizes a machine learning algorithm, such as deep learning, and an application of the AI system are provided. A method, performed by a server, of integrating and managing a plurality of databases (DBs) includes obtaining a plurality of knowledge graphs related to DBs generated from the plurality of DBs having different structures from one another, inputting the plurality of knowledge graphs related to DBs into a learning model related to DB for determining a correlation between data in the plurality of DBs, and obtaining a virtual integrated knowledge graph output from the learning model related to DB and including information about a correlation extracted from the plurality of knowledge graphs related to DBs. | 2020-04-30 |
20200134478 | WEIGHT-COEFFICIENT-BASED HYBRID INFORMATION RECOMMENDATION - Historical behavioral information of a user is retrieved, where the historical behavioral information includes data associated with operations performed by the user on a server. A plurality of recommended information sets are determined based on the historical behavioral information. A plurality of weight coefficients are generated for the plurality of recommended information sets. A recommendation list is determined based on the plurality of weight coefficients. It is determined whether the recommendation list satisfies a recommendation condition. If the recommendation list satisfies the recommendation condition, a recommendation based on the recommendation list is transmitted to a user device. | 2020-04-30 |
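The weighted blending of several recommended-information sets into one list can be sketched as follows. This is an assumed scoring scheme for illustration (rank-based scores combined with per-set weight coefficients), not the method claimed in the application.

```python
def blend_recommendations(rec_sets, weights):
    """Combine several ranked recommendation sets into one list
    using per-set weight coefficients.

    rec_sets: list of ranked item lists, best item first.
    weights:  one weight coefficient per recommendation set.
    """
    scores = {}
    for recs, weight in zip(rec_sets, weights):
        # Higher-ranked items in each set contribute larger base scores.
        for rank, item in enumerate(recs):
            scores[item] = scores.get(item, 0.0) + weight * (len(recs) - rank)
    # The final recommendation list is ordered by blended score.
    return sorted(scores, key=scores.get, reverse=True)
```

An item appearing in multiple sets (here `"b"`) accumulates score from each, so `blend_recommendations([["a", "b"], ["b", "c"]], [0.5, 0.5])` ranks it first.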
20200134479 | FINE-GRAINED FORECAST DATA MANAGEMENT - Systems, methods and computer program products for forecast data storage. Embodiments implement fine-grained forecast data management. A cloud-based object storage system capable of storing multiple versions of an object in a container is identified. A forecast data set covering a relatively longer time period (e.g., years) is partitioned into fine-grained forecast data items corresponding to relatively shorter forecast data time periods (e.g., months, days). Some of the fine-grained forecast data items corresponding to the relatively shorter forecast data time periods are stored into a first portion of metadata of the container rather than storing the forecast data items into the object itself. Updated variations of the fine-grained forecast data items and/or new forecast data items are stored in versions of the object. A second portion of metadata of the container is used to describe a version mapping between the forecast data time periods and corresponding object versions in the container. | 2020-04-30 |
20200134480 | APPARATUS AND METHOD FOR DETECTING IMPACT FACTOR FOR AN OPERATING ENVIRONMENT - An apparatus and method for detecting impact factors for an operating environment. The apparatus generates a detection result for each of the first factors of a plurality of first historical records by analyzing a dissimilarity degree of the plurality of first data corresponding to each first factor. Each detection result is a continuous data type or a discrete data type. The apparatus trains a data type recognition model according to the first historical records and the detection results. The apparatus establishes a basic prediction model by a training set of a plurality of second historical records, generates a comparison set by rearranging the second data corresponding to a specific factor in the training set, establishes a comparison prediction model by the comparison set, and determines a degree of importance of the specific factor by comparing the accuracies of the basic prediction model and the comparison prediction model. | 2020-04-30 |
20200134481 | REMEDYING DEFECTIVE KNOWLEDGE OF A KNOWLEDGE DATABASE - A method includes detecting a defective entigen group within a knowledge database. The defective entigen group includes entigens and one or more entigen relationships between at least some of the entigens. The defective entigen group represents knowledge of a topic. The method further includes obtaining corrective content for the topic based on the defective entigen group and generating a corrective entigen group based on the corrective content. The method further includes updating the defective entigen group utilizing the corrective entigen group to produce a curated entigen group. | 2020-04-30 |
20200134482 | SCALING OVERRIDES IN A RULES ENGINE USING A STREAMING PROBABILISTIC DATA STRUCTURE - The system can include a rules engine and one or more application systems. The rules engine can be configured to perform receiving overrides, storing the overrides in an overrides repository, generating a bloom filter using the overrides, and sending the bloom filter to the one or more application systems. The one or more application systems can be configured to perform storing the bloom filter as a cached bloom filter, receiving a request to evaluate rules and check for the overrides, and determining, using the cached bloom filter, whether to apply any of the overrides to the request. | 2020-04-30 |
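The override check described above can be sketched with a minimal Bloom filter: a negative answer means the application can skip the overrides repository entirely. The size, hash construction, and class name here are illustrative assumptions, not the patented design.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for a cached override-existence check."""

    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = bytearray(size)

    def _positions(self, key):
        # Derive several bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = 1

    def might_contain(self, key):
        # False => definitely no override exists for this key.
        # True  => consult the overrides repository (may be a
        #          false positive, but never a false negative).
        return all(self.bits[pos] for pos in self._positions(key))
```

The rules engine would populate the filter from the overrides repository and ship it to application systems, which cache it and consult it before any remote lookup.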
20200134483 | METHOD FOR CONFIGURING A MATCHING COMPONENT - The present disclosure relates to a method for enabling data integration. The method comprises collecting matching results of matching of records by a matching component over a time window. The number of false tasks among user-defined tasks and system-defined tasks in the collected matching results may be determined. The matching criterion used by the matching component may be adjusted to minimize the number of user-defined tasks while the fraction of false tasks stays within a certain limit. The matching criterion may be replaced by the adjusted matching criterion for further usage of the matching component. | 2020-04-30 |
20200134484 | Clustering, Explainability, and Automated Decisions in Computer-Based Reasoning Systems - The techniques herein include using an input context to determine a suggested action and/or cluster. Explanations may also be determined and returned along with the suggested action. The explanations may include (i) one or more most similar cases to the suggested case (e.g., the case associated with the suggested action) and, optionally, a conviction score for each nearby case; (ii) action probabilities; (iii) excluding cases and distances; (iv) archetype and/or counterfactual cases for the suggested action; (v) feature residuals; (vi) regional model complexity; (vii) fractional dimensionality; (viii) prediction conviction; (ix) feature prediction contribution; and/or other measures such as the ones discussed herein, including certainty. The explanation data may be used to determine whether to perform a suggested action. | 2020-04-30 |
20200134485 | USING MACHINE LEARNING-BASED SEED HARVEST MOISTURE PREDICTIONS TO IMPROVE A COMPUTER-ASSISTED AGRICULTURAL FARM OPERATION - Embodiments generate digital plans for agricultural fields. In an embodiment, a model receives digital inputs including stress risk data, product maturity data, field location data, planting date data, and/or harvest date data. The model mathematically correlates sets of digital inputs with threshold data associated with the stress risk data. The model is used to generate stress risk prediction data for a set of product maturity and field location combinations. In a digital plan, product maturity data or planting date data or harvest date data or field location data can be adjusted based on the stress risk prediction data. A digital plan can be transmitted to a field manager computing device. An agricultural apparatus can be moved in response to a digital plan. | 2020-04-30 |
20200134486 | LEVERAGING GENETICS AND FEATURE ENGINEERING TO BOOST PLACEMENT PREDICTABILITY FOR SEED PRODUCT SELECTION AND RECOMMENDATION BY FIELD - An example computer-implemented method includes receiving agricultural data records comprising a first set of yield properties for a first set of seeds grown in a first set of environments, and receiving genetic feature data related to a second set of seeds. The method further includes generating a second set of yield properties for the second set of seeds associated with a second set of environments by applying a model using the genetic feature data and the agricultural data records. In addition, the method includes determining predicted yield performance for a third set of seeds associated with one or more target environments by applying the second set of yield properties, and generating seed recommendations for the one or more target environments based on the predicted yield performance for the third set of seeds. In the present example, the method also includes causing display, on a display device communicatively coupled to the server computer system, of the seed recommendations. | 2020-04-30 |
20200134487 | APPARATUS AND METHOD FOR PREPROCESSING SECURITY LOG - According to one embodiment, An apparatus for preprocessing a security log includes a field divider configured to divide a character string of a security log into a plurality of fields on the basis of a structure of the security log, an ASCII code converter configured to convert a character string included in each of the plurality of divided fields into ASCII codes, and a vector data generator configured to generate vector data for each of the plurality of divided fields using the converted ASCII codes. | 2020-04-30 |
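The field-splitting and ASCII-encoding pipeline above can be sketched in a few lines. The delimiter, the fixed vector width, and the zero padding are illustrative assumptions; the application does not specify these details.

```python
def preprocess_log(line, delimiter="|", width=16):
    """Split a security-log string into fields and encode each field
    as a fixed-width vector of ASCII codes (truncated or zero-padded).
    """
    vectors = []
    for field in line.split(delimiter):
        # ASCII code conversion, one code per character.
        codes = [ord(ch) for ch in field[:width]]
        # Pad short fields with zeros so every vector has equal width.
        codes += [0] * (width - len(codes))
        vectors.append(codes)
    return vectors
```

For example, `preprocess_log("GET|/a", width=4)` yields `[[71, 69, 84, 0], [47, 97, 0, 0]]`, one fixed-width vector per field, ready for downstream vector-data consumers.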
20200134488 | METHOD FOR RECOMMENDING NEXT USER INPUT USING PATTERN ANALYSIS OF USER INPUT - Methods for recommending a next user input using pattern analysis of user input are provided. According to an aspect of the present disclosure, a method is provided that comprises obtaining information on a series of user inputs entered through a graphic user interface (GUI), analyzing the information on the series of user inputs to identify a pattern formed by the series of user inputs, and, when the pattern is identified, automatically displaying next-input recommendation information determined depending on the identified pattern and a last user input of the series of user inputs, without additional user input after the series of user inputs. | 2020-04-30 |
20200134489 | Systems for Second-Order Predictive Data Analytics, And Related Methods and Apparatus - A predictive modeling method may include obtaining a fitted, first-order predictive model configured to predict values of output variables based on values of first input variables; and performing a second-order modeling procedure on the fitted, first-order model, which may include: generating input data including observations including observed values of second input variables and predicted values of the output variables; generating training data and testing data from the input data; generating a fitted second-order model of the fitted first-order model by fitting a second-order model to the training data; and testing the fitted, second-order model of the first-order model on the testing data. Each observation of the input data may be generated by (1) obtaining observed values of the second input variables, and (2) applying the first-order predictive model to corresponding observed values of the first input variables to generate the predicted values of the output variables. | 2020-04-30 |
20200134490 | RECOMMENDATION SYSTEM CONSTRUCTION METHOD AND APPARATUS - A client device determines a local user gradient value based on a current user preference vector and a local item gradient value based on a current item feature vector. The client device updates a user preference vector by using the local user gradient value and updates an item feature vector by using the local item gradient value. The client device determines a neighboring client device based on a predetermined adjacency relationship. The local item gradient value is sent by the client device to the neighboring client device. The client device receives a neighboring item gradient value sent by the neighboring client device. The client device updates the item feature vector by using the neighboring item gradient value. In response to the client device determining that a predetermined iteration stop condition is satisfied, the client device outputs the user preference vector and the item feature vector. | 2020-04-30 |
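The local gradient computation in the abstract above corresponds to standard matrix-factorization updates; this sketch shows one observed rating's gradients and an SGD step. The loss (L2-regularized squared error), the learning rate, and the function names are assumptions for illustration, and the neighbor-exchange protocol is not shown.

```python
def local_gradients(u, v, rating, lam=0.1):
    """Gradients of the regularized squared error for one observed
    (user, item) rating, w.r.t. the user preference vector `u` and
    the item feature vector `v`."""
    # Prediction error: observed rating minus dot(u, v).
    err = rating - sum(ui * vi for ui, vi in zip(u, v))
    grad_u = [-2 * err * vi + 2 * lam * ui for ui, vi in zip(u, v)]
    grad_v = [-2 * err * ui + 2 * lam * vi for ui, vi in zip(u, v)]
    return grad_u, grad_v

def sgd_step(vec, grad, lr=0.01):
    """Update a vector by stepping against its gradient."""
    return [x - lr * g for x, g in zip(vec, grad)]
```

A neighboring client's item gradient would simply be passed through `sgd_step` against the local item feature vector, which is how the exchanged gradients refine the shared item representation.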
20200134491 | Swarm System Including an Operator Control Section Enabling Operator Input of Mission Objectives and Responses to Advice Requests from a Heterogeneous Multi-Agent Population Including Information Fusion, Control Diffusion, and Operator Infusion Agents that Controls Platforms, Effectors, and Sensors - Systems and methods are provided in relation to a complex adaptive command guided swarm system that includes an operator section comprising a first command and control section and a plurality of networked swarms of semi-autonomously agent controlled system of systems platforms (SAASoSPs). The first command and control section includes a user interface, computer system, network interface, and a plurality of command and control systems executed or running on the computer system. The plurality of networked SAASoSPs include embodiments comprising a second control section, sensors, network/communication section, and equipment, wherein the second control section includes an artificial intelligence, integrated information fusion/control diffusion (AIIIFCD) enabled control section comprising pattern recognition and a machine learning system with stored pattern identification and new pattern identification and storage system, the second control section further comprising a system adapted to receive a range of operator inputs including at least a range of high-level mission objective data to actual control inputs to one or more elements of one or more of the SAASoSPs. | 2020-04-30 |
20200134492 | SEMANTIC INFERENCING IN CUSTOMER RELATIONSHIP MANAGEMENT - Customer relationship management (“CRM”) implemented in a computer system, including parsing, by a parsing engine of the computer system into parsed triples of a description logic, words of a CRM event from an incoming stream of CRM events, the CRM event characterized by an event type, the stream implemented in a CRM application of the computer system; and inferring, by an inference engine from the parsed triples according to inference rules specific to the event type, inferred triples. | 2020-04-30 |
20200134493 | AUTOMATIC CORRECTION OF INDIRECT BIAS IN MACHINE LEARNING MODELS - Systems and methods for detecting indirect bias in machine learning models are provided. A computer-implemented method includes: receiving, by a computer device, a user request to detect transitive bias in a machine learning model; determining, by the computer device, correlations of attributes of neighboring data not included in a dataset of the machine learning model; ranking, by the computer device, the attributes based on the determined correlations; and returning, by the computer device, a list of the ranked attributes to a user that generated the user request. | 2020-04-30 |
20200134494 | Systems and Methods for Generating Artificial Scenarios for an Autonomous Vehicle - Systems and methods for vehicle simulation are provided. A method can include obtaining generator input data indicative of one or more parameter values, and inputting the generator input data into a machine-learned generator model that is configured to generate artificial data based at least in part on the generator input data. The artificial data can include data representing an artificial scenario associated with an autonomous vehicle. The method can include obtaining an output of the machine-learned generator model that can include the artificial data, and inputting the artificial data into a machine-learned discriminator model to generate authenticity data representing an authenticity associated with the artificial scenario of the artificial data. The method can include obtaining an output of the machine-learned discriminator model that can include the authenticity data. The method can include selecting the artificial scenario in the artificial data. | 2020-04-30 |
20200134495 | ONLINE LEARNING OF MODEL PARAMETERS - Online learning of model parameters is performed by obtaining a first target value in a target sequence and a feature vector corresponding to the first target value. The feature vector includes a plurality of elements. The feature vector can be modified to obtain a modified feature vector by reducing an absolute value of at least one element of the feature vector. An inverse Hessian matrix can be generated recursively from a previous inverse Hessian matrix using at least the feature vector and the modified feature vector. Parameters of a model can be updated using the inverse Hessian matrix. | 2020-04-30 |
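The recursive inverse-Hessian update above resembles a recursive-least-squares step via the Sherman-Morrison formula. The sketch below shows only that recursive part, under assumed names (`rls_update`, forgetting factor `lam`); the feature-vector modification step from the abstract (reducing an element's absolute value) is not implemented here.

```python
import numpy as np

def rls_update(H_inv, theta, x, y, lam=1.0):
    """One recursive-least-squares step for a linear model.

    H_inv: current inverse Hessian (n x n).
    theta: current model parameters (n x 1).
    x, y:  feature vector and target value for one observation.
    lam:   forgetting factor (1.0 = no forgetting).
    """
    x = x.reshape(-1, 1)
    Hx = H_inv @ x
    # Gain vector from the Sherman-Morrison rank-one update.
    k = Hx / (lam + (x.T @ Hx).item())
    # Recursively update the inverse Hessian (no matrix inversion).
    H_inv = (H_inv - k @ Hx.T) / lam
    # Update parameters using the gain and the prediction error.
    err = y - (x.T @ theta).item()
    theta = theta + k * err
    return H_inv, theta
```

Starting `H_inv` at a large multiple of the identity and repeatedly feeding `(x=1, y=2)` drives `theta` toward 2, illustrating convergence without ever forming or inverting the Hessian directly.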
20200134496 | CLASSIFYING PARTS VIA MACHINE LEARNING - Example implementations relate to classifying parts. A computing device may comprise a processing resource; and a memory resource storing non-transitory machine-readable instructions to cause the processing resource to: receive a part description of a part; classify the part by determining a commodity of the part based on the part description using machine learning; and update attributes of the part based on the determined commodity of the classified part. | 2020-04-30 |
20200134497 | PROBABILISTIC FRAMEWORK FOR DETERMINING DEVICE ASSOCIATIONS - Methods, systems, and devices for determining device associations are described. Some database systems may store information related to device characteristics. Each of these devices may be operated by one or more users, and each user may operate one or more devices. In some cases, information about users may be more valuable than information about devices. As such, a system may determine probable associations between devices, where an association can correspond to operation by a same user. To determine device associations, the system may perform a machine-learning process (e.g., using probabilistic soft logic (PSL) and a hinge-loss Markov Random Field (HL-MRF) model) on input device characteristics and connection information to generate a probability density function. The probability density function may indicate associations between devices within the system. Based on one or more thresholds, the system may determine sets of associated devices and may transmit this association information for analysis or display. | 2020-04-30 |
20200134498 | DYNAMIC BOLTZMANN MACHINE FOR PREDICTING GENERAL DISTRIBUTIONS OF TIME SERIES DATASETS - A computer-implemented method includes employing a dynamic Boltzmann machine (DyBM) to solve a maximum likelihood of generalized normal distribution (GND) of time-series datasets. The method further includes acquiring the time-series datasets transmitted from a source node to a destination node of a neural network including a plurality of nodes, learning, by the processor, a time-series generative model based on the GND with eligibility traces, and performing, by the processor, online updating of internal parameters of the GND based on a gradient update to predict updated time-series datasets generated from non-Gaussian distributions. | 2020-04-30 |
20200134499 | METHOD AND APPARATUS FOR STOCHASTIC INFERENCE BETWEEN MULTIPLE RANDOM VARIABLES VIA COMMON REPRESENTATION - A method and system are herein disclosed. The method includes developing a joint latent variable model having a first variable, a second variable, and a joint latent variable representing common information between the first and second variables, generating a variational posterior of the joint latent variable model, training the variational posterior, and performing inference of the first variable from the second variable based on the variational posterior. | 2020-04-30 |
20200134500 | Devices and Methods for Efficient Execution of Rules Using Pre-Compiled Directed Acyclic Graphs - In one aspect, a computer implemented method for translating and executing rules using a directed acyclic graph is provided. The method includes transforming a ruleset into a directed acyclic graph. The directed acyclic graph includes a plurality of nodes and a plurality of branches. The method further includes identifying similarities across the plurality of branches. The method further includes grouping branches of the directed acyclic graph based on the identified similarities. The method further includes creating a modified directed acyclic graph based on the grouping. The method further includes selecting and using a method of processing a group of the modified directed acyclic graph based on an aspect of the group. | 2020-04-30 |
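The prefix-sharing idea behind the rule-to-graph transformation can be sketched with a trie-style structure: conditions common to several rules become a single node, so each shared condition is evaluated once. This is a simplified illustration under assumed representations (rules as condition-name lists); full DAG merging of common suffixes and the per-group processing-method selection are not shown.

```python
def build_dag(rules):
    """Merge rule condition chains into a prefix-sharing graph.

    Each rule is (list_of_condition_names, action). Shared prefixes
    collapse into shared nodes of the nested-dict graph.
    """
    root = {}
    for conditions, action in rules:
        node = root
        for cond in conditions:
            node = node.setdefault(cond, {})
        node.setdefault("_actions", []).append(action)
    return root

def execute(node, facts, fired):
    """Walk the graph, descending only through conditions that hold,
    and collect the actions of fully satisfied rules."""
    for key, child in node.items():
        if key == "_actions":
            fired.extend(child)
        elif facts.get(key):  # shared condition checked once per branch
            execute(child, facts, fired)
    return fired
```

With rules `(["a", "b"], "r1")` and `(["a", "c"], "r2")`, condition `a` exists as one node and is tested once, and only the rules whose full chain holds fire.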
20200134501 | APPROXIMATE GATE AND SUPERCONTROLLED UNITARY GATE DECOMPOSITIONS FOR TWO-QUBIT OPERATIONS - Techniques are provided for improving quantum circuits. The technology includes approximately expanding, by a system operatively coupled to a processor, using zero to a number of applications of a super controlled basis gate, a target two-qubit operation, with the approximately expanding resulting in instances of the target two-qubit operation corresponding to the zero to the number of applications, and the target two-qubit operation is part of a source quantum circuit associated with a quantum computer. The system analyzes the instances and the super controlled basis gate, and automatically rewrites the source quantum circuit into a deployed quantum circuit based on the analyzing. | 2020-04-30 |
20200134502 | Hybrid Quantum-Classical Computer System for Implementing and Optimizing Quantum Boltzmann Machines - A hybrid quantum-classical (HQC) computer prepares a quantum Boltzmann machine (QBM) in a pure state. The state is evolved in time according to a chaotic, tunable quantum Hamiltonian. The pure state locally approximates a (potentially highly correlated) quantum thermal state at a known temperature. With the chaotic quantum Hamiltonian, a quantum quench can be performed to locally sample observables in quantum thermal states. With the samples, an inverse temperature of the QBM can be approximated, as needed for determining the correct sign and magnitude of the gradient of a loss function of the QBM. | 2020-04-30 |
20200134503 | QUANTUM COMPUTING SYSTEM AND METHOD - In various embodiments, a quantum computing system comprises at least one classical processor configured by operational instructions stored in a classical memory to perform operations including: generating a qubit encoding from an input, wherein the qubit encoding indicates a sublattice of a projective Riemann-hypersphere that represents a plurality of qubits; and generating a quantum solution based on quantum calculations wherein the quantum calculations include a decomposition of the sublattice performed via a plurality of iterations. | 2020-04-30 |
20200134504 | SYSTEM AND METHOD OF TRAINING BEHAVIOR LABELING MODEL - A system for training a behavior labeling model is provided. Specifically, a processing unit inputs each data of a training data set into a plurality of learning modules to establish a plurality of labeling models. The processing unit obtains a plurality of second labeling information corresponding to each data of a verification data set and generates a behavior labeling result according to the second labeling information corresponding to each data of the verification data set. The processing unit obtains a labeling change value according to the behavior labeling result and first labeling information corresponding to each data of the verification data set. The processing unit, if determining that the labeling change value is greater than a change threshold, updates the first labeling information according to the behavior labeling result, exchanges the training data set and the verification data set, and reestablishes the labeling models. | 2020-04-30 |
20200134505 | METHOD OF UPDATING POLICY FOR CONTROLLING ACTION OF ROBOT AND ELECTRONIC DEVICE PERFORMING THE METHOD - A tendency of an action of a robot may vary based on learning data used for training. The learning data may be generated by an agent performing an identical or similar task to a task of the robot. An apparatus and method for updating a policy for controlling an action of a robot may update the policy of the robot using a plurality of learning data sets generated by a plurality of heterogeneous agents, such that the robot may appropriately act even in an unpredicted environment. | 2020-04-30 |
20200134506 | MODEL TRAINING METHOD, DATA IDENTIFICATION METHOD AND DATA IDENTIFICATION DEVICE - A method of training a student model corresponding to a teacher model is provided. The teacher model is obtained through training by taking first input data as input data and taking corresponding output data as an output target. The method comprises training the student model by taking second input data as input data and taking the corresponding output data as an output target. The second input data is data obtained by changing the first input data. | 2020-04-30 |
20200134507 | DISTRIBUTION SYSTEM, DATA MANAGEMENT APPARATUS, DATA MANAGEMENT METHOD, AND COMPUTER-READABLE RECORDING MEDIUM - A distribution system | 2020-04-30 |
20200134508 | METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR DEEP LEARNING - A method, device and computer program product for deep learning are provided. According to one example, a parameter related to a deep learning model for a training dataset allocated to a server is obtained at a client; a transmission state of the parameter is determined, the transmission state indicating whether the parameter has been transmitted to the server; and information associated with the parameter to be sent to the server is determined based on the transmission state to update the deep learning model. Therefore, the performance of deep learning may be improved, and the network load of deep learning may be reduced. | 2020-04-30 |
20200134509 | LEARNING MANAGEMENT SYSTEM AND LEARNING MANAGEMENT METHOD - A retail agent determines a parameter of an activity proposal model by using data stored in a past record database, and determines parameters of an activity determination model and an activity value evaluation model by further using base activity simulation data. Consequently, it is possible to appropriately determine parameters of a subsystem control method in a complex system which cannot be embodied as a simulator and shows a significant change with respect to past record data. | 2020-04-30 |
20200134510 | ITERATIVE CLUSTERING FOR MACHINE LEARNING MODEL BUILDING - A method includes performing a first clustering operation to group members of a first data set into a first group of clusters and associating each cluster of the first group of clusters with a corresponding label of a first group of labels. The method includes performing a second clustering operation to group members of a combined data set into a second group of clusters. The combined data set includes a second data set and at least a portion of the first data set. The method includes associating one or more clusters of the second group of clusters with a corresponding label of the first group of labels and generating training data based on a second group of labels and the combined data set. The method includes training a machine learning classifier based on the training data to provide labels to a third data set. | 2020-04-30 |
20200134511 | SYSTEMS AND METHODS FOR IDENTIFYING DOCUMENTS WITH TOPIC VECTORS - One or more embodiments are directed to identifying documents with topic vectors by training a machine learning model with a training documents generated from text collections, receiving, after generating a list of topic vectors for the plurality of text collections, an additional text collection, and generating an additional topic vector for the additional text collection without training the machine learning model on the additional text collection. One or more embodiments further include updating the list of topic vectors with additional topic vectors that includes the additional topic vector, receiving a first topic vector based on a first text collection generated in response to user interaction, and matching the first topic vector to the additional topic vector. One or more embodiments further include presenting a link corresponding to the additional text collection in response to matching the first topic vector to the additional topic vector. | 2020-04-30 |
20200134512 | KNOWLEDGE BASE CONTENT DISCOVERY - Rapid knowledge base discovery techniques. A database describes associations between knowledge base articles and closed problem cases. In periodic batch operations, the associations are used to generate solution probability predictors, each of which corresponds to a probability that a particular knowledge base article was used to resolve a particular problem or case. The solution probability predictors comprise probability predictor parameter values associated with the set of words that occur in closed customer problem cases. A specialized data structure is populated with the probability predictor parameter values. When a new active customer problem case is opened, a set of words pertaining to the new, active case is constructed. The active case words are used with the specialized data structure to generate a probability value for each of a set of knowledge base articles. The knowledge base articles having the highest probability values are identified and presented in an ordered list. | 2020-04-30 |
20200134513 | ELIGIBILITY PREDICTIONS FOR INSTANT BOOKING IN AN ONLINE MARKETPLACE - Systems and methods are provided for receiving a request for services in a given location from a client device operated by a user and generating a set of features based on information included in the request for services in the given location. The systems and methods further provide for analyzing the set of features using a machine learning model to predict whether only services that can be instantly booked should be provided in response to the request for services in the given location, analyzing a prediction output by the machine learning model to determine that only services that can be instantly booked should be provided in response to the request for services in the given location, and generating a list with only services that can be instantly booked. | 2020-04-30 |
20200134514 | BOOKED-BLOCKED CLASSIFIER - Provided are a system and method for determining whether an apparent booking is a genuine booking or is a blocked period of unavailability that is not the result of a genuine booking. Bookings occur in all sorts of industries, such as travel, medical, entertainment, weddings, catering, and the like. In some examples, the method may include receiving content from a website that includes a listing for an object, identifying a period of unavailability of the object based on the content received from the website, predicting, via a machine learning model, whether the period of unavailability of the object is a blocked period that is not a result of a reservation of the object, the predicting being performed based on additional content visible on the website being input into the machine learning model, and storing an identifier of the period of unavailability and information about the prediction within a storage device. | 2020-04-30 |
20200134515 | METHOD AND SYSTEM FOR CARGO MANAGEMENT - A cargo management system for handling cargo transportation is disclosed. The management system can be configured to obtain information regarding one or more alternative routes of a journey of a cargo-shipping-unit (CSU). The system comprises an operational recommending unit (ORU), which is configured to obtain one or more defined targets that are related to the type of cargo that is carried by the CSU and to present the impact of each alternative route on each of the defined targets. | 2020-04-30 |
20200134516 | METHOD FOR ASSET MANAGEMENT OF ELECTRIC POWER EQUIPMENT - A method for asset management of power equipment compensates a reference reliability model for each sub-device of the power equipment and generates a unique reliability model for each sub-device by comparing the reliability of the reference reliability model for each sub-device with a health index for each sub-device; assesses priorities based on equipment sensitivity and establishes a maintenance strategy while analyzing reliability by substation system reliability and reliability by economic value; calculates reliability of the power equipment by applying a system relationship model between the power equipment and each sub-device in which specific weights and failure rates are reflected; selects a maintenance scenario by a predetermined priority; and updates the reliability model for the power equipment and the unique reliability model for each sub-device as a result of executing maintenance. | 2020-04-30 |
20200134517 | MOBILE DEVICE-BASED SYSTEM FOR AUTOMATED, REAL TIME HEALTH RECORD EXCHANGE - A method, an apparatus, and a computer program product for accessing electronic medical records are provided in which a portable computing device uniquely associated with a user authenticates an identification of the user and automatically retrieves information corresponding to the user from electronic healthcare records systems using the identification. The retrieved information may be combined with other information and electronically delivered to a healthcare provider. | 2020-04-30 |
20200134518 | INTER-APPLICATION WORKFLOW PERFORMANCE ANALYTICS - Methods, systems and computer program products for shared content management systems that provide performance analytics pertaining to a project. Embodiments include establishing one or more network communication links between a content management system that manages a plurality of shared content objects and a plurality of applications that cause modifications to the shared content objects in accordance with workflows of the project. Interaction events that correspond to modifications over the shared content objects are recorded such that interaction events associated with the plurality of applications are selected based at least in part on attributes associated with the interaction events. Relationships between the recorded interaction events, such as time durations between certain of the interaction events, are calculated. Project performance measurements are generated based on the calculations and/or based on other relationships between the interaction events. The calculations may span across many different applications and/or many different departments and/or many different enterprises. | 2020-04-30 |
20200134519 | RISK MANAGEMENT SYSTEM - The present disclosure provides a risk management system that undertakes proactive risk assessment and suggests appropriate controls to mitigate risks. Moreover, the risk management system undertakes risk assessment based on three or more parameters. Further, the risk management system assesses the strength or weakness of management components or departments. | 2020-04-30 |
20200134520 | METHOD AND SYSTEM FOR DYNAMICALLY IMPROVING THE PERFORMANCE OF SECURITY SCREENING - A system for security screening at a venue includes one or more security screening stations, each station having a security level and configured to screen a plurality of subjects in a queue for admitting the subjects to the venue. The system may determine the current queue length of all the queues associated with security screening stations based on data received from one or more sensors. The system may also determine whether the throughput for at least one security screening station should be increased or decreased. If the system so determines, the system may decrease or increase, for a variation time period, the security level of the security screening station associated with the queue. The system may also adjust the configuration of the stations or introduce a secondary, complementary screening process to dynamically improve the performance of the security system. | 2020-04-30 |
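The core mechanism here is a feedback loop: long queues temporarily lower a station's security level to raise throughput, and short queues restore it. A minimal sketch of that loop, with assumed watermark thresholds and a 1 (fastest) to 3 (strictest) level scale that the abstract does not specify:

```python
# Illustrative sketch of queue-driven security-level adjustment; the
# watermarks and the 1-3 level scale are assumptions for the example.

def adjust_security_levels(stations, high_watermark=20, low_watermark=5):
    """Return a new security level per station based on its queue length.
    Each value in `stations` is a (queue_length, current_level) pair;
    levels range from 1 (fastest, lightest screening) to 3 (strictest)."""
    new_levels = {}
    for station, (queue_len, level) in stations.items():
        if queue_len > high_watermark and level > 1:
            new_levels[station] = level - 1   # speed up throughput
        elif queue_len < low_watermark and level < 3:
            new_levels[station] = level + 1   # restore stricter screening
        else:
            new_levels[station] = level
    return new_levels
```

The "variation time period" in the abstract would correspond to running this adjustment on a timer and reverting levels once the window expires.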
20200134521 | RISK ASSESSMENT USING POISSON SHELVES - Detecting fraudulent activity can be a complex, manual process. In this paper, we adapt statistical properties of count data in a novel algorithm to uncover records exhibiting high risk for fraud. Our method identifies shelves, partitioning data under the counts using a Student's t-distribution. We apply this methodology on a univariate dataset including cumulative results from phone calls to a customer service center. Additionally, we extend this technique to multivariate data, illustrating that the same method is applicable to both univariate and multivariate data. | 2020-04-30 |
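The abstract describes partitioning count data into "shelves" using a Student's t-distribution, with records on the high shelf flagged as high fraud risk. The patent's exact shelf criterion is not public, so the following is an illustrative stand-in: it sorts the counts and uses a Welch-style t statistic at each candidate split to find the boundary between two shelves.

```python
# Illustrative shelf detection in count data; the t-based split criterion
# here is an assumption standing in for the patent's method.
import math

def t_statistic(a, b):
    """Welch's t statistic between two samples (each needs >= 2 points)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b) + 1e-12)

def find_shelf(counts, threshold=3.0):
    """Return the index splitting the sorted counts into two shelves,
    or None when no split exceeds the t threshold."""
    data = sorted(counts)
    best, best_t = None, threshold
    for i in range(2, len(data) - 2):   # keep at least 2 points per side
        t = t_statistic(data[:i], data[i:])
        if t > best_t:
            best, best_t = i, t
    return best
```

On a univariate dataset such as call counts per account, the records above the returned split index would be the high-risk shelf; the multivariate extension mentioned in the abstract would apply the same split per dimension.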
20200134522 | METHOD AND SYSTEM FOR CARGO MANAGEMENT - A cargo management system for handling cargo transportation is disclosed. The management system can be configured to determine the risk that is involved in a journey of a cargo-shipping-unit (CSU). The system comprises a Risk-Analyzer-Unit (RAU). The RAU can be configured to obtain one or more features of the journey of that CSU. Next, the RAU can fetch a predictive model that can predict the likelihood that a demand for loss (DFL) will be filed. In addition, the RAU can be configured to present the likelihood that a DFL will be filed as the risk that is associated with that journey of that CSU. | 2020-04-30 |
20200134523 | SYSTEMS AND METHODS FOR DISTRIBUTED RISK ANALYSIS - Systems and methods for distributed risk analysis are discussed. | 2020-04-30 |
20200134524 | Titanium Task-Engine System - The present disclosure is related to computing devices, systems, and methods for a new task-engine system that connects to a variety of task-interaction providers, enabling a user to use any one of multiple task-interaction providers to create and complete tasks within a workflow. That is, the connection to a variety of task-interaction providers allows a user to interact with the workflow through any of the task-interaction providers and create and/or complete any number of tasks in the workflow. The task-engine system may also update the creation and/or completion of a workflow task in all other task-interaction providers, such that all users may be aware of, or notified of, the current state of the workflow through any of the task-interaction providers. | 2020-04-30 |
20200134525 | On-Demand Transport Selection Process Facilitating Third-Party Autonomous Vehicles - A network computing system can coordinate on-demand transport serviced by transport providers operating throughout a transport service region. The transport providers can comprise a set of internal autonomous vehicles (AVs) and a set of third-party AVs. The system can receive a transport request from a requesting user of the transport service region, where the transport request indicates a pick-up location and a destination. The system can determine a subset of the transport providers to service the respective transport request, and execute a selection process among the subset of the transport providers to select a transport provider to service the transport request. The system may then transmit a transport assignment to the selected transport provider to cause the selected transport provider to service the transport request. | 2020-04-30 |
20200134526 | RESOURCE PLANNING APPARATUS AND RESOURCE PLAN VISUALIZATION METHOD - A resource planning apparatus includes a resource expanding unit configured to expand processes necessary for producing a product and resource capabilities necessary for resources to execute the processes, a resource allocation scheme generating unit configured to allocate resources having the resource capabilities expanded by the resource expanding unit to the processes, and a production plan devising unit configured to devise a production plan of the product on the basis of the resources allocated to the processes. | 2020-04-30 |
20200134527 | METHOD FOR SETTING APPROVAL PROCEDURE BASED ON BASE FIELDS - A method for setting an approval process based on basis fields is disclosed in the present invention, including a step of creating an approval process: S1: selecting a form corresponding to the approval process; S2: selecting a basis field for the approval process, where one basis field can be selected by one or more approval processes; and S3: setting a field value set of the selected basis field of the approval process, wherein each field value can only exist in the field value set of one approval process under the basis field. When a form is related to an approval process, it is determined, according to the field value of the basis field in the approval form, to which approval process's field value set of the corresponding basis field the form belongs. In the present invention, when a form is submitted to be approved, it may be automatically related to an approval process according to the field value of a basis field in the form. The process is determined according to the content of the basis field in the form, which is simple, clear, and easy to operate. The basis field in the form is changeable, so that different approval requirements in actual management can be met. | 2020-04-30 |
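The routing rule in steps S1-S3 is essentially a disjoint lookup: each basis field value belongs to exactly one approval process's value set, so a submitted form maps to one process. A minimal sketch, with hypothetical process names and field values:

```python
# Illustrative routing of a form to an approval process by its basis
# field value; process names and value sets here are hypothetical.

APPROVAL_PROCESSES = {
    "expense_fast_track": {"Travel", "Meals"},        # field value sets must
    "expense_full_review": {"Equipment", "Contract"}, # be mutually disjoint
}

def route_form(form, basis_field="category"):
    """Return the approval process whose field value set contains the
    form's basis field value; each value may appear in only one set."""
    value = form[basis_field]
    for process, value_set in APPROVAL_PROCESSES.items():
        if value in value_set:
            return process
    raise LookupError(f"no approval process covers value {value!r}")
```

The disjointness requirement in S3 is what makes the lookup unambiguous: without it a form could match two processes.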
20200134528 | SYSTEMS AND METHODS FOR COORDINATING ESCALATION POLICY ACTIVATION - A production environment monitoring system notes when problems or issues in a production environment have occurred. A noted problem can trigger multiple escalation policies, each of which calls for notifying a different individual to ask the individual to solve the problem. In systems and methods embodying the invention, an escalation policy coordinator selectively activates only one or more of the triggered escalation policies to prevent duplication of effort and to prevent the actions of one notified individual from interfering with the actions of a second individual. | 2020-04-30 |
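The coordinator's job is to pick which of several simultaneously triggered policies actually fires. One simple way to realize that (an assumption for illustration; the abstract does not say how the coordinator chooses) is a priority rule: activate the highest-priority policy and suppress the rest.

```python
# Illustrative escalation policy coordinator using a priority rule;
# the selection criterion is an assumption, not taken from the patent.

def coordinate(triggered_policies):
    """Given policies triggered by one incident, activate only the
    highest-priority one (priority 0 = highest) and suppress the rest.
    Returns (activated notify target, suppressed notify targets)."""
    if not triggered_policies:
        return None, []
    ordered = sorted(triggered_policies, key=lambda p: p["priority"])
    active, suppressed = ordered[0], ordered[1:]
    return active["notify"], [p["notify"] for p in suppressed]
```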
20200134529 | INTELLIGENT PRIMING OF MOBILE DEVICE FOR FIELD SERVICE - Systems and methods are provided for intelligent priming of mobile devices for the representatives of an organization who make service calls or client visits. With intelligent priming, in some embodiments, the systems and methods analyze each potential service appointment or customer/client visit separately, identify a potential set of information most relevant for the appointment or visit, and download the same to the representatives' mobile device. This reduces the time required and amount of information for priming. In some embodiments, the intelligent priming systems and methods can also notify representatives of when they should prime or download, generate recommendations for the information to be primed or downloaded, and provide an indicator to the representative regarding the status of priming or download. | 2020-04-30 |
20200134530 | METHOD, SYSTEM AND APPARATUS FOR SUPPLY CHAIN EVENT REPORTING - A system for recording events on a distributed ledger includes: a server; a terminal including a terminal processor executing an OS; and a data capture device including a housing, a data capture assembly configured to capture product data, a communication interface, a memory including a first driver and/or a firmware, and a processor executing instructions in the memory. The instructions include instructions to transmit the product data to the terminal through the communication interface. The terminal includes a second driver enabling the OS to communicate with the data capture device to accept the product data. At least one of the first and second drivers and the firmware includes a transmission flag changeable between activated and deactivated states. The activated state causes the product data to be transmitted to the server. Upon the product data satisfying a recordation condition, the product data is recorded, from the server, to the distributed ledger. | 2020-04-30 |
20200134531 | METHOD AND SYSTEM FOR PREDICTING OCCUPANCY OF A FACILITY - The present disclosure provides a system to predict the occupancy of a facility. The system executes instructions to cause one or more processors to perform a method. The method includes a first step of collecting a first set of data associated with the occupancy of the facility in the past. In addition, the method includes a second step of receiving a second set of data associated with the occupancy of the facility in a plurality of past seasons. Further, the method includes a third step of obtaining a third set of data associated with the demand of one or more users for the rooms of the facility. Furthermore, the method includes a fourth step of predicting the occupancy of the facility after applying machine learning algorithms over the first set of data, the second set of data and the third set of data. | 2020-04-30 |
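The four steps combine three signals (historical occupancy, seasonal occupancy, current demand) into one forecast. The abstract only says "machine learning algorithms" are applied, so the linear blend and weights below are illustrative assumptions showing the shape of the combination:

```python
# Illustrative occupancy forecast blending the three data sets from the
# abstract; the linear blend and weights are assumptions for the sketch.

def predict_occupancy(historical, seasonal, demand, weights=(0.5, 0.3, 0.2)):
    """Blend historical occupancy rates, seasonal occupancy rates, and a
    current demand signal into an occupancy forecast clipped to [0, 1]."""
    w_h, w_s, w_d = weights
    hist_avg = sum(historical) / len(historical)   # step 1 data
    seas_avg = sum(seasonal) / len(seasonal)       # step 2 data
    forecast = w_h * hist_avg + w_s * seas_avg + w_d * demand  # step 3 data
    return max(0.0, min(1.0, forecast))
```

A learned model would replace the fixed weights with coefficients fit to past occupancy, but the inputs and output stay the same.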
20200134532 | Intelligent Virtual Agent For Managing Customer Communication And Workflows - The disclosed intelligent virtual agent provides a single, unified platform for completing a variety of diverse complex business transactions including a series of interactions between multiple people and the performance of multiple tasks occurring at different points in time. | 2020-04-30 |
20200134533 | Processing Order Experience Data Across Multiple Data Structures - Methods, apparatus, and processor-readable storage media for processing order experience data across multiple data structures are provided herein. An example computer-implemented method includes processing data, obtained from a first set of data structures, pertaining to orders placed with an enterprise, wherein the first set of data structures contains data associated with distinct portions of order transactions; extracting information pertaining to pre-defined attributes from the processed data and processing the extracted information into a second set of data structures; calculating order experience scores for the orders by applying at least one algorithm to the extracted information in the second set of data structures; generating at least one benchmark order experience value, wherein each benchmark order experience value is based at least in part on the calculated order experience scores; and performing operations related to order experience within the enterprise based at least in part on the benchmark order experience values. | 2020-04-30 |
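The pipeline here extracts pre-defined attributes per order, scores each order, and aggregates the scores into a benchmark. The attribute names and weights below are hypothetical (the abstract names neither), but they show the two-stage score-then-benchmark structure:

```python
# Illustrative order experience scoring; attributes and weights are
# hypothetical stand-ins for the patent's pre-defined attributes.

def order_experience_score(order, weights=None):
    """Weighted score over boolean order attributes extracted into the
    second set of data structures."""
    weights = weights or {"on_time": 0.5, "complete": 0.3, "no_returns": 0.2}
    return sum(w for attr, w in weights.items() if order.get(attr))

def benchmark(orders):
    """Benchmark order experience value: mean of the per-order scores."""
    scores = [order_experience_score(o) for o in orders]
    return sum(scores) / len(scores)
```

Operations "related to order experience" would then compare each new order's score against the benchmark value.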
20200134534 | METHOD AND SYSTEM FOR DYNAMICALLY AVOIDING INFORMATION TECHNOLOGY OPERATIONAL INCIDENTS IN A BUSINESS PROCESS - This disclosure relates to a method and system for dynamically avoiding information technology (IT) operational incidents in a business process. The method may include mapping real-time unprocessed operational data with respect to an IT transaction in the business process against a dynamic baseline for each of a set of relevant key performance indicators (KPIs) for the IT transaction, and dynamically detecting an anomaly in the real-time unprocessed operational data based on the mapping. The method may further include determining, at about the time of the anomaly, one or more contemporaneous anomalies in real-time unprocessed application data and real-time unprocessed infrastructure data with respect to a plurality of applications and a plurality of components of an IT infrastructure respectively. The method may further include dynamically identifying a root cause of the anomaly by correlating the anomaly and the one or more contemporaneous anomalies, and providing the root cause for redressal. | 2020-04-30 |
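Central to this method is mapping each KPI's stream against a dynamic baseline. The patent does not publish its baseline formula, so the sketch below assumes a common choice: a rolling window of recent values, with a point flagged as anomalous when it deviates more than `k` standard deviations from the window mean.

```python
# Illustrative dynamic baseline for one KPI stream; the rolling
# mean/std criterion is an assumption, not the patent's formula.
from collections import deque

class DynamicBaseline:
    def __init__(self, window=20, k=3.0):
        self.values = deque(maxlen=window)  # recent KPI readings
        self.k = k                          # deviation threshold in stds

    def is_anomalous(self, x):
        """Flag x against the current baseline, then fold it in."""
        anomalous = False
        if len(self.values) >= 5:           # need some history first
            mean = sum(self.values) / len(self.values)
            var = sum((v - mean) ** 2 for v in self.values) / len(self.values)
            anomalous = abs(x - mean) > self.k * max(var ** 0.5, 1e-9)
        self.values.append(x)
        return anomalous
```

Running one such baseline per KPI, per application, and per infrastructure component, and correlating anomalies by timestamp, gives the root-cause step described in the abstract its inputs.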
20200134535 | A COMPUTER IMPLEMENTED APPRAISAL SYSTEM AND METHOD THEREOF - The present disclosure relates to a computer implemented appraisal system ( | 2020-04-30 |
20200134536 | AN AUTOMATED SYSTEM FOR PROVIDING PERSONALIZED REWARDS AND A METHOD THEREOF - The present disclosure relates to the field of automated rewarding systems. The system comprises a plurality of set of user devices ( | 2020-04-30 |
20200134537 | SYSTEM AND METHOD FOR GENERATING EMPLOYMENT CANDIDATES - Systems for determining alternative job titles to use when querying online recruiting databases. Multiple computers are operatively interconnected for determining a set of alternative job titles from a set of job parameters, and using the alternative job titles when querying online job and/or candidate posting corpora. A method embodiment commences upon parsing a given job requisition to identify at least one job title string that corresponds to at least one job title. The job title is normalized in accordance with a set of normalization rules to form a normalized job title string. The normalized job title string is then used to access machine learning models, which are in turn used to classify the job title string and to produce alternative titles that correspond to the job title. The alternative titles are used for querying online job posting corpora to retrieve postings that match at least one of the alternative titles. | 2020-04-30 |
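The normalization step described here (job title string in, normalized title out, per a set of normalization rules) can be sketched as below. The rule set and synonym table are hypothetical; the patent only says normalization rules exist, and the downstream ML classification of alternative titles is omitted.

```python
# Illustrative job title normalization; the seniority tokens and
# synonym table are assumed examples, not the patent's rule set.
import re

SENIORITY = {"sr", "sr.", "senior", "jr", "jr.", "junior", "lead", "staff"}
SYNONYMS = {"swe": "software engineer", "pm": "product manager"}

def normalize_title(raw):
    """Lowercase, strip punctuation and seniority tokens, and expand
    known abbreviations to form a normalized job title string."""
    tokens = re.findall(r"[a-z.]+", raw.lower())
    tokens = [t for t in tokens if t not in SENIORITY]
    title = " ".join(tokens)
    return SYNONYMS.get(title, title)
```

The normalized string would then be the key used to query the machine learning models for alternative titles, so that "Sr. SWE" and "Senior Software Engineer" retrieve the same candidates.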
20200134538 | METHOD FOR GENERATING A SECURITY ROUTE - The present disclosure relates to a computer implemented method for generating a security route to be operated by a user, specifically created based on security tasks generated by a security system. The present disclosure also relates to a corresponding security system and a computer program product. | 2020-04-30 |