7th week of 2022 patent application highlights part 43 |
Patent application number | Title | Published |
20220051049 | USING META-LEARNING TO OPTIMIZE AUTOMATIC SELECTION OF MACHINE LEARNING PIPELINES - A computer automatically selects a machine learning model pipeline using a meta-learning machine learning model. The computer receives ground truth data and pipeline preference metadata. The computer determines a group of pipelines appropriate for the ground truth data, and each of the pipelines includes an algorithm. The pipelines may include data preprocessing routines. The computer generates hyperparameter sets for the pipelines. The computer applies preprocessing routines to ground truth data to generate a group of preprocessed sets of said ground truth data and ranks hyperparameter set performance for each pipeline to establish a preferred set of hyperparameters for each pipeline. The computer selects favored data features and applies each of the pipelines, with associated sets of preferred hyperparameters, to score the favored data features of the preprocessed ground truth data. The computer ranks pipeline performance and selects a candidate pipeline according to the ranking. | 2022-02-17 |
20220051050 | PRESENTATION OF DIGITIZED IMAGES FROM USER DRAWINGS - Methods, apparatuses, and non-transitory machine-readable media for presentation of digital images from user drawings are described. Apparatuses can include a display, a memory device, and a controller. In an example, a method can include the controller receiving data representing a user drawing, identifying a feature of the user drawing based on the data, and comparing the feature of the user drawing to features of a plurality of digitized images. In another example, a particular digitized image can be displayed based on the comparison of the feature with the features of the plurality of digitized images. | 2022-02-17 |
20220051051 | METHOD AND ASSISTANCE SYSTEM FOR PARAMETERIZING AN ANOMALY DETECTION METHOD - A method for parameterizing an anomaly detection method, which takes a multiplicity of sensor data points as a basis for performing a density-based cluster method, including a) mapping each sensor data point in a data space into a pixel data point in a pixel space, b) reproducing at least one operation of the density-based cluster method in the data space by means of at least one pixel operation in the pixel space, c) receiving at least one parameter value for each parameter of the density-based cluster method, d) applying the at least one pixel operation in accordance with the parameter values to the pixel data points, e) outputting a cluster result in visual form in the pixel space, and f) providing the received parameter values for the anomaly detection method; and an assistance apparatus for parameterizing an anomaly detection apparatus that performs the anomaly detection method. | 2022-02-17 |
20220051052 | GROUND EXTRACTION METHOD FOR 3D POINT CLOUDS OF OUTDOOR SCENES BASED ON GAUSSIAN PROCESS REGRESSION - A ground extraction method for 3D point clouds of outdoor scenes based on Gaussian process regression, including: (1) obtaining the 3D point cloud of an outdoor scene, (2) building the neighborhood of the 3D point cloud, (3) calculating the covariance matrices and normal vectors of the 3D point cloud, (4) classifying the 3D point cloud according to its neighborhood shape, (5) extracting the initial ground G | 2022-02-17 |
20220051053 | Noise-Driven Coupled Dynamic Pattern Recognition Device for Low Power Applications - A pattern recognition device comprising: a coupled network of damped, nonlinear, dynamic elements configured to generate an output response in response to at least one environmental condition, wherein each element has an associated multi-stable potential energy function that defines multiple energy states of an individual element, and wherein the elements are tuned such that environmental noise triggers stochastic resonance between energy levels of at least two elements; and a processor configured to monitor the output response over time and to determine a probability that the pattern recognition device is in a given state based on the monitored output response, wherein the processor is further configured to detect a pattern in the at least one environmental condition based on the probability. | 2022-02-17 |
20220051054 | IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM RECORDING MEDIUM - An image processing device includes an extraction unit configured to extract a two-dimensional feature regarding a part of a person in an image, a conversion unit configured to convert the two-dimensional feature into a three-dimensional feature regarding a human body structure, and a training data generation unit configured to generate training data using the three-dimensional feature and a label indicating a physical state of the person. | 2022-02-17 |
20220051055 | TRAINING DATA GENERATION METHOD AND TRAINING DATA GENERATION DEVICE - A training data generation method includes: obtaining a camera image, a labeled image generated by adding annotation information to the camera image, and an object image showing an object to be detected by a learning model; identifying a specific region corresponding to the object based on the labeled image; and compositing the object image in the specific region on each of the camera image and the annotated image. | 2022-02-17 |
20220051056 | SEMANTIC SEGMENTATION NETWORK STRUCTURE GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM - This application provides a semantic segmentation network structure generation method performed by an electronic device, and a non-transitory computer-readable storage medium. The method includes: generating a corresponding architectural parameter for cells that form a super cell in a semantic segmentation network structure; optimizing the semantic segmentation network structure based on image samples, and removing a redundant cell from a super cell to which a target cell pertains, to obtain an improved semantic segmentation network structure; performing, by an aggregation cell in the improved semantic segmentation network structure, feature fusion on an output of the super cell; performing recognition processing on a fused feature map, to determine positions corresponding to objects that are in the image samples; and training the improved semantic segmentation network structure based on the positions corresponding to the objects that are in the image samples and annotations corresponding to the image samples, to obtain a trained semantic segmentation network structure. | 2022-02-17 |
20220051057 | AI-BASED, SEMI-SUPERVISED INTERACTIVE MAP ENRICHMENT FOR RADIO ACCESS NETWORK PLANNING - Aspects of the subject disclosure may include, for example, obtaining user input identifying a first user-identified network feature of a training image of a geographical region. The training image and the user-identified feature are provided to a neural network adapted to train itself according to the user-identified features to obtain a first trained result that classifies objects within the image according to the user-identified feature. The training image and the first trained result are displayed, and user-initiated feedback is obtained to determine whether a training requirement has been satisfied. If not satisfied, the user-initiated feedback is provided to the neural network, which retrains itself according to the feedback to obtain a second trained result that identifies an updated machine-recognized feature of the training image. The process is repeated until a training requirement has been satisfied, after which a map is annotated according to the machine-recognized feature. Other embodiments are disclosed. | 2022-02-17 |
20220051058 | UNMANNED DRIVING BEHAVIOR DECISION-MAKING AND MODEL TRAINING - This application provides a method and apparatus for unmanned driving behavior decision-making and model training, and an electronic device. The method includes: acquiring sample data, wherein the sample data includes a sample image; extracting a sample feature vector corresponding to the sample data, wherein a feature vector of the sample image is extracted by manifold dimension reduction; and based on the sample feature vector, training by semi-supervised learning to obtain a target decision-making model, wherein the target decision-making model is used for decision-making classification. | 2022-02-17 |
20220051059 | METHOD AND APPARATUS FOR TRAINING IMAGE RECOGNITION MODEL, AND IMAGE RECOGNITION METHOD AND APPARATUS - A method for training an image recognition model includes: obtaining training image sets; obtaining a first predicted probability, a second predicted probability, a third predicted probability, and a fourth predicted probability based on the training image sets by using an initial image recognition model; determining a target loss function according to the first predicted probability, the second predicted probability, the third predicted probability, and the fourth predicted probability; and training the initial image recognition model based on the target loss function, to obtain an image recognition model. | 2022-02-17 |
20220051060 | METHODS FOR CREATING PRIVACY-PROTECTING SYNTHETIC DATA LEVERAGING A CONSTRAINED GENERATIVE ENSEMBLE MODEL - Described herein are methods for generating and using a constrained ensemble of GANs. The constrained ensemble of GANs can be used to generate synthetic data that is (1) representative of the original data, and (2) not closely resembling the original data. An example method includes generating a constrained ensemble of GANs, where the constrained ensemble of GANs includes a plurality of ensemble members. The method also includes analyzing performance of the constrained ensemble of GANs by comparing a temporary performance metric to a baseline performance metric, and halting generation of the constrained ensemble of GANs in response to the analysis. The method also includes generating a synthetic dataset using the constrained ensemble of GANs. The synthetic dataset is sufficiently similar to the original dataset to permit data sharing for research purposes but alleviates privacy concerns due to differences in mutual information between synthetic and real data. | 2022-02-17 |
20220051061 | ARTIFICIAL INTELLIGENCE-BASED ACTION RECOGNITION METHOD AND RELATED APPARATUS - An artificial intelligence-based action recognition method includes: determining, according to video data comprising an interactive object, node sequence information corresponding to video frames in the video data, the node sequence information of each video frame including position information of nodes in a node sequence, the nodes in the node sequence being nodes of the interactive object that are moved to implement a corresponding interactive action; determining action categories corresponding to the video frames in the video data, including: determining, according to the node sequence information corresponding to N consecutive video frames in the video data, action categories respectively corresponding to the N consecutive video frames; and determining, according to the action categories corresponding to the video frames in the video data, a target interactive action made by the interactive object in the video data. | 2022-02-17 |
20220051062 | METHODS AND SYSTEMS FOR SCREENING IMAGES - A method of screening a continuous-tone image is configured to produce an output image to be printed on a surface. The continuous-tone image comprises a plurality of pixels having respective corresponding intended print locations. The method includes | 2022-02-17 |
20220051063 | Payment Card with Removable Insert and Identification Elements - Aspects described herein may allow for a payment card assembly including a payment card having a first surface, an opposed second surface, and an aperture extending through the payment card from the first surface to the second surface. An insert may be removably received in the aperture. Each of a plurality of identification elements may be configured to be removably received in the aperture and have an identification characteristic different than an identification characteristic of each of the other identification elements. | 2022-02-17 |
20220051064 | METAL-DOPED EPOXY RESIN TRANSACTION CARD AND PROCESS FOR MANUFACTURE - A transaction card, and processes for the manufacture thereof, having a core layer, optionally, one or more layers or coatings over the core layer, and at least one of a magnetic stripe, a machine readable code, and a payment module chip disposed in or on the card and suitable for rendering the card operable for conducting a transaction. The core layer comprises a metal-doped cured epoxy comprised of metal particles distributed in a binder consisting essentially of a cured, polymerized epoxy resin, the core comprising greater than 50%, preferably greater than 75%, and more preferably greater than 90%, of the weight and/or volume of the card. In some embodiments, the core includes a metal insert enveloped with the metal-doped curable epoxy, wherein the periphery of the epoxy extends beyond the periphery of the metal insert and has material properties more conducive to cutting or punching than the metal insert. | 2022-02-17 |
20220051065 | RADIO FREQUENCY IDENTIFICATION FOR MULTI-DEVICE WIRELESS CHARGERS - Systems, methods and apparatus for wireless charging are disclosed. A charging device has multiple transmitting coils, a driver circuit configured to provide a charging current to the resonant circuit, and a controller. The charging cells may provide a charging surface. The driver circuit may be configured to provide a charging current to the transmitting coils. The charging device includes a radio interface configured for transmitting and receiving radio frequency identification (RFID) signals. The controller may be configured to transmit an interrogation signal configured to stimulate RFID tags through the radio interface when a chargeable device is initially placed on or near a surface of the wireless charger, refrain from initiating wireless charging of a chargeable device when a response to the interrogation signal is received, and negotiate a charging configuration when a response to the interrogation signal is not received. | 2022-02-17 |
20220051066 | READING APPARATUS - According to one embodiment, a reading apparatus includes a shielding body, an antenna, and a reader and writer. The shielding body is formed in a box shape with an upper opening, to place an accommodating body and to shield radio waves. The antenna is provided in the shielding body to receive information from an RFID tag attached to a product that is passing through the opening. The reader and writer is connected to the antenna to read information of the product from the information received by the antenna. | 2022-02-17 |
20220051067 | IDENTIFICATION CARD WITH A GLASS SUBSTRATE, IDENTIFICATION CARD WITH A CERAMIC SUBSTRATE AND MANUFACTURING METHODS THEREOF - The identification card includes a glass substrate provided with first and second surfaces facing opposite directions. The first surface of the glass substrate is provided with at least one first recess, and the identification card includes a first resin material layer filled in the first recess; and at least one of a first information storage medium, a first security feature, and a first decorative feature bonded to the first resin material layer. In a variation, an identification card has a ceramic substrate with first and second surfaces facing in opposite directions. The first surface of the ceramic substrate is provided with at least one first recess, and the identification card includes a first resin material layer filled in the first recess(es); and at least one of a first information storage medium, a first security feature, and a first decorative feature bonded to the first resin material layer. | 2022-02-17 |
20220051068 | MULTIMEDIA CARD AND MOBILE ELECTRONIC DEVICE - A multimedia card includes a substrate, and a main control chip, a memory chip, and interface contacts that are disposed on the substrate. The main control chip and the memory chip are covered with a packaging layer. The interface contacts include a power contact, configured to receive a first voltage that is input from the outside; and a transformer circuit is further disposed on the substrate, is coupled to the interface contacts, the main control chip, and the memory chip, and is configured to convert the input first voltage into a second voltage, to provide two types of power supplies with the first voltage and the second voltage for the main control chip and the memory chip. In the foregoing manner, an area of the multimedia card is reduced, and a quantity of working modes of the multimedia card increases. | 2022-02-17 |
20220051069 | RF TAG ANTENNA, RF TAG, TIRE PROVIDED WITH RF TAG, AND TIRE WITH BUILT-IN RF TAG - [Problem] To provide: an RF tag antenna which is able to be fitted to a tire that contains a steel wire and a carbon powder; an RF tag; a tire which is provided with an RF tag; and a tire with a built-in RF tag. [Solution] An RF tag antenna | 2022-02-17 |
20220051070 | IDEATION VIRTUAL ASSISTANT TOOLS - An intelligence-driven virtual assistant for automated documentation of new ideas is provided. During a brainstorming session, one or more user participants may discuss and identify one or more ideas. Such ideas may be tracked, catalogued, analyzed, developed, and further expanded upon through use of an intelligence-driven virtual assistant. Such virtual assistant may capture user input data embodying one or more new ideas and intelligently process the same in accordance with creativity tool workflows. Such workflows may further guide development and | 2022-02-17 |
20220051071 | DIGITAL PERSONAL ASSISTANT WITH CONFIGURABLE PERSONALITY - Auto-generation of dialog for a skill executed by a digital assistant is performed. Details descriptive of a digital assistant persona are received from a personality studio user interface. A personality type is generated based on the details. A standard vocabulary is auto-generated using the personality type. The standard vocabulary is exported for use in a skill. Prompts are auto-generated for the skill based on the standard vocabulary. | 2022-02-17 |
20220051072 | CONTROLLING CONVERSATIONAL DIGITAL ASSISTANT INTERACTIVITY - Interaction between a user and a conversational digital assistant executing on a computing device is controlled. Multiple interaction pairs are stored in one or more datastores accessible by the conversational digital assistant. Each interaction pair includes an interaction query and an associated assistance operation. An interactive engagement event is detected between the user and the conversational digital assistant, responsive to the storing operation. An interaction pair is selected from the one or more datastores, responsive to the operation of detecting an interactive engagement event. The interaction query of the selected interaction pair is communicated to the user. The assistance operation associated with the communicated interaction query is executed, responsive to receipt of a response from the user to the interaction query. | 2022-02-17 |
20220051073 | Integrated Assistance Platform - Systems and methods disclosed herein relate to autonomous agents. A first autonomous agent receives, from a first sensor, a first set of event data indicating events relating to a subject. The first autonomous agent provides the first set of event data to a data aggregator. The first autonomous agent receives, from the data aggregator, correlated event data including events sensed by the first autonomous agent and a second autonomous agent. The first autonomous agent applies a machine learning model to the correlated event data to predict a first pattern of activity and determines, based on the first pattern of activity, that a first action is to be performed, causing the first actuator module to perform the first action. | 2022-02-17 |
20220051074 | QUANTITATIVE SPECTRAL DATA ANALYSIS AND PROCESSING METHOD BASED ON DEEP LEARNING - A quantitative spectral data analysis and processing method based on deep learning is provided. Pre-processing is not required to be performed on data in the disclosure. Effective information and background information may be learned from raw spectral data, and accuracy of quantitative spectral analysis is improved. In the disclosure, high-dimensional features are extracted from spectral data through three convolutional layers. Convolution kernels of 1×1 are adopted in the second layer, which reduces the dimensionality and the amount of calculation. Further, convolution kernels of three different sizes are adopted in the third convolutional layer, which learns features of different sizes hidden in the spectral data from the raw spectral data. In this disclosure, data is not pre-processed, and the original data may be directly processed. This method has a high generalization ability when a spectral noise distribution of a test set is different from that of a training set. | 2022-02-17 |
20220051075 | METHODS AND APPARATUSES FOR TRACKING WEAK SIGNAL TRACES - Systems, methods, apparatuses, and computer program products for tracking weak signal traces under severe noise and/or distortions. A method may include tracking at least one candidate frequency trace from a time-frequency representation of a signal. The method may also include identifying a frequency trace of the signal based on tracking results. In addition, the method may include outputting an estimated frequency vector related to the frequency trace. Further, the tracking may be performed under a noisy condition environment. | 2022-02-17 |
20220051076 | System and Method For Generating Parametric Activation Functions - The embodiments describe a technique for customizing activation functions automatically, resulting in reliable improvements in performance of deep learning networks. Evolutionary search is used to discover the general form of the function, and gradient descent to optimize its parameters for different parts of the network and over the learning process. The new approach discovers new parametric activation functions which improve performance over previous activation functions by utilizing a flexible search space that can represent activation functions in an arbitrary computation graph. In this manner, the activation functions are customized to both time and space for a given neural network architecture. | 2022-02-17 |
20220051077 | SYSTEM AND METHOD FOR SELECTING COMPONENTS IN DESIGNING MACHINE LEARNING MODELS - Disclosed are example embodiments of systems and methods for selecting components for building graph-based learning machines. An example system for selecting components for building graph-based learning machines includes a reference learning machine, one or more test signals, and a component analyzer module. The component analyzer module is configured to analyze, using the one or more test signals, one or more component in the reference machine by ranking different components in the reference learning machine in terms of their efficiency and effectiveness. | 2022-02-17 |
20220051078 | TRANSFORMER NEURAL NETWORK IN MEMORY - Apparatuses and methods can be related to implementing a transformer neural network in a memory. A transformer neural network can be implemented utilizing a resistive memory array. The memory array can comprise programmable memory cells that can be programed and used to store weights of the transformer neural network and perform operations consistent with the transformer neural network. | 2022-02-17 |
20220051079 | AUTO-ENCODING USING NEURAL NETWORK ARCHITECTURES BASED ON SYNAPTIC CONNECTIVITY GRAPHS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for selecting a neural network architecture for performing a prediction task for data elements of a specified data type. In one aspect, a method comprises: obtaining data defining a synaptic connectivity graph representing synaptic connectivity between neurons in a brain of a biological organism; generating a plurality of candidate graphs based on the synaptic connectivity graph; for each candidate graph of the plurality of candidate graphs: determining an auto-encoding neural network architecture based on the candidate graph; training an auto-encoding neural network having the auto-encoding neural network architecture to perform an auto-encoding task for data elements of the specified data type; and determining a performance measure characterizing a performance of the auto-encoding neural network in performing the auto-encoding task; and selecting the neural network architecture based on the performance measures. | 2022-02-17 |
20220051080 | SYSTEM, METHOD, AND COMPUTER PROGRAM FOR TRANSFORMER NEURAL NETWORKS - A system and method include one or more processing devices to implement a sequence of transformer neural networks, first and second sequence-to-sequence layers that each comprises a sequence of nodes, and an output layer to provide the first set and second set of score vectors to a downstream application of a natural language processing (NLP) task. | 2022-02-17 |
20220051081 | DATA PROCESSING METHOD, CORRESPONDING PROCESSING SYSTEM, SENSOR DEVICE AND COMPUTER PROGRAM PRODUCT - An embodiment method comprises applying domain transformation processing to a time-series of signal samples, received from a sensor coupled to a dynamical system, to produce a dataset of transformed signal samples therefrom, buffering the transformed signal samples, obtaining a data buffer having transformed signal samples as entries, computing statistical parameters of the data buffer, producing a drift signal indicative of the evolution of the dynamical system as a function of the computed statistical parameters, selecting transformed signal samples buffered in the data buffer as a function of the drift signal, applying normalization processing to the buffered transformed signal samples, applying auto-encoder artificial neural network processing to a dataset of rescaled signal samples, and producing a dataset of reconstructed signal samples and calculating an error of reconstruction. The error of reconstruction reaching or failing to reach a threshold value is indicative of the evolution of the dynamical system over time. | 2022-02-17 |
20220051082 | DATA-BASED ESTIMATION OF OPERATING BEHAVIOR OF AN MR DEVICE - A method for computer-aided estimation of an operating behavior for an MR device having a set of device components. The method includes providing a memory with a set of digital models, wherein each digital model simulates the operating behavior of a respective component of the MR device, and wherein the digital models are interconnected in accordance with the structure and/or functionality of the MR device to form a higher-order model which simulates the operating behavior of the MR device, acquiring operating data of the MR device, and, in an inference phase, accessing, by a processor, the memory with the acquired operating data to estimate the operating behavior of the MR device in order to output a result. | 2022-02-17 |
20220051083 | LEARNING WORD REPRESENTATIONS VIA COMMONSENSE REASONING - A method trains a recursive reasoning unit (RRU). The method receives a graph for a set of words and a matrix for a different set of words. The graph maps each word in the set of words to a node with a node label and indicates a relation between adjacent nodes by an edge with an edge label. The matrix indicates word co-occurrence frequency of the different set of words. The method discovers, by the RRU, reasoning paths from the graph for word pairs by mapping word pairs from the set of words into a source word and a destination word and finding the reasoning paths therebetween. The method predicts word co-occurrence frequency using the reasoning paths. The method updates, responsive to the word co-occurrence frequency, model parameters of the RRU until a difference between a predicted and true word occurrence is less than a threshold amount to provide a trained RRU. | 2022-02-17 |
20220051084 | METHOD AND APPARATUS WITH CONVOLUTION OPERATION PROCESSING BASED ON REDUNDANCY REDUCTION - A processor-implemented neural network layer convolution operation method includes: obtaining a first input plane of an input feature map and a first weight plane of a weight kernel; generating base planes, corresponding to an intermediate operation result of the first input plane, based on at least a portion of available weight values of the weight kernel; generating first accumulation data based on at least one plane corresponding to weight element values of the first weight plane among the first input plane and the base planes; and generating a first output plane of an output feature map based on the first accumulation data. | 2022-02-17 |
20220051085 | RUNTIME HYPER-HETEROGENEOUS OPTIMIZATION FOR PROCESSING CIRCUITS EXECUTING INFERENCE MODEL - An electronic device including a plurality of processing circuits is disclosed, wherein the device includes circuitry configured to perform the steps of: receiving a model and input data for execution; analyzing the model to obtain a graph partition size of the model; partitioning the model into a plurality of graphs based on the graph partition size, wherein each of the graphs comprises a portion of operations of the model; deploying the plurality of graphs to at least two of the processing circuits, respectively; and generating output data according to results of the at least two of the processing circuits executing the plurality of graphs. | 2022-02-17 |
20220051086 | VECTOR ACCELERATOR FOR ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING - The present disclosure provides an accelerator for processing a vector or matrix operation. The accelerator comprises a vector processing unit comprising a plurality of computation units having circuitry configured to process a vector operation in parallel; a matrix multiplication unit comprising a first matrix multiplication operator, a second matrix multiplication operator, and an accumulator, the first matrix multiplication operator and the second matrix multiplication operator having circuitry configured to process a matrix operation and the accumulator having circuitry configured to accumulate output results of the first matrix multiplication operator and the second matrix multiplication operator; and a memory storing input data for the vector operation or the matrix operation and being configured to communicate with the vector processing unit and the matrix multiplication unit. | 2022-02-17 |
20220051087 | Neural Network Architecture Using Convolution Engine Filter Weight Buffers - Hardware for implementing a Deep Neural Network (DNN) having a convolution layer, the hardware comprising a plurality of convolution engines each configured to perform convolution operations by applying filters to data windows, each filter comprising a set of weights for combination with respective data values of a data window; and one or more weight buffers accessible to each of the plurality of convolution engines over an interconnect, each weight buffer being configured to provide weights of one or more filters to any of the plurality of convolution engines; wherein each of the convolution engines comprises control logic configured to request weights of a filter from the weight buffers using an identifier of that filter. | 2022-02-17 |
20220051088 | ARTIFICIAL INTELLIGENCE ACCELERATOR, ARTIFICIAL INTELLIGENCE ACCELERATION DEVICE, ARTIFICIAL INTELLIGENCE ACCELERATION CHIP, AND DATA PROCESSING METHOD - An artificial intelligence accelerator, a device, a chip, and a data processing method are provided. The artificial intelligence accelerator has a capability to respectively process data with a depth of a second quantity in parallel by using a first quantity of operation functions, and includes a control unit, a computing engine, a group control unit, and a group cache unit. The control unit is configured to parse a processing instruction for a target network layer in a neural network model to obtain a concurrent instruction, the computing engine is configured to perform parallel processing on a target input tile in the input data set according to the concurrent instruction to obtain target output data corresponding to the target input tile, and the group control unit is configured to store, by group, the target output data into at least one output cache of the group cache unit. | 2022-02-17 |
20220051089 | Neural Network Accelerator in DIMM Form Factor - The technology relates to a neural network dual in-line memory module (NN-DIMM), a microelectronic system comprising a CPU and a plurality of the NN-DIMMs, and a method of transferring information between the CPU and the plurality of the NN-DIMMs. The NN-DIMM may include a module card having a plurality of parallel edge contacts adjacent to an edge of a slot connector thereof and configured to have the same command and signal interface as a standard dual in-line memory module (DIMM). The NN-DIMM may also include a deep neural network (DNN) accelerator affixed to the module card, and a bridge configured to transfer information between the DNN accelerator and the plurality of parallel edge contacts via a DIMM external interface. | 2022-02-17 |
20220051090 | CLASSIFICATION METHOD USING A KNOWLEDGE GRAPH MODULE AND A MACHINE LEARNING MODULE - An approach for determining a concatenated confidence value of a first class using an artificial-intelligence module (AI-module) for performing a classification based on the concatenated confidence value of the first class. The AI-module comprises a knowledge graph module, a machine learning module, and a weighting module. A processor determines a first confidence value of the first class as a first function of an input dataset using the machine learning module. A processor determines a second confidence value of the first class as a second function of the input dataset using the knowledge graph module. A processor determines the concatenated confidence value of the first class as a third function of the first confidence value of the first class, the second confidence value of the first class, and a value of a weighting parameter of the weighting module. | 2022-02-17 |
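As an illustrative sketch only: the abstract above says the concatenated confidence is a third function of the two module confidences and a weighting parameter, but does not specify that function. A convex combination is one plausible choice; the function name and the [0, 1] weight range below are assumptions.

```python
def concatenated_confidence(ml_conf, kg_conf, weight):
    """Blend a machine-learning-module confidence with a
    knowledge-graph-module confidence using one weighting parameter.
    The convex combination is an assumed form of the abstract's
    unspecified 'third function'."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return weight * ml_conf + (1.0 - weight) * kg_conf
```

With `weight = 0.5` the two modules contribute equally; pushing `weight` toward 1 favors the machine learning module.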
20220051091 | DETECTION OF ACTIVATION IN ELECTROGRAMS USING NEURAL-NETWORK-TRAINED PREPROCESSING OF INTRACARDIAC ELECTROGRAMS - A method includes collecting a plurality of bipolar electrograms and respective unipolar electrograms of patients, the electrograms including annotations in which one or more human reviewers have identified and marked a window-of-interest and one or more activation times inside the window-of-interest. A ground truth data set is generated from the electrograms, for training at least one electrogram-preprocessing step of a Machine Learning (ML) algorithm. The ML algorithm is applied to the electrograms, to at least train the at least one electrogram-preprocessing step, so as to detect an occurrence of an activation in a given bipolar electrogram within the window-of-interest. | 2022-02-17 |
20220051092 | SYSTEM AND METHODS FOR TRANSLATING ERROR MESSAGES - The disclosed systems and methods may receive a first stack trace and a first user classification and determine whether the first user classification is an administrator. When the first user classification is not the administrator, the systems and methods may identify and redact, using a first neural network, first sensitive information from the first stack trace to generate a redacted first stack trace, encode, using a second neural network, the redacted first stack trace to generate a first embedding, decode, using a third neural network, the first embedding to generate a first text explanation corresponding to the redacted first stack trace, decode, using a fourth neural network, the first embedding to generate a second stack trace corresponding to the redacted first stack trace, and transmit, to a first user device for display, the first text explanation and the second stack trace. | 2022-02-17 |
20220051093 | TECHNIQUES FOR TRAINING AND INFERENCE USING MULTIPLE PROCESSOR RESOURCES - Apparatuses, systems, and techniques for neural network training and inference using multiple processor resources. In at least one embodiment, one or more neural networks are used to generate one or more second versions of one or more images based, at least in part, on a first version of the one or more images and a three-dimensional representation of the one or more first versions of the one or more images. | 2022-02-17 |
20220051094 | MESH BASED CONVOLUTIONAL NEURAL NETWORK TECHNIQUES - Convolutional operators for triangle meshes are determined to construct one or more neural networks. In at least one embodiment, convolutional operators, pooling operators, and unpooling operators are determined to construct the one or more neural networks, in which the same learned weights from the one or more neural networks can further be used for triangle meshes with different topologies. | 2022-02-17 |
20220051095 | Machine Learning Computer - A computer comprising a plurality of processing units, each processing unit having an execution unit and access to computer memory which stores code executable by the execution unit and input values of an input vector to be processed by the code, the code, when executed, configured to access the computer memory to obtain multiple pairs of input values of the input vector, determine a maximum or corrected maximum input value of each pair as a maximum result element, determine and store in a computer memory a maximum or corrected maximum result of each pair of maximum result elements as an approximation to the natural log of the sum of the exponents of the input values and access the computer memory to obtain each input value and apply it to the maximum or corrected maximum result to generate each output value of a Softmax output vector. | 2022-02-17 |
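The pairwise maximum/corrected-maximum reduction described above can be sketched as follows. This is a toy illustration, not the patented hardware: for a pair, the "corrected maximum" max(a, b) + log(1 + e^(-|a-b|)) equals log(e^a + e^b) exactly, while dropping the log term gives the cheaper plain-maximum approximation.

```python
import math

def corrected_max(a, b):
    # log(e^a + e^b) = max(a, b) + log(1 + e^(-|a - b|));
    # omitting the log1p correction yields the plain-maximum variant
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def softmax(xs):
    # reduce pairs to estimate log(sum(e^x)), then normalize each input
    log_sum = xs[0]
    for x in xs[1:]:
        log_sum = corrected_max(log_sum, x)
    return [math.exp(x - log_sum) for x in xs]
```

Because the corrected-maximum is exact for each pair, the resulting Softmax vector sums to 1; a hardware implementation would typically approximate the log1p term.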
20220051096 | Method and Apparatus for Training a Quantized Classifier - A computer-implemented method for training a classifier is disclosed. The classifier is designed to determine an output (y) for an input data point (x). The output (y) characterizes a classification of the input data point (x). The classifier comprises a multiplicity of weights on the basis of which the output (y) is determined. At least one weight of the multiplicity of weights is quantized to a predefined first number of first values. Each two consecutive first values differ by a distance value. The distance value is also adjusted for training and the multiplicity of weights is not adjusted. | 2022-02-17 |
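A minimal sketch of the quantization scheme described above, in which consecutive levels differ by a learnable distance value (step) while the weights themselves are not adjusted. The symmetric level placement and function name are assumptions; the abstract only fixes the number of levels and the uniform spacing.

```python
def quantize_weight(w, step, n_levels=4):
    """Uniform quantizer: n_levels values spaced `step` apart.
    Per the abstract, training adjusts `step` (the distance value)
    rather than the weight; symmetric levels are an assumed layout."""
    lo, hi = -(n_levels // 2), n_levels // 2 - 1
    q = max(lo, min(hi, round(w / step)))  # nearest level, clipped to range
    return q * step
```

Shrinking `step` during training trades dynamic range for resolution without touching the underlying weights.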
20220051097 | CONTROL DEVICE AND CONVERTER - A control device that controls an output voltage of a converter includes: a neural network configured to generate a control signal for controlling a power supply block based on a detection signal from an output stage of the power supply block that supplies power to a load of the converter; a model generator configured to generate a model of a nonlinear dynamic system by machine-learning from the detection signal; a model storage configured to store the model generated by the model generator; and a model switch configured to switch to a model, which is selected from models stored in the model storage and is optimal for a latest detection signal, and provide the switched model to the neural network, wherein the neural network generates the control signal based on a future output voltage predicted by using the model provided by the model switch. | 2022-02-17 |
20220051098 | VOICE ACTIVATED, MACHINE LEARNING SYSTEM FOR ITERATIVE AND CONTEMPORANEOUS RECIPE PREPARATION AND RECORDATION - A system and process are provided for assisting a user to formulate and document a recipe as the user creates the recipe and cooks in real time. The user may speak to the system to describe or dictate appearances, quantities, ingredients, cooking time, and other factors and conditions, and the system interpolates, extrapolates, interacts, and makes suggestions to the user to complete and record the recipe without interfering with or halting the culinary process. As the system works with the user, the system grows in intelligence through an iterative learning process to become an AI sous chef. | 2022-02-17 |
20220051099 | ATTENTION-BASED SEQUENCE TRANSDUCTION NEURAL NETWORKS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating an output sequence from an input sequence. In one aspect, one of the systems includes an encoder neural network configured to receive the input sequence and generate encoded representations of the network inputs, the encoder neural network comprising a sequence of one or more encoder subnetworks, each encoder subnetwork configured to receive a respective encoder subnetwork input for each of the input positions and to generate a respective subnetwork output for each of the input positions, and each encoder subnetwork comprising: an encoder self-attention sub-layer that is configured to receive the subnetwork input for each of the input positions and, for each particular input position in the input order: apply an attention mechanism over the encoder subnetwork inputs using one or more queries derived from the encoder subnetwork input at the particular input position. | 2022-02-17 |
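The encoder self-attention sub-layer named above can be sketched as single-head scaled dot-product attention. This toy version (one head, no masking, no layer normalization, assumed projection matrices `Wq`, `Wk`, `Wv`) illustrates the mechanism, not the patented architecture.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # project each input position (row of X) into queries, keys, values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # scaled dot-product logits
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                              # attention-weighted values
```

Each output position is a convex combination of all value vectors, with weights derived from that position's query against every key.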
20220051100 | INTELLIGENT REGULARIZATION OF NEURAL NETWORK ARCHITECTURES - A trained computer model includes a direct network and an indirect network. The indirect network generates expected weights or an expected weight distribution for the nodes and layers of the direct network. These expected characteristics may be used to regularize training of the direct network weights and encourage the direct network weights towards those expected, or predicted by the indirect network. Alternatively, the expected weight distribution may be used to probabilistically predict the output of the direct network according to the likelihood of different weights or weight sets provided by the expected weight distribution. The output may be generated by sampling weight sets from the distribution and evaluating the sampled weight sets. | 2022-02-17 |
20220051101 | METHOD AND APPARATUS FOR COMPRESSING AND ACCELERATING MULTI-RATE NEURAL IMAGE COMPRESSION MODEL BY MICRO-STRUCTURED NESTED MASKS AND WEIGHT UNIFICATION - A method of multi-rate neural image compression is performed by at least one processor and includes selecting encoding masks, based on a first hyperparameter, and performing a convolution of a first plurality of weights of a first neural network and the selected encoding masks to obtain first masked weights. The method further includes encoding an input image to obtain an encoded representation, using the first masked weights, and encoding the obtained encoded representation to obtain a compressed representation. | 2022-02-17 |
20220051102 | METHOD AND APPARATUS FOR MULTI-RATE NEURAL IMAGE COMPRESSION WITH STACKABLE NESTED MODEL STRUCTURES AND MICRO-STRUCTURED WEIGHT UNIFICATION - A method of multi-rate neural image compression with stackable nested model structures is performed by at least one processor and includes iteratively stacking, on a first set of weights of a first neural network, a first plurality of sets of weights of a first plurality of stackable neural networks corresponding to a current hyperparameter, wherein the first set of weights of the first neural network remains unchanged, encoding an input image to obtain an encoded representation, using the first set of weights of the first neural network on which the first plurality of sets of weights of the first plurality of stackable neural networks is stacked, and encoding the obtained encoded representation to determine a compressed representation. | 2022-02-17 |
20220051103 | SYSTEM AND METHOD FOR COMPRESSING CONVOLUTIONAL NEURAL NETWORKS - An apparatus is provided to compress CNN models using a combination of filter pruning and tensor decomposition. For example, the apparatus accesses a trained CNN that includes convolutional tensors. The apparatus prunes the filters of a convolutional tensor to generate a sparse tensor. Further, the apparatus decomposes the sparse tensor to generate a low-rank approximation of the sparse tensor. The low-rank approximation of the sparse tensor includes a core tensor and principal tensors. The apparatus generates a convolutional flow that includes the core tensor and convolutional operations generated based on the principal tensors. The apparatus may replace some or all the convolutional tensors in the trained CNN with the corresponding convolutional flows. The apparatus may fine-tune the updated CNN by re-training the updated CNN. The number of epochs for re-training the updated CNN may be smaller than the number of epochs for training the CNN. | 2022-02-17 |
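The prune-then-decompose pipeline above can be sketched on a 2-D weight matrix. Note the assumptions: magnitude-based filter pruning is one common pruning criterion, and a truncated SVD stands in for the patent's core/principal tensor decomposition, which on higher-order tensors would typically be a Tucker-style factorization.

```python
import numpy as np

def prune_and_decompose(W, keep_ratio=0.5, rank=2):
    # magnitude pruning: keep only the largest-norm filters (rows)
    norms = np.linalg.norm(W, axis=1)
    k = max(1, int(keep_ratio * W.shape[0]))
    threshold = np.sort(norms)[-k]
    sparse = W * (norms >= threshold)[:, None]
    # truncated SVD as a stand-in for the core/principal decomposition
    U, s, Vt = np.linalg.svd(sparse, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]
```

In practice the factors would be kept separate (two smaller convolutions instead of one), and a short fine-tuning pass would recover accuracy lost to pruning and rank truncation.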
20220051104 | ACCELERATING INFERENCE OF TRADITIONAL ML PIPELINES WITH NEURAL NETWORK FRAMEWORKS - Methods, systems, and computer program products are provided for generating a neural network model. A ML pipeline parser is configured to identify a set of ML operators for a previously trained ML pipeline, and map the set of ML operators to a set of neural network operators. The ML pipeline parser generates a first neural network representation using the set of neural network operators. A neural network optimizer is configured to perform an optimization on the first neural network representation to generate a second neural network representation. A tensor set provider outputs a set of tensor operations based on the second neural network representation for execution on a neural network framework. In this manner, a traditional ML pipeline can be converted into a neural network pipeline that may be executed on an appropriate framework, such as one that utilizes specialized hardware accelerators. | 2022-02-17 |
20220051105 | TRAINING TEACHER MACHINE LEARNING MODELS USING LOSSLESS AND LOSSY BRANCHES - Some embodiments of the present invention are directed to techniques for training teacher neural networks (TNNs) and student neural networks (SNNs). A training data set is received with a lossless set of data and a corresponding lossy set of data. Two branches of a TNN are established, with one branch trained using the lossless data (a lossless branch) and one trained using the lossy data (a lossy branch). Weights for the two branches are tied together. The lossy branch, now isolated from the lossless branch, generates a set of soft targets for initializing an SNN. These generated soft targets benefit from the training of the lossless branch through the weights that were tied together between each branch, despite isolating the lossless branch from the lossy branch during soft-target generation. | 2022-02-17 |
20220051106 | METHOD FOR TRAINING VIRTUAL ANIMAL TO MOVE BASED ON CONTROL PARAMETERS - A method for training a virtual animal to move based on control parameters comprises an imitation learning stage and an adaptive control stage. The imitation learning stage includes obtaining a first momentum, a second momentum, a current state and a target state of a reference animal associated with the virtual animal, analyzing the first and second momentum to generate primitive distributions, and training a first gating network to generate a first primitive influence so as to convert the current state to the target state. The adaptive control stage includes obtaining a control parameter set, training a second gating network to generate a second primitive influence so as to convert the current state to a combination of the current state and the control parameter set, and generating a determination result according to the first and second primitive influences to update the second gating network. | 2022-02-17 |
20220051107 | GENETIC MODELING FOR ATTRIBUTE SELECTION WITHIN A CLUSTER - Genetic modeling is used to generate new term sets from existing term sets within a population cluster. Term attributes corresponding to a first plurality of term sets are encoded as genes for a computer-executed genetic algorithm. Clients are clustered in categories based on client attributes. The genetic algorithm is applied to a category of clustered clients to distribute the term sets to clients in the category. A first subset of term sets is removed after receiving, within a first duration of time, a number of client responses that falls below a first threshold. A second subset of term sets is retained after receiving, within a second duration of time, a number of client responses above a second threshold. The second subset is bred using the genetic algorithm and a second plurality of term sets is generated based on the results. | 2022-02-17 |
20220051108 | METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR CONTROLLING GENETIC LEARNING FOR PREDICTIVE MODELS USING PREDEFINED STRATEGIES - Methods for controlling genetic learning for predictive models using predefined strategies may include, for each of a plurality of agents, selecting a type of predictive model. A strategy may be selected from predefined strategies. Candidate genomes may be generated and may include a plurality of genes. Each gene may be associated with a feature of the agent predictive model. A fit of each candidate genome to the agent strategy may be determined. A candidate genome may be selected based on the fit. For each of a plurality of epochs, a plurality of training iterations may be performed for each agent. A fitness of each agent predictive model may be determined. A subset of agents with a highest fitness may be determined. For each agent of the subset, at least one new agent may be generated. The genomes of the new agents may be merged with some genomes of the subset. | 2022-02-17 |
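The select-breed-mutate loop common to the two genetic-learning applications above (20220051107 and 20220051108) can be sketched generically. Everything concrete here is an assumption: binary genes, truncation selection of the fittest half, single-point crossover, and a fixed mutation rate; the patents' fitness functions and strategies are domain-specific.

```python
import random

def evolve(fitness, genome_len=8, pop=20, gens=30, seed=0):
    """Toy elitist genetic algorithm over binary genomes.
    `fitness` maps a genome (list of 0/1) to a score to maximize."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop // 2]          # truncation selection
        children = []
        while len(children) < pop - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, genome_len)      # single-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                  # point mutation
                i = rng.randrange(genome_len)
                child[i] ^= 1
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)
```

With `fitness=sum` (the classic one-max problem) the loop quickly drives genomes toward all ones.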
20220051109 | SYSTEM OF INTELLIGENCE LEARNING AGENTS LEVERAGING EXPERTISE CAPTURE FOR WORK HEURISTICS MANAGEMENT - A system of intelligence learning agents with work heuristics management is disclosed, which eliminates the need for programmers through the use of machine learning mechanisms that continuously adjust to changes to business processes over time and stay current with evolving business rules. Workers process work as they always have, with no interruptions or additional training required. A system for generating intelligent software learning agents with heuristics management is further disclosed. | 2022-02-17 |
20220051110 | NEIGHBORHOOD-BASED ENTITY RESOLUTION SYSTEM AND METHOD - A method for resolving entities in a knowledge graph includes determining node sets in the knowledge graph, where determining each of the node sets includes determining a first node, determining a second node in a semantic neighborhood of the first node, and determining a third node in the semantic neighborhood of the first node. For each node set, the second node and the third node are compared, and it is determined that the second node and the third node are a similar node pair. For each similar node pair, the first nodes of the node sets are aggregated, and a quantity of overlap between the semantic neighborhood of the second node and the semantic neighborhood of the third node is determined, and for each similar node pair, the second and third nodes are resolved as a single entity. | 2022-02-17 |
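The neighborhood-overlap quantity above can be sketched with a Jaccard coefficient over 1-hop neighborhoods in an adjacency-list graph. Jaccard is an assumed choice; the abstract only requires some quantity of overlap between the two semantic neighborhoods.

```python
def neighborhood_jaccard(graph, a, b):
    """Jaccard overlap of the 1-hop neighborhoods of nodes a and b.
    `graph` maps each node to an iterable of its neighbors; Jaccard
    is one assumed overlap measure, not the patented one."""
    na, nb = set(graph.get(a, ())), set(graph.get(b, ()))
    union = na | nb
    return len(na & nb) / len(union) if union else 0.0
```

A pair whose overlap exceeds some threshold would then be resolved as a single entity.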
20220051111 | KNOWLEDGE GRAPH ENHANCEMENT BY PRIORITIZING CARDINAL NODES - This document describes knowledge graph systems that determine cardinal nodes in a knowledge graph that provide the most impact on target nodes of a system and improve the system by adjusting the impact of the actual elements represented by the cardinal nodes. In one aspect, a method includes obtaining a knowledge graph that represents a given system and that includes multiple nodes that each represent an element of the given system. One or more target nodes are identified in the knowledge graph based on a value parameter for each node in the knowledge graph. A cardinal value that represents an impact that the node has on the one or more target nodes is determined for each node in the knowledge graph. A priority order of the nodes is determined for improvement based on the cardinal values. Data indicating one or more of the nodes is provided based on the order. | 2022-02-17 |
20220051112 | AUTOMATED MODEL PIPELINE GENERATION WITH ENTITY MONITORING, INTERACTION, AND INTERVENTION - Systems, computer-implemented methods, and computer program products to facilitate automated model pipeline generation with entity monitoring, interaction, and/or intervention are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise an interaction backend handler component that provides a recommended input action corresponding to a model pipeline candidate being evaluated in an automated model pipeline generation process. The computer executable components can further comprise a visualization render component that renders an input visualization corresponding to the model pipeline candidate based on the recommended input action. | 2022-02-17 |
20220051113 | NETWORK DEVICE IDENTIFICATION - Network device identification. A method includes extracting, from network traffic data of a plurality of user devices in a computer network, one or more data fragments relating to a device model of each user device, associating the one or more data fragments with device identification data assigned to each user device, determining a device model for a specific data fragment based on analyzing one or more data fields associated with the specific data fragment, and generating one or more device model identification rules based on the specific data fragment. | 2022-02-17 |
20220051114 | INFERENCE PROCESS VISUALIZATION SYSTEM FOR MEDICAL SCANS - An inference process visualization system is configured to generate inference process visualization data for a medical scan indicating an inference process flow of a plurality of sub-models applied to the medical scan and further indicating a plurality of inference data for the medical scan generated by applying the plurality of sub-models in accordance with the inference process flow. The inference process visualization system is further configured to facilitate display of the inference process visualization data via an interactive interface. | 2022-02-17 |
20220051115 | Control Platform Using Machine Learning for Remote Mobile Device Wake Up - Aspects of the disclosure relate to using machine learning for remote wake up of a mobile device. A computing platform may receive historical data corresponding to driving trip patterns. The computing platform may train a machine learning model using the historical data corresponding to the driving trip patterns. The computing platform may receive initial data corresponding to a particular individual, and input the initial data into the machine learning model, which may cause output of a predicted trip start time of a driving trip of the particular individual. The computing platform may send, to a mobile device corresponding to the particular individual, one or more commands directing the mobile device to wake up prior to the predicted trip start time and to initiate collection of driving trip data corresponding to the driving trip, which may cause the mobile device to be configured for the collection of driving trip data. | 2022-02-17 |
20220051116 | PARALLELIZED SCORING FOR ENSEMBLE MODEL - Provided are a computer-implemented method, a system, and a computer program product. The method comprises extracting features from a plurality of base models in an ensemble model. The plurality of base models are configured to provide respective prediction results. The ensemble model is configured to provide an overall prediction result from the prediction results of the plurality of base models. The features are associated with time performance of the base models. The method further comprises clustering the plurality of base models into a plurality of clusters based on the extracted features. The method further comprises assigning the plurality of base models to a plurality of parallel computation units based on the plurality of clusters. | 2022-02-17 |
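The cluster-then-assign step above can be sketched with a greedy longest-processing-time heuristic, assuming per-model scoring time is the extracted feature. The patent clusters models by richer time-performance features; balancing raw times across units, as below, is a simplified stand-in.

```python
import heapq

def assign_to_units(model_times, n_units):
    """Assign base models (name -> scoring time) to n_units parallel
    computation units, slowest model first, always to the least-loaded
    unit. A greedy LPT sketch, not the patented clustering scheme."""
    units = [(0.0, i, []) for i in range(n_units)]  # (load, unit id, models)
    heapq.heapify(units)
    for name, t in sorted(model_times.items(), key=lambda kv: -kv[1]):
        load, i, models = heapq.heappop(units)      # least-loaded unit
        models.append(name)
        heapq.heappush(units, (load + t, i, models))
    return {i: models for _, i, models in units}
```

Balancing total scoring time per unit keeps the slowest unit, and therefore the ensemble's overall latency, as small as possible.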
20220051117 | FIELD DATA MONITORING DEVICE, FIELD DATA MONITORING METHOD, AND FIELD DATA DISPLAY DEVICE - A field data monitoring device comprises a field data database which accumulates field data; a failure mode database which records a failure mode list and a failure-mode word-probability table, the failure mode list recording names of failure modes of products and occurrence probabilities of the failure modes, and the table holding appearance probabilities of words in the field data; a design production operation database which accumulates data of the products; a failure-mode estimating section that calculates attribution probabilities of the field data to the failure modes based on the occurrence probabilities and the appearance probabilities, the failure-mode estimating section classifying the field data according to the failure modes; and a failure-mode cause finding section that extracts conditions under which the failure modes easily occur, the conditions being extracted from the data in the design production operation database and of the products associated with the classified field data. | 2022-02-17 |
20220051118 | SITE CHARACTERIZATION FOR AGRICULTURE - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for characterization of a physical site. One of the methods includes obtaining, for each of one or more physical locations corresponding to a respective coordinate at a surface of a growing medium at the locations, sensor data comprising a sensor profile generated from measurements taken by each of a plurality of sensors on a sensor unit passing through the respective coordinate at a plurality of different depth levels within the growing medium at the location; providing the sensor data as input to one or more probabilistic models configured to receive the sensor data comprising the respective sensor profiles to predict one or more characteristics of the growing medium at each of the physical locations; and obtaining, as output from the one or more probabilistic models, the one or more predicted characteristics for each physical location. | 2022-02-17 |
20220051119 | MACHINE-LEARNING MODELS TO FACILITATE USER RETENTION FOR SOFTWARE APPLICATIONS - Systems described herein apply an ordered combination of machine-learning models to identify users who are likely to abandon use of an application, predict the reasons why those users are likely to abandon, and identify intervening actions that the application can perform to reduce the probability that the users will abandon the application. A first machine-learning model determines a retention-prediction value indicating a probability that the user will complete a target action in the application before a session terminates. If the retention-prediction value satisfies a threshold condition, a second machine-learning model determines a reason why the session is likely to terminate before the user completes the target action. A third machine-learning model determines an intervention action for the application to perform to increase the probability that the user will complete the target action before the session terminates. | 2022-02-17 |
20220051120 | INFORMATION PROCESSING SYSTEM - According to an embodiment, an information processing system solves a combinatorial optimization problem. The information processing system includes an Ising machine and a host unit. The Ising machine is hardware configured to perform a search process for searching for the ground state of an Ising model that represents the combinatorial optimization problem. The host unit is hardware connected to the Ising machine via an interface and configured to control the Ising machine. In the search process, for each of a plurality of Ising spins, the Ising machine alternately repeats an auxiliary variable update process for updating an auxiliary variable by a main variable and a main variable update process for updating the main variable by the auxiliary variable multiple times. Prior to the search process, the host unit transmits, to the Ising machine, an initial value of the auxiliary variable corresponding to each of the plurality of Ising spins. | 2022-02-17 |
20220051121 | PERFORMING AUTOMATIC QUBIT RELOCATION - Performing automatic qubit relocation is disclosed herein. A processor device of a first quantum computing device receives a system stress indicator from a system monitor that tracks a status of the first quantum computing device and/or a status of qubits maintained by the first quantum computing device. A relocation rule is applied to the system stress indicator to determine whether one or more qubits located at the first quantum computing device are to be relocated. If so, the one or more qubits are relocated from the first quantum computing device to a second quantum computing device (e.g., by physically transporting the qubits via a quantum channel, or by teleporting the qubits using pairs of entangled qubits, as non-limiting examples). The processor device also updates qubit registry records for the one or more qubits to indicate that the one or more qubits have been relocated. | 2022-02-17 |
20220051122 | VIBRATIONALLY ISOLATED CRYOGENIC SHIELD FOR LOCAL HIGH-QUALITY VACUUM - The disclosure describes various aspects of a vibrationally isolated cryogenic shield for local high-quality vacuum. More specifically, the disclosure describes a cryogenic vacuum system replicated in a small volume in a mostly room-temperature ultra-high vacuum (UHV) system by capping the volume with a suspended cryogenic cold finger coated with a high surface area sorption material to produce a localized extreme high vacuum (XHV) or near-XHV region. The system is designed to ensure that all paths from outgassing materials to the control volume, including multiple bounce paths off other warm surfaces, require at least one bounce off the high surface area sorption material on the cold finger. The outgassing materials can therefore be pumped before reaching the control volume. To minimize vibrations, the cold finger is only loosely, mechanically connected to the rest of the chamber, and is isolated, along with the cryogenic system, via soft vacuum bellows. | 2022-02-17 |
20220051123 | MODULAR AND DYNAMIC DIGITAL CONTROL IN A QUANTUM CONTROLLER - A quantum controller comprises a quantum control pulse generation circuit and a digital signal management circuit. The quantum control pulse generation circuit is operable to generate a quantum control pulse which can be processed by any of a plurality of controlled circuits, and generate a first digital signal which can be routed to any of the plurality of controlled circuits. The digital signal management circuit is operable to detect, during runtime, to which one or more of the plurality of controlled circuits the first digital signal is to be routed, to manipulate the first digital signal based on the one or more of the plurality of controlled circuits to which the first digital signal is to be routed, where the manipulation results in one or more manipulated digital signals, and to route the one or more manipulated digital signals to one or more of the plurality of controlled circuits. | 2022-02-17 |
20220051124 | APPARATUS AND METHODS FOR GAUSSIAN BOSON SAMPLING - An apparatus includes a light source to provide a plurality of input optical modes in a squeezed state. The apparatus also includes a network of interconnected reconfigurable beam splitters (RBSs) configured to perform a unitary transformation of the plurality of input optical modes to generate a plurality of output optical modes. An array of photon counting detectors is in optical communication with the network of interconnected RBSs and configured to measure the number of photons in each mode of the plurality of the output optical modes after the unitary transformation. The apparatus also includes a controller operatively coupled to the light source and the network of interconnected RBSs. The controller is configured to control at least one of the squeezing factor of the squeezed state of light, the angle of the unitary transformation, or the phase of the unitary transformation. | 2022-02-17 |
20220051125 | INTELLIGENT CLUSTERING OF ACCOUNT COMMUNITIES FOR ACCOUNT FEATURE ADJUSTMENT - There are provided systems and methods for intelligent clustering of account communities for account feature adjustment. A user may utilize an online account with the service provider to process transactions electronically. However, initially the account may be unverified or otherwise untrusted, and limits on electronic transaction processing may be imposed on the account. To provide intelligent feature adjustment, the service provider may utilize a machine learning technique to cluster verified accounts into communities based on their data representations. Thereafter, when a transaction by an unverified account violates a limit imposed on the unverified account, the service provider may process the unverified account using a machine learning model trained for account correlations to the community clusters in order to determine a corresponding community cluster of accounts and adjust a feature based on behaviors and traits of the verified accounts in the community cluster. | 2022-02-17 |
20220051126 | CLASSIFICATION OF ERRONEOUS CELL DATA - Classification of erroneous cell data includes performing unsupervised pre-training of a machine learning model to learn a bidirectional encoder representation of data cells, obtaining an initial training set, with labeled training examples that correlate observed cell data to correct cell data, for training the machine learning model to classify cell data, automatically augmenting the initial training set to produce an augmented training set, where the augmenting includes identifying patterns in the labeled training examples, generating transformation functions, and using the transformation functions, learning an augmentation strategy and automatically generating additional training examples correlating erroneous data values to correct data values, and training the machine learning model using the augmented training set. | 2022-02-17 |
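The augmentation step described in 20220051126 above can be sketched in miniature: derive simple transformation functions from labeled (observed, correct) cell pairs, then apply them to clean values to mint additional training examples. This is a hedged illustration only; the two corruption patterns (whitespace padding, wrong casing) and all function names are assumptions, not the patented method.

```python
# Hedged sketch: augment a small labeled set of (observed, correct) cell
# values by learning reusable corruption functions from the labeled pairs
# and applying them to clean values to generate new (erroneous, correct)
# training examples. The specific patterns are illustrative assumptions.

def learn_transformations(labeled_pairs):
    """Derive simple corruption functions from (observed, correct) examples."""
    transforms = []
    for observed, correct in labeled_pairs:
        if observed == correct.strip():
            transforms.append(lambda v: " " + v + " ")   # whitespace padding
        elif observed == correct.upper():
            transforms.append(lambda v: v.upper())       # wrong casing
    return transforms

def augment(clean_values, transforms):
    """Generate (erroneous, correct) pairs by corrupting clean values."""
    return [(t(v), v) for v in clean_values for t in transforms]

seed_pairs = [("ACME", "Acme"), ("ok", " ok ")]          # observed -> correct
transforms = learn_transformations(seed_pairs)
augmented = augment(["Widget", "Gadget"], transforms)
```

Each clean value yields one new labeled example per learned transformation, so two clean values and two transformations produce four augmented pairs.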
20220051127 | MACHINE LEARNING BASED ANALYSIS OF ELECTRONIC COMMUNICATIONS - Aspects of the disclosure relate to machine learning based analysis of electronic communications. A computing platform may monitor receipt of a potentially unacceptable electronic communication by a user. Then, the computing platform may extract one or more attributes of the potentially unacceptable electronic communication. The computing platform may then perform, based on the one or more attributes, textual analysis of the potentially unacceptable electronic communication. Subsequently, the computing platform may retrieve one or more rules applicable to the potentially unacceptable electronic communication. Then, the computing platform may determine, based on the textual analysis and the one or more rules, and based on a repository of previously identified unacceptable content, whether the potentially unacceptable electronic communication is unacceptable. Subsequently, the computing platform may trigger, based on a determination that the potentially unacceptable electronic communication is unacceptable, one or more actions associated with the unacceptable electronic communication. | 2022-02-17 |
20220051128 | PREDICTING CUSTOMER INTERACTION OUTCOMES - Predictive analysis of customer relationship management elements by receiving service feature data associated with past services, receiving customer feature data, including customer interaction outcome data, for a set of customers associated with the past services, training a machine learning model according to the received service feature data and customer feature data, and providing the trained machine learning model to a user, the model configured for predicting a future customer interaction outcome probability according to service feature data associated with a current service, and customer feature data associated with customers of the current service. | 2022-02-17 |
20220051129 | BLOCKCHAIN-ENABLED MODEL DRIFT MANAGEMENT - A scheduler node in a blockchain network may receive data associated with a machine learning model. The scheduler node may measure a drift of the machine learning model for a first aspect of the data. The scheduler node may determine if the drift of the machine learning model is greater than a drift threshold. The scheduler node may schedule, in response to the drift being greater than the drift threshold, a retraining transaction for the machine learning model. | 2022-02-17 |
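The drift-triggered scheduling in 20220051129 above reduces to a simple control loop: measure drift, compare against a threshold, and append a retraining transaction when the threshold is exceeded. The sketch below is a hedged illustration; the drift metric (mean absolute shift between two feature samples) and the list standing in for the blockchain ledger are assumptions, not the patented design.

```python
# Hedged sketch: schedule a retraining "transaction" when measured model
# drift for one aspect of the data exceeds a threshold. The drift metric
# and the plain-list ledger are illustrative assumptions.

def measure_drift(baseline, current):
    """Mean absolute difference between two equal-length feature samples."""
    return sum(abs(b - c) for b, c in zip(baseline, current)) / len(baseline)

def schedule_if_drifted(baseline, current, threshold, ledger):
    """Append a retraining transaction to the ledger if drift > threshold."""
    drift = measure_drift(baseline, current)
    if drift > threshold:
        ledger.append({"action": "retrain", "drift": round(drift, 3)})
    return drift

ledger = []
drift = schedule_if_drifted([0.1, 0.2, 0.3], [0.4, 0.6, 0.9],
                            threshold=0.2, ledger=ledger)
```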
20220051130 | BID VALUE DETERMINATION - One or more computing devices, systems, and/or methods are provided. Shaded bid values may be determined and/or submitted to one or more auction modules for participation in auctions. Auction information including at least one of minimum bid values to win associated with the auctions, sets of features associated with the auctions, the shaded bid values associated with the auctions, unshaded bid values associated with the auctions, etc. may be stored in a database. A machine learning model may be trained using a loss function and/or the auction information to generate a first machine learning model with feature parameters associated with features. A bid request, indicative of a second set of features, may be received. The first machine learning model may be used to determine a shaded bid value for submission based upon one or more first feature parameters, of the feature parameters, associated with the second set of features. | 2022-02-17 |
20220051131 | BID VALUE DETERMINATION FOR A FIRST-PRICE AUCTION - Shaded bid values may be determined and/or submitted to one or more auction modules for participation in auctions. Auction information including at least one of impression indications associated with the auctions, sets of features associated with the auctions, the shaded bid values associated with the auctions, etc. may be stored in a database. A machine learning model may be trained using the auction information to generate a first machine learning model with feature parameters associated with features. A bid request, indicative of a second set of features, may be received. The first machine learning model may be used to determine win probabilities and/or expected bid surpluses associated with multiple shaded bid values based upon one or more feature parameters, of the feature parameters, associated with the second set of features. A shaded bid value for submission may be determined based upon the win probabilities and/or the expected bid surpluses. | 2022-02-17 |
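The bid-shading logic in 20220051130/20220051131 above amounts to choosing the bid that maximizes expected surplus, i.e. win_probability(bid) × (value − bid). The sketch below is a hedged illustration: the logistic curve standing in for the trained win-probability model and the candidate grid are assumptions, not the patented models.

```python
# Hedged sketch: pick a shaded bid maximizing expected surplus,
# win_probability(bid) * (value - bid). The logistic win-probability curve
# stands in for a learned model and is an illustrative assumption.
import math

def win_probability(bid, midpoint=1.0, steepness=5.0):
    """Stand-in for a trained win-probability model (logistic in the bid)."""
    return 1.0 / (1.0 + math.exp(-steepness * (bid - midpoint)))

def best_shaded_bid(value, candidates):
    """Return the candidate bid with the highest expected surplus."""
    return max(candidates, key=lambda b: win_probability(b) * (value - b))

candidates = [round(0.1 * i, 1) for i in range(1, 20)]   # 0.1 .. 1.9
bid = best_shaded_bid(value=2.0, candidates=candidates)
```

With a true value of 2.0, the maximizer lands well below the value: bidding higher raises the win probability but shrinks the surplus captured on a win.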
20220051132 | IDENTIFYING NOISE IN VERBAL FEEDBACK USING ARTIFICIAL TEXT FROM NON-TEXTUAL PARAMETERS AND TRANSFER LEARNING - Methods and systems are provided for classifying free-text content using machine learning. Free-text content (e.g., customer feedback) and parameter values organized according to a schema are received. A free-text corpus is generated, and an artificial-text corpus is generated by applying rules to the parameter values. The artificial-text corpus is generated by converting the parameter values into a finite set of words based on the rules and concatenating the words of the finite set of words into a fixed sequence wordlist. Feature vectors (e.g., sentence embeddings) based on the free-text corpus and the artificial-text corpus are combined and forwarded to a machine learning model for classification. The machine learning model may be trained with a bias towards a specified metric (e.g., precision, recall, F1 score). The model may be trained using transfer learning with training data from a different category of free-text content (e.g., a different category of customer feedback). | 2022-02-17 |
20220051133 | DECENTRALIZED MULTI-TASK LEARNING - A method for decentralized multi-task learning includes publishing metadata associated with a first task. A plurality of parameter vectors associated with a set of similar tasks to the first task is obtained and the set of similar tasks is associated with a plurality of other participants. A parameter vector associated with a machine learning dataset for the first task is trained based on a loss function associated with the first task and the plurality of parameter vectors associated with the set of similar tasks. The parameter vector associated with the machine learning dataset for the first task is published. | 2022-02-17 |
20220051134 | EXTRACTED MODEL ADVERSARIES FOR IMPROVED BLACK BOX ATTACKS - Techniques are described for identifying successful adversarial attacks for a black box reading comprehension model using an extracted white box reading comprehension model. The system trains a white box reading comprehension model that behaves similarly to the black box reading comprehension model using the set of queries and corresponding responses from the black box reading comprehension model as training data. The system tests adversarial attacks, involving modified informational content for execution of queries, against the trained white box reading comprehension model. Queries used for successful attacks on the white box model may be applied to the black box model itself as part of a black box improvement process. | 2022-02-17 |
20220051135 | LOAD BALANCING USING DATA-EFFICIENT LEARNING - Rapid and data-efficient training of an artificial intelligence (AI) algorithm is disclosed. Ground truth data are not available and a policy must be learned based on limited interactions with a system. A policy bank is used to explore different policies on a target system with shallow probing. A target policy is chosen by comparing a good policy from the shallow probing with a base target policy which has evolved over other learning experiences. The target policy then interacts with the target system and a replay buffer is built up. The base target policy is then updated using gradients found with respect to the transition experience stored in the replay buffer. The base target policy is quickly learned and is robust for application to new, unseen, systems. | 2022-02-17 |
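The policy-bank selection step in 20220051135 above can be sketched as a few short rollouts ("shallow probing") of each candidate policy, keeping the best bank policy only if it beats the evolved base policy. This is a hedged, toy illustration; the one-dimensional system, reward shape, and policies are all assumptions.

```python
# Hedged sketch: choose between a bank of candidate policies and a base
# policy via shallow probing (short rollouts on the target system). The toy
# system and policies are illustrative assumptions.

def shallow_probe(policy, system, steps=3):
    """Total reward from a short rollout of the policy on the system."""
    state, total = 0.0, 0.0
    for _ in range(steps):
        action = policy(state)
        state, reward = system(state, action)
        total += reward
    return total

def select_target_policy(policy_bank, base_policy, system):
    """Best bank policy from probing, unless the base policy beats it."""
    best = max(policy_bank, key=lambda p: shallow_probe(p, system))
    if shallow_probe(best, system) > shallow_probe(base_policy, system):
        return best
    return base_policy

# Toy system: reward is higher the closer the new state is to 1.0.
def system(state, action):
    new_state = state + action
    return new_state, -abs(new_state - 1.0)

bank = [lambda s: 0.1, lambda s: 1.0 - s]   # two candidate policies
base = lambda s: 0.5                        # evolved base target policy
chosen = select_target_policy(bank, base, system)
```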
20220051136 | SYSTEM FOR AN ENTERPRISE-WIDE DATA COORDINATOR MODULE - Systems and methods for managing multiple robotic agents in an enterprise. The robotic agents share their inputs and outputs with a data coordinator module. The coordinator module, through that data, learns the enterprise's goals and values and learns to optimize robotic agents on both a per section and on a per agent basis. The data is useful for training future versions of the coordinator module as well as for training machine learning modules that aim to further the enterprise's goals. | 2022-02-17 |
20220051137 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD AND NON-TRANSITORY RECORDING MEDIUM - An information processing system that carries out a specified processing based on a learning model, comprises: a processor; a programmable logic device that rewrites logic data and reconstitutes a circuit; a machine learning processing unit that carries out machine learning and generates a new learning model for the specified processing; a convertor that converts the new learning model into the logic data that is operable in the programmable logic device; and a controller that enables the processor to carry out the specified processing based on the new learning model while the new learning model is being converted into the logic data by the convertor. | 2022-02-17 |
20220051138 | METHOD AND DEVICE FOR TRANSFER LEARNING BETWEEN MODIFIED TASKS - A method for the transfer learning of hyperparameters of a machine learning algorithm. The method includes providing a current search space and a previous search space. A reduced search space is then created and candidate configurations are drawn repeatedly at random from the reduced search space and from the current search space, and the machine learning algorithm, parameterized in each case with the candidate configurations, is applied. A Tree Parzen Estimator (TPE) is then created as a function of the candidate configurations and the results of the machine learning algorithm parameterized with the candidate configurations, and the drawing of further candidate configurations from the current search space using the TPE is repeated multiple times, the TPE being updated upon each drawing. | 2022-02-17 |
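The TPE selection rule used in 20220051138 above can be sketched on a one-dimensional hyperparameter: split observed (configuration, loss) pairs into "good" and "bad" sets and rank new candidates by the density ratio l(x)/g(x). This is a hedged, simplified illustration; the fixed-bandwidth Gaussian kernels, the split fraction, and the toy observations are assumptions, not the patented procedure.

```python
# Hedged sketch: TPE candidate selection on a 1-D hyperparameter. Observed
# (config, loss) pairs are split into good/bad sets; candidates are ranked
# by the kernel-density ratio l(x)/g(x). Bandwidth, split fraction, and
# data are illustrative assumptions.
import math

def kde(points, x, bandwidth=0.5):
    """Fixed-bandwidth Gaussian kernel density estimate at x."""
    return sum(math.exp(-(((x - p) / bandwidth) ** 2) / 2)
               for p in points) / len(points)

def tpe_pick(observations, candidates, gamma=0.25):
    """Pick the candidate maximizing l(x)/g(x) over good/bad observations."""
    ranked = sorted(observations, key=lambda o: o[1])   # ascending loss
    split = max(1, int(gamma * len(ranked)))
    good = [x for x, _ in ranked[:split]]               # low-loss configs
    bad = [x for x, _ in ranked[split:]]                # high-loss configs
    return max(candidates, key=lambda x: kde(good, x) / kde(bad, x))

obs = [(0.1, 0.9), (0.5, 0.2), (0.6, 0.25), (0.9, 0.8)]  # (config, loss)
best = tpe_pick(obs, candidates=[0.1, 0.3, 0.55, 0.8], gamma=0.5)
```

The candidate near the low-loss configurations (0.5 and 0.6) wins, since the good-set density l(x) peaks there while the bad-set density g(x) does not.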
20220051139 | WIRELESS DEVICE, A NETWORK NODE AND METHODS THEREIN FOR TRAINING OF A MACHINE LEARNING MODEL - A wireless device and a method therein for assisting a network node to perform training of a machine learning model. The wireless device collects a number of successive data samples. Further, the wireless device successively creates compressed data by associating each collected data sample to a cluster. The cluster has a cluster centroid, a cluster counter representative of a number of collected data samples determined to be normal and being associated with the cluster, and a number of outlier collected data samples associated with the cluster. Then, the wireless device updates the cluster centroid to correspond to a mean position of all normal data samples that are associated with the cluster, and increases the cluster counter by one for each normal data sample that is associated with the cluster. The wireless device transmits the compressed data to the network node. | 2022-02-17 |
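The compression scheme in 20220051139 above keeps per-cluster state rather than raw samples: each normal sample moves the centroid by an incremental-mean update and bumps a counter, while outliers are retained separately. The sketch below is a hedged, one-dimensional illustration; the fixed-radius outlier rule and the toy data are assumptions.

```python
# Hedged sketch: compress a stream of 1-D sensor samples by assigning each
# to the nearest cluster centroid, incrementally updating the centroid mean
# for normal samples, counting them, and keeping outliers separately. The
# outlier rule (distance above a fixed radius) is an illustrative assumption.

def compress(samples, centroids, radius):
    """Assign each sample to its nearest centroid; update mean and counter."""
    clusters = [{"centroid": c, "count": 0, "outliers": []} for c in centroids]
    for x in samples:
        cluster = min(clusters, key=lambda c: abs(x - c["centroid"]))
        if abs(x - cluster["centroid"]) > radius:
            cluster["outliers"].append(x)    # retain outlier samples as-is
            continue
        cluster["count"] += 1                # one more normal sample
        # Incremental mean: new_mean = old_mean + (x - old_mean) / n
        cluster["centroid"] += (x - cluster["centroid"]) / cluster["count"]
    return clusters

clusters = compress([1.0, 1.2, 9.8, 10.0, 50.0],
                    centroids=[1.0, 10.0], radius=2.0)
```

Only the centroids, counters, and outlier lists would be transmitted to the network node, not the raw sample stream.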
20220051140 | MODEL CREATION METHOD, MODEL CREATION APPARATUS, AND PROGRAM - A model creation apparatus includes a selector configured to select a model based on output results obtained by inputting pieces of learning data to registered models, a learning unit configured to create a new model by inputting the pieces of learning data to the selected model and performing machine learning, and a registration unit configured to register the created new model such that the new model is associated with the selected model. | 2022-02-17 |
20220051141 | Material Characterization System and Method - A method, apparatus, system, and computer program product for estimating material properties. Training data comprising results of testing samples for a set of materials over a range of loads applied to the samples is identified by a computer system. A machine learning model is trained by the computer system to output the material properties for materials in structures using the training data. | 2022-02-17 |
20220051142 | RUNTIME ESTIMATION FOR MACHINE LEARNING TASKS - Techniques for estimating runtimes of one or more machine learning tasks are provided. For example, one or more embodiments described herein can regard a system that can comprise a memory that stores computer executable components. The system can also comprise a processor, operably coupled to the memory, and that can execute the computer executable components stored in the memory. The computer executable components can comprise an extraction component that can extract a parameter from a machine learning task. The parameter can define a performance characteristic of the machine learning task. Also, the computer executable components can comprise a model component that can generate a model based on the parameter. Further, the computer executable components can comprise an estimation component that can generate an estimated runtime of the machine learning task based on the model. The estimated runtime can define a period of time beginning at an initiation of the machine learning task and ending at a completion of the machine learning task. | 2022-02-17 |
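The pipeline in 20220051142 above (extract a parameter, build a model, estimate a runtime) can be sketched with the simplest possible model: a least-squares line fit from one extracted performance characteristic to observed runtimes. This is a hedged illustration; the linear form and the toy observations are assumptions, not the patented model component.

```python
# Hedged sketch: estimate a machine learning task's runtime from a single
# extracted parameter (here, training-set size) with an ordinary
# least-squares line. The linear form and data are illustrative assumptions.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b in one dimension."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Observed (dataset_size, seconds) pairs from past runs of the task.
sizes, runtimes = [1000, 2000, 4000], [12.0, 22.0, 42.0]
a, b = fit_line(sizes, runtimes)
estimate = a * 3000 + b   # predicted runtime for a 3000-sample task
```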
20220051143 | MACHINE LEARNING SYSTEM - Methods, systems, and computer program products are included for providing a predicted outcome to a user interface. An exemplary method includes receiving, from a user interface, a plurality of identifiers that identify objects. At the user interface, a target success function is selected corresponding to the plurality of identifiers. The target success function is mapped to at least one attribute of one or more attributes of the objects. The at least one attribute of the objects and one or more other attributes are queried. Data values are retrieved corresponding to the queried at least one attribute and the one or more other attributes. Based on the data values, an outcome of the target success function is predicted. The predicted outcome is provided to the user interface. | 2022-02-17 |
20220051144 | Machine Learning and Security Classification of User Accounts - Machine learning techniques are used in combination with graph data structures to perform automated classification of accounts. Graphs may be constructed using a seed node and then expanded outward to second-degree nodes and third-degree nodes that are connected to a seed user account node via direct interaction between the accounts. Characterization information regarding the interaction between accounts can be stored in the graph (e.g., quantity of interactions, types of interactions) as well as other metrics and metadata. A classifier, using random forest or another technique, may be trained using a number of different graphs that can then be used to reach a determination as to whether a user account falls into one particular category or another. These techniques can identify accounts that may be violating terms of service, committing a security violation, and/or performing illegal actions in a way that is not ascertainable from human analysis. | 2022-02-17 |
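The graph construction in 20220051144 above starts from a seed account and expands outward through direct interactions to second- and third-degree nodes. The sketch below is a hedged illustration of that expansion via breadth-first search; the adjacency data with per-edge interaction counts is an assumption, and the downstream random-forest classifier is omitted.

```python
# Hedged sketch: build the neighborhood around a seed account out to
# third-degree nodes, following only edges with recorded interactions, as a
# precursor to classification. The adjacency data is an illustrative
# assumption.

def expand_graph(interactions, seed, max_degree=3):
    """BFS from seed; returns node -> degree of separation (<= max_degree)."""
    degree = {seed: 0}
    frontier = [seed]
    for d in range(1, max_degree + 1):
        nxt = []
        for node in frontier:
            for neighbor, count in interactions.get(node, {}).items():
                if neighbor not in degree and count > 0:
                    degree[neighbor] = d
                    nxt.append(neighbor)
        frontier = nxt
    return degree

# node -> {neighbor: interaction_count}
interactions = {
    "seed": {"a": 3, "b": 1},
    "a": {"c": 2},
    "c": {"d": 5},
    "d": {"e": 1},
}
degree = expand_graph(interactions, "seed")
```

Node "e" sits four hops out and is excluded, matching the abstract's cut-off at third-degree nodes; per-edge counts remain available as classifier features.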
20220051145 | MACHINE LEARNING BASED ACTIVITY DETECTION UTILIZING RECONSTRUCTED 3D ARM POSTURES - A method comprises obtaining data from an inertial measurement unit of a user, generating three-dimensional (3D) arm pose estimates from the obtained data, applying the generated 3D arm pose estimates to a machine learning system trained to recognize temporal-spatial patterns of one or more designated activities, and obtaining at least one classification output from the machine learning system. The machine learning system illustratively comprises at least one support vector machine (SVM) model. Applying the generated 3D arm pose estimates to the machine learning system illustratively comprises extracting possible intake gestures of the generated 3D arm pose estimates into respective segments, resampling each of at least a subset of the extracted segments, and utilizing the SVM model to classify whether or not each of one or more of the extracted and resampled segments comprises an intake gesture. | 2022-02-17 |
20220051146 | NON-ITERATIVE FEDERATED LEARNING - Techniques for non-iterative federated learning include receiving local models from agents, generating synthetic datasets for the local models, and producing outputs using the local models and the synthetic datasets. A global model is trained based on the synthetic datasets and the outputs. | 2022-02-17 |
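The single-round scheme in 20220051146 above can be sketched as one-shot distillation: generate synthetic inputs centrally, label them with the (averaged) outputs of the received local models, and train a global model on those pairs. This is a hedged illustration; the toy local models and the nearest-neighbor "global model" are assumptions, not the patented technique.

```python
# Hedged sketch: non-iterative federated learning as one-shot distillation.
# Synthetic inputs are labeled with the average of the local models'
# outputs; a global model is then fit to the distilled pairs. The local
# models and the 1-nearest-neighbor global model are illustrative
# assumptions.
import random

def distill(local_models, synthetic_inputs):
    """Label synthetic inputs with the average of the local models' outputs."""
    return [(x, sum(m(x) for m in local_models) / len(local_models))
            for x in synthetic_inputs]

def make_global_model(distilled):
    """A 1-nearest-neighbor 'global model' over the distilled pairs."""
    def predict(x):
        return min(distilled, key=lambda pair: abs(pair[0] - x))[1]
    return predict

local_models = [lambda x: 2 * x, lambda x: 2 * x + 1]    # agents' models
random.seed(0)
synthetic = [random.uniform(0, 10) for _ in range(50)]   # synthetic dataset
global_model = make_global_model(distill(local_models, synthetic))
```

No further communication rounds are needed: the averaged outputs 2x + 0.5 are baked into the distilled pairs, so the global model approximates the consensus of the agents.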
20220051147 | INFORMATION PROCESSING APPARATUS AND NON-TRANSITORY COMPUTER READABLE MEDIUM STORING INFORMATION PROCESSING PROGRAM - An information processing apparatus includes: a processor configured to acquire a free time occurring from a point in time at which a user starts moving toward a destination point until a target job is started at the destination point; and perform control of presenting an available space for the free time in a case where the free time is equal to or longer than a predetermined time. | 2022-02-17 |
20220051148 | PROCESS MANAGEMENT SUPPORT SYSTEM, PROCESS MANAGEMENT SUPPORT METHOD, AND PROCESS MANAGEMENT SUPPORT PROGRAM - Provided is a test facility management system that can evaluate, in a software development process requiring the use of test facilities, the progress of the process resulting from increasing or decreasing the count of the test facilities. The test facility management system can include: a project progress forecast unit that stores a process information database, a facility reservation information database, a process progress history information database, and facility count proposed change information, calculates the facility usage remaining time period for the software development process based on process progress history information, specifies a time range during which facilities of the proposed changed count are available, and forecasts, based on the available time range, the progress of the software development process when work for the facility usage remaining time period is carried out by facilities of the proposed changed count; and a user interface that outputs the forecast progress information. | 2022-02-17 |