26th week of 2022 patent application highlights part 55
Patent application number | Title | Published |
20220207325 | VEHICULAR DRIVING ASSIST SYSTEM WITH ENHANCED DATA PROCESSING - A vehicular driving assistance system includes an exterior viewing camera disposed at a vehicle and viewing exterior of the vehicle. Image data captured by the camera is provided to and processed at an electronic control unit (ECU). The ECU performs processing tasks for multiple vehicle systems. The vehicular driving assistance system is operable to wirelessly upload captured image data to the cloud for processing at a remote processor. Processing tasks with a higher priority are determined at the ECU to be higher priority tasks. Responsive to determination at the ECU of a higher priority task, the vehicular driving assistance system (i) processes captured image data at the ECU for the higher priority task and (ii) uploads captured image data to the cloud for processing at the remote processor. | 2022-06-30 |
20220207326 | ANOMALY DETECTION, DATA PREDICTION, AND GENERATION OF HUMAN-INTERPRETABLE EXPLANATIONS OF ANOMALIES - This disclosure relates to identifying anomalies in, predicting data points for, and determining a feature's importance to input time series data and outputs from the data. An example system is configured to perform operations including obtaining, by an autoencoder, time series data including multiple sequences of data points, encoding, by an encoder of the autoencoder, the obtained time series data into encoded data, decoding, by a decoder of the autoencoder, the encoded data into decoded data, reconstructing time series data from the decoded data, determining a reconstruction error based on the reconstructed time series data and the obtained time series data, and identifying an anomaly based on the reconstruction error. The system is also configured to predict one or more data points from the encoded data and determine a contribution (SHAP value) of a feature to the obtained time series data that is associated with a plurality of features. | 2022-06-30 |
20220207327 | METHOD FOR DIVIDING PROCESSING CAPABILITIES OF ARTIFICIAL INTELLIGENCE BETWEEN DEVICES AND SERVERS IN NETWORK ENVIRONMENT - According to the present invention, a distributed convolution processing system in a network environment includes: a plurality of devices and servers connected on a communication network and receiving video signals or audio signals, in which each device has a convolution means that preprocesses a matrix multiplication and a matrix sum, converts a calculated feature map (FM), convolutional neural network (CNN) structure information, and a weighting parameter (WP) into packets, and transfers the packets to the server, and the server performs comprehensive learning and an inference computation by using the feature map (FM) and the weighting parameter, which are convolution calculation results preprocessed in the distributed packets transferred from each device, and performs learning by repeating and updating a process of transferring each updated parameter for each neural network to each device again. The distributed convolution processing system in a network environment according to the present invention has the advantage of reducing the computation load of the server by performing the distributed convolution computations directly in the devices. | 2022-06-30 |
20220207328 | CONTROL DEVICE - To provide a control device that causes a neural network to perform learning without any effects of dead time even for a dead-time system and that has the capability of improving transient characteristics for a command input. A control device includes a feedback controller configured to control a control target including a dead-time component, and a reference model unit including a dead-time component and configured to output a desired response waveform for an input. A learning based controller is configured to perform learning in a manner that a change in an output from the learning based controller minimizes an error between an output of the control target and an output of the reference model unit or causes the error to be a predetermined threshold or smaller, the output from the learning based controller being added to an output of the feedback controller and input to the control target. | 2022-06-30 |
20220207329 | COMPRESSING IMAGE-TO-IMAGE MODELS - Systems and methods herein describe an image compression system. The image compression system generates a first generative adversarial network (GAN), identifies a threshold, based on the threshold, generates a second GAN by pruning channels of the first GAN, trains the second GAN using similarity-based knowledge distillation from the first GAN, and stores the trained second GAN. | 2022-06-30 |
20220207330 | OPERATIONAL NEURAL NETWORKS AND SELF-ORGANIZED OPERATIONAL NEURAL NETWORKS WITH GENERATIVE NEURONS - Systems, methods, apparatuses, and computer program products for neural networks. In accordance with some example embodiments, an operational neuron model may comprise an artificial neuron comprising a composite nodal operator, a pool-operator, and an activation function operator. The nodal operator may comprise a linear function or non-linear function. In accordance with certain example embodiments, a generative neuron model may include a composite nodal-operator generated during the training using Taylor polynomial approximation without restrictions. In accordance with various example embodiments, a self-organized operational neural network (Self-ONN) may include one or more layers of generative neurons. | 2022-06-30 |
20220207331 | APPARATUS FOR MACHINE LEARNING-BASED VISUAL EQUIPMENT SELECTION - The present disclosure relates to determining visual equipment for a patient or user. In an embodiment, a machine learning-based approach considers the face of a user in the context of a database of labeled faces and visual equipment, each of the labeled images reflecting the aesthetic value of a proposed visual equipment relative to the face of the patient or user. | 2022-06-30 |
20220207332 | SCALABLE NEURAL NETWORK ACCELERATOR ARCHITECTURE - A scalable neural network accelerator may include a first circuit for selecting a sub array of an array of registers, wherein the sub array comprises LH rows of registers and LW columns of registers, and wherein LH and LW are integers. The accelerator may also include a register for storing a value that determines LH. In addition, the accelerator may include a first load circuit for loading data received from the memory bus into registers of the sub array. | 2022-06-30 |
20220207333 | IMAGE PROCESSING CONTROLLER, IMAGE PROCESSING METHOD AND DISPLAY DEVICE - An image processing controller for a display device includes a fixed-point circuitry and a plurality of neural network blocks cascaded sequentially. The fixed-point circuitry is electrically connected to each neural network block, and configured to receive a feature signal corresponding to an output feature map about the display device from each neural network block, perform fixed-point processing on the feature signal to acquire fixed-point data within a design accuracy range, and input the fixed-point data as an input feature map to a next-level neural network block. | 2022-06-30 |
20220207334 | NEURAL NETWORK DEVICE INCLUDING CONVOLUTION SRAM AND DIAGONAL ACCUMULATION SRAM - A neural network device includes a convolution static random access memory (SRAM) configured to output a first operation value and a second operation value; an accumulation peripheral operator configured to perform an accumulation peripheral operation on the first and the second operation values; a multiplexer array configured to select and output an output value according to a selection signal; a diagonal accumulation SRAM configured to perform a bitwise accumulation of variable weight values and a spatial-wise accumulation operation on an input; a diagonal movement logic; and an addition array operator configured to perform an addition operation on output values of the diagonal movement logic subsequent to a shift operation. The multiplexer array selects either the output value of the accumulation peripheral operator or the output value of the addition array operator according to the selection signal and outputs the selected output value to the diagonal accumulation SRAM. | 2022-06-30 |
20220207335 | NEURAL NETWORK PROCESSING DEVICE - A neural network processing device includes: a convolution part which receives an input signal and a learning completion weight parameter, performs a convolution operation on the input signal and the learning completion weight parameter, and outputs a convolution signal that is a result value of the convolution operation; a batch adjustment part which receives the convolution signal and a learning completion normalization parameter, and outputs an adjustment signal obtained by adjusting an output deviation of the convolution signal; and an activation part which converts the adjustment signal into an output signal based on an activation function. | 2022-06-30 |
20220207336 | NPU FOR GENERATING KERNEL OF ARTIFICIAL NEURAL NETWORK MODEL AND METHOD THEREOF - A neural processing unit (NPU), a method for driving an artificial neural network (ANN) model, and an ANN driving apparatus are provided. The NPU includes a semiconductor circuit that includes at least one processing element (PE) configured to process an operation of an artificial neural network (ANN) model; and at least one memory configurable to store a first kernel and a first kernel filter. The NPU is configured to generate a first modulation kernel based on the first kernel and the first kernel filter and to generate a second modulation kernel based on the first kernel and a second kernel filter generated by applying a mathematical function to the first kernel filter. Power consumption and memory read time are both reduced by decreasing the data size of a kernel read from a separate memory to an artificial neural network processor and/or by decreasing the number of memory read requests. | 2022-06-30 |
20220207337 | METHOD FOR ARTIFICIAL NEURAL NETWORK AND NEURAL PROCESSING UNIT - A method performs a plurality of operations on an artificial neural network (ANN). The plurality of operations includes storing in at least one memory a set of weights, at least a portion of a first batch channel of a plurality of batch channels, and at least a portion of a second batch channel of the plurality of batch channels; and calculating the at least a portion of the first batch channel and the at least a portion of the second batch channel by the set of weights. A batch mode, configured to process a plurality of input channels, can determine the operation sequence in which the on-chip memory and/or internal memory stores and computes the parameters of the ANN. Even if the number of input channels increases, processing may be performed with one neural processing unit including a memory configured in consideration of a plurality of input channels. | 2022-06-30 |
20220207338 | NEURON SIMULATION CIRCUIT AND NEURAL NETWORK APPARATUS - A neuron simulation circuit and a neural network apparatus. The neuron simulation circuit includes an operational amplifier, a first resistive device and a second resistive device. The operational amplifier includes a first input terminal, a second input terminal, and an output terminal. The first resistive device is connected between the first input terminal or the second input terminal of the operational amplifier and the output terminal of the operational amplifier. The second resistive device is connected between the output terminal of the operational amplifier and an output terminal of the neuron simulation circuit. The second resistive device includes a threshold switching memristor, and a first terminal of the threshold switching memristor is electrically connected with the output terminal of the neuron simulation circuit. At least one of the first resistive device and the second resistive device includes a dynamic memristor. | 2022-06-30 |
20220207339 | ASSIGNMENT DEVICE, METHOD, AND PROGRAM - The determining unit | 2022-06-30 |
20220207340 | SYNAPSE-MIMETIC DEVICE CAPABLE OF NEURAL NETWORK TRAINING - A synapse-mimetic device includes: a capacitor; a first transistor which connects a first power supply to a first end of the capacitor in response to a first control signal; a second transistor which connects a second power supply to a second end of the capacitor in response to a second control signal; a third transistor which connects the first power supply to the second end of the capacitor in response to a third control signal; a fourth transistor which connects the second power supply to the first end of the capacitor in response to a fourth control signal; and a fifth transistor which provides, to an output line, a current determined by the voltage of the first end of the capacitor, the voltage of the input line, and the voltage of the output line. | 2022-06-30 |
20220207341 | NEURAL NETWORK PROCESSOR AND CONTROL METHOD THEREFOR - A neural network processor and a control method are provided. The neural network processor includes a neural network processor cluster formed by multiple single-core neural network processors and a peripheral module. The peripheral module includes a main control unit and a DMA module. The DMA module is used to convey a first task descriptor to the main control unit. The main control unit is used to: analyze the first task descriptor, determine, according to an analysis result, a subtask to be distributed to each selected processor; modify the first task descriptor to acquire a second task descriptor respectively corresponding to each selected processor; and distribute each second task descriptor to each corresponding selected processor, and activate each selected processor to process the corresponding subtask. The main control unit schedules and manages all of the single-core neural network processors, thereby leveraging operational performance of the neural network processor. | 2022-06-30 |
20220207342 | DATA COMPRESSION METHOD, DATA COMPRESSION SYSTEM AND OPERATION METHOD OF DEEP LEARNING ACCELERATION CHIP - A data compression method, a data compression system and an operation method of a deep learning acceleration chip are provided. The data compression method includes the following steps. A filter coefficient tensor matrix of a deep learning model is obtained. A matrix decomposition procedure is performed according to the filter coefficient tensor matrix to obtain a sparse tensor matrix and a transformation matrix, which is an orthonormal matrix. The product of the transformation matrix and the filter coefficient tensor matrix is the sparse tensor matrix. The sparse tensor matrix is compressed. The sparse tensor matrix and the transformation matrix, or the sparse tensor matrix and a restoration matrix, are stored in a memory. A convolution operation result is obtained by the deep learning acceleration chip using the sparse tensor matrix. The convolution operation result is restored by the deep learning acceleration chip using the restoration matrix. | 2022-06-30 |
20220207343 | ENTITY DISAMBIGUATION USING GRAPH NEURAL NETWORKS - Computer-implemented techniques for entity disambiguation using graph neural networks (GNNs) are provided. According to an embodiment, computer implemented method can comprise receiving, by a system operatively coupled to a processor, an unstructured text snippet comprising an unknown term. The method further comprises employing, by the system, a heterogeneous GNN trained on a knowledge graph associated with a domain of the unstructured text snippet to facilitate identifying one or more similar terms included within the knowledge graph for the unknown term. | 2022-06-30 |
20220207344 | FILTERING HIDDEN MATRIX TRAINING DNN - In one aspect, a method of training a DNN includes transmitting an input vector x through a weight matrix W and reading a resulting output vector y, transmitting an error signal δ, transmitting the input vector x with the error signal δ through conductive row wires of a matrix A, and transmitting an input vector e | 2022-06-30 |
20220207345 | TENSOR CONTROLLER ARCHITECTURE - A machine-learning accelerator system, comprising: a plurality of controllers each configured to traverse a feature map with n-dimensions according to instructions that specify, for each of the n-dimensions, a respective traversal size, wherein each controller comprises: a counter stack comprising counters each associated with a respective dimension of the n-dimensions of the feature map, wherein each counter is configured to increment a respective count from a respective initial value to the respective traversal size associated with the respective dimension associated with that counter; a plurality of address generators each configured to use the respective counts of the counters to generate at least one memory address at which a portion of the feature map is stored; and a dependency controller computing module configured to (1) track conditional statuses for incrementing the counters and (2) allow or disallow each of the counters to increment based on the conditional statuses. | 2022-06-30 |
20220207346 | DATA PROCESSING METHOD AND DEVICE USED IN NEURAL NETWORK - A data processing method used in neural network computing is provided. During a training phase of a neural network model, a feedforward procedure based on a calibration data is performed to obtain distribution information of a feedforward result for at least one layer of the neural network model. During the training phase of the neural network model, a bit upper bound of a partial sum is generated based on the distribution information of the feedforward result. During an inference phase of the neural network model, a bit-number reducing process is performed on an original operation result of an input data and a weight for the neural network model according to the bit upper bound of the partial sum to obtain an adjusted operation result. | 2022-06-30 |
20220207347 | SPLIT-NET CONFIGURATION FOR PREDICTIVE MODELING - A machine learning system that uses a split net configuration to incorporate arbitrary constraints receives a set of input data and a set of functional constraints. The machine learning system jointly optimizes a deep learning model by using the set of input data and a wide learning model by using the set of constraints. The deep learning model includes an input layer, an output layer, and an intermediate layer between the input layer and the output layer. The wide learning model includes an input layer and an output layer but no intermediate layer. The machine learning system provides a machine learning model comprising the optimized deep learning model and the optimized wide learning model. | 2022-06-30 |
20220207348 | REAL-TIME NEURAL NETWORK RETRAINING - A system comprising a computer including a processor and a memory, the memory including instructions such that the processor is programmed to: determine whether a difference between a friction coefficient label and a determined friction coefficient corresponding to an image depicting a surface is greater than a label threshold; modify the determined friction coefficient to equal the friction coefficient label when the difference is greater than the label threshold; and retrain a neural network using the image and the friction coefficient label. | 2022-06-30 |
20220207349 | Automated Creation of Machine-learning Modeling Pipelines - A computer-implemented method of generating a machine learning model pipeline (“pipeline”) for a task, where the pipeline includes a machine learning model and at least one feature. A machine learning task including a data set and a set of first tags related to the task are received from a user. It is determined whether a database stores a first machine learning model pipeline correlated in the database with a second tag matching at least one first tag received from the user. Upon determining that the database stores the first machine learning model pipeline, the first machine learning model pipeline is retrieved, the retrieved first machine learning model pipeline is run, and the machine learning task is responded to. Pipelines may also be created based on stored pipelines correlated with a tag related to a tag in the task, or from received feature generator(s) and models. | 2022-06-30 |
20220207350 | IDENTIFYING RELATED MESSAGES IN A NATURAL LANGUAGE INTERACTION - Using a training portion of a dataset, a set of component parameters comprising parameters of a component of an object detection model are trained. Using the trained set of component parameters, a set of backbone component weights comprising weights of component types in a backbone portion of the object detection model are trained. Using the trained set of component parameters, a set of backbone link weights comprising weights of links within the backbone portion are trained. Using the trained set of component parameters, a set of head component weights comprising weights of component types in a head portion of the object detection model are trained. Using the trained sets of component parameters, backbone component weights, backbone link weights, and head component weights, a trained object detection model is configured and trained to perform object detection. | 2022-06-30 |
20220207351 | SEMICONDUCTOR DESIGN OPTIMIZATION USING AT LEAST ONE NEURAL NETWORK - According to an aspect, a semiconductor design system includes at least one neural network including a first predictive model and a second predictive model, where the first predictive model is configured to predict a first characteristic of a semiconductor device, and the second predictive model is configured to predict a second characteristic of the semiconductor device. The semiconductor design system includes an optimizer configured to use the neural network to generate a design model based on a set of input parameters, where the design model includes a set of design parameters for the semiconductor device such that the first characteristic and the second characteristic achieve respective threshold conditions. | 2022-06-30 |
20220207352 | METHODS AND SYSTEMS FOR GENERATING RECOMMENDATIONS FOR COUNTERFACTUAL EXPLANATIONS OF COMPUTER ALERTS THAT ARE AUTOMATICALLY DETECTED BY A MACHINE LEARNING ALGORITHM - Methods and systems are described herein for generating recommendations for counterfactual explanations to computer alerts that are automatically detected by a machine learning algorithm. The methods and systems use an artificial neural network architecture that trains a hybrid classifier and autoencoder. For example, one model (or artificial neural network), which is a classifier, is trained to make predictions. A second model (or artificial neural network), which is an autoencoder, is trained to reconstruct its inputs. Because the second model is trained to reconstruct its inputs, it is implicitly trained to determine what in-sample data looks like. By combining these networks and training them jointly, the system generates predictions (e.g., counterfactual explanations) that are in-sample. | 2022-06-30 |
20220207353 | METHODS AND SYSTEMS FOR GENERATING RECOMMENDATIONS FOR COUNTERFACTUAL EXPLANATIONS OF COMPUTER ALERTS THAT ARE AUTOMATICALLY DETECTED BY A MACHINE LEARNING ALGORITHM - Methods and systems are described herein for generating recommendations for counterfactual explanations to computer alerts that are automatically detected by a machine learning algorithm. The methods and systems use an artificial neural network architecture that trains a hybrid classifier and autoencoder. For example, one model (or artificial neural network), which is a classifier, is trained to make predictions. A second model (or artificial neural network), which is an autoencoder, is trained to reconstruct its inputs. Because the second model is trained to reconstruct its inputs, it is implicitly trained to determine what in-sample data looks like. By combining these networks and training them jointly, the system generates predictions (e.g., counterfactual explanations) that are in-sample. | 2022-06-30 |
20220207354 | ANALOG CIRCUITS FOR IMPLEMENTING BRAIN EMULATION NEURAL NETWORKS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for implementing brain emulation neural networks using analog circuits. One of the methods includes obtaining data defining a synaptic connectivity graph representing synaptic connectivity between neurons in a brain of a biological organism, wherein the synaptic connectivity graph comprises a plurality of nodes and edges, wherein each edge connects a pair of nodes, each node corresponds to a respective neuron in the brain of the biological organism, and each edge connecting a pair of nodes in the synaptic connectivity graph corresponds to a synaptic connection between a pair of neurons; determining an artificial neural network architecture corresponding to the synaptic connectivity graph; and generating, from the artificial neural network architecture, a design of an analog circuit that is configured to execute a plurality of operations of an artificial neural network having the artificial neural network architecture. | 2022-06-30 |
20220207355 | GENERATIVE ADVERSARIAL NETWORK MANIPULATED IMAGE EFFECTS - Systems and methods herein describe an image manipulation system for generating modified images using a generative adversarial network. The image manipulation system accesses a pre-trained generative adversarial network (GAN), fine-tunes the pre-trained GAN by training a portion of existing neural network layers of the pre-trained GAN and newly added layers of the pre-trained GAN on a secondary image domain, adjusts the weights of the fine-tuned GAN using the weights of the pre-trained GAN, and stores the fine-tuned GAN. An image transformation system uses the generated modified images to train a subsequent neural network, which can access a face from a client device and transform it to a domain of images used for GAN fine-tuning. | 2022-06-30 |
20220207356 | NEURAL NETWORK PROCESSING UNIT WITH NETWORK PROCESSOR AND CONVOLUTION PROCESSOR - A neural network processing unit for a device according to the present invention includes an AV input matcher that receives a video signal or audio signal input from the outside; a convolution computation controller which receives and buffers the video signal or audio signal from the AV input matcher, divides the video signal or audio signal into overlapping video segments according to a size of a convolution kernel, and transfers the divided data; a convolution computation array which consists of a plurality of arrays, performs independent convolution computations for each divided video block by receiving the divided data, and transfers the results; an active pass controller which receives feature map (FM) information as convolution computation results from the plurality of convolution computation arrays to transfer the FM information to the convolution computation controller again for subsequent convolution computations or perform activation determination and pooling computation on a neural network structure; a network processor for generating IP packets and processing TCP/IP or UDP/IP packets to transfer the FM as the convolution computation result to a server through a network; and a control processor for installing and operating software for controlling configuration blocks. According to the present invention, the neural network processing unit for the device has the effect of reducing the computation load of the server by directly performing the distributed convolution operations in the device. | 2022-06-30 |
20220207357 | METHOD OF SHORT-TERM LOAD FORECASTING VIA ACTIVE DEEP MULTI-TASK LEARNING, AND AN APPARATUS FOR THE SAME - A method of load forecasting using multi-task deep learning includes obtaining reference commodity consumption data corresponding to commodity consuming objects, clustering the commodity consuming objects into clusters based on the obtained reference commodity consumption data; obtaining cluster models based on: reference commodity consumption data, reference environmental data, and reference calendar data; inputting, into the cluster models, present data corresponding to the commodity consuming objects; and predicting, based on an output of the cluster models, a future commodity consumption for the commodity consuming objects. The cluster models include multi-task learning processes having joint loss functions. | 2022-06-30 |
20220207358 | MODEL OPTIMIZATION IN INFRASTRUCTURE PROCESSING UNIT (IPU) - An Infrastructure Processing Unit (IPU), including: a model optimization processor configured to optimize an artificial intelligence (AI) model for an accelerator managed by the IPU, and deploy the optimized AI model to the accelerator for execution of an inference; and a local memory configured to store data related to the AI model optimization. | 2022-06-30 |
20220207359 | METHOD AND APPARATUS FOR DYNAMIC NORMALIZATION AND RELAY IN A NEURAL NETWORK - Embodiments are generally directed to methods and apparatuses for dynamic normalization and relay in a neural network. An embodiment of an apparatus for dynamic normalization and relay in a neural network including a hyper normalization layer comprises: a compute engine to: generate a hidden state and a cell state for the hyper normalization layer based on an input feature map for the hyper normalization layer as well as a previous hidden state and a previous cell state; and normalize the input feature map in the hyper normalization layer with the hidden state and the cell state for the hyper normalization layer. | 2022-06-30 |
20220207360 | COMPUTER SYSTEM FOR MULTI-SOURCE DOMAIN ADAPTATIVE TRAINING BASED ON SINGLE NEURAL NETWORK WITHOUT OVERFITTING AND METHOD THEREOF - Various embodiments relate to a computer system for multi-source domain adaptative training based on a single neural network without overfitting and a method thereof. The various embodiments may be configured to regularize data sets of a plurality of domains, extract information shared between the regularized data sets, and implement a training model by performing training based on the extracted information. | 2022-06-30 |
20220207361 | NEURAL NETWORK MODEL QUANTIZATION METHOD AND APPARATUS - A neural network model quantization method and apparatus is provided. The neural network model quantization method includes receiving a neural network model, calculating a quantization parameter corresponding to an operator of the neural network model to be quantized based on bisection approximation, and quantizing the operator to be quantized based on the quantization parameter and obtaining a neural network model having the quantized operator. | 2022-06-30 |
20220207362 | System and Method For Multi-Task Learning Through Spatial Variable Embeddings - A general prediction model is based on an observer traveling around a continuous space, measuring values at some locations, and predicting them at others. The observer is completely agnostic about any particular task being solved; it cares only about measurement locations and their values. A machine learning framework in which seemingly unrelated tasks can be solved by a single model is proposed, whereby input and output variables are embedded into a shared space. The approach is shown to (1) recover intuitive locations of variables in space and time, (2) exploit regularities across related datasets with completely disjoint input and output spaces, and (3) exploit regularities across seemingly unrelated tasks, outperforming task-specific single-task models and multi-task learning alternatives. | 2022-06-30 |
20220207363 | METHOD FOR TRAINING NEURAL NETWORK FOR DRONE BASED OBJECT DETECTION - The present invention relates to a method for training a neural network for object detection. The method includes receiving a detection target image, splitting the detection target image into unit images having a predetermined size, defining an output of the neural network for the split unit images as a first label value, generating a first deformed image by deforming the unit image according to a first rule, and training the neural network by using an output of the neural network for the first deformed image and a loss of the first label value. According to the present invention, it is possible to efficiently train a neural network for detecting an object in a large screen. | 2022-06-30 |
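The split-and-deform training setup above can be illustrated with a minimal sketch. The unit size (32), the zero-padding, and the choice of horizontal flip as the "first rule" deformation are all assumptions for illustration; the abstract does not specify them.

```python
import numpy as np

def split_into_units(img, unit=32):
    """Split an image into fixed-size unit tiles (padding the image
    with zeros up to a multiple of the unit size first)."""
    h, w = img.shape
    ph, pw = (-h) % unit, (-w) % unit
    img = np.pad(img, ((0, ph), (0, pw)), mode="constant")
    H, W = img.shape
    tiles = img.reshape(H // unit, unit, W // unit, unit).swapaxes(1, 2)
    return tiles.reshape(-1, unit, unit)

rng = np.random.default_rng(5)
img = rng.random((70, 100))          # a "large screen" detection target image
units = split_into_units(img, unit=32)

# "First rule" deformation, assumed here to be a horizontal flip; training
# would compare the network's outputs on `units` (the first label values)
# against its outputs on `deformed` to form the consistency loss.
deformed = units[:, :, ::-1]
```

A 70x100 image padded to 96x128 yields a 3x4 grid, i.e. 12 unit tiles.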
20220207364 | FRAMEWORK FOR CODING AND DECODING LOW RANK AND DISPLACEMENT RANK-BASED LAYERS OF DEEP NEURAL NETWORKS - A method and apparatus for conveying information in a bitstream for deep neural network compression, such as in matrices representing weights, biases and non-linearities, to iteratively compress a pre-trained deep neural network by low displacement rank based approximation of the network layer weight matrices. The low displacement rank approximation allows the original layer weight matrices of the pre-trained deep neural network to be replaced by a sum of a small number of structured matrices, enabling compression and low inference complexity. A decoder stage parses a bitstream for inference. | 2022-06-30 |
20220207365 | METHOD FOR CONSTRUCTING AND TRAINING A DETECTOR FOR THE PRESENCE OF ANOMALIES IN A TEMPORAL SIGNAL, ASSOCIATED METHOD AND DEVICES - The present invention describes a method for training a detector ( | 2022-06-30 |
20220207366 | Action-Actor Detection with Graph Neural Networks from Spatiotemporal Tracking Data - A computing system retrieves tracking data from a data store. The tracking data includes a plurality of frames of data for a plurality of events across a plurality of seasons. The computing system converts the tracking data into a plurality of graph-based representations. A graph neural network learns to generate an action prediction for each player in each frame of the tracking data. The computing system generates a trained graph neural network based on the learning. The computing system receives target tracking data for a target event. The target tracking data includes a plurality of target frames. The computing system converts the target tracking data to a plurality of target graph-based representations. Each graph-based representation corresponds to a target frame of the plurality of target frames. The computing system generates, via the trained graph neural network, an action prediction for each player in each target frame. | 2022-06-30 |
20220207367 | Method and Device for Classifying Data - A method of classifying data includes: training a classification model for classifying input data into at least one class, such that a first output value is generated according to a second equation in which a component corresponding to a label distribution of source data is disentangled in a first equation corresponding to the classification model; generating a second output value by applying, to the first output value, information indicating a label distribution of target data; and classifying the target data into the at least one class by using the second output value. | 2022-06-30 |
20220207368 | Embedding Normalization Method and Electronic Device Using Same - A method of training a neural network model for predicting a click-through rate (CTR) of a user in an electronic device includes normalizing an embedding vector on the basis of a feature-wise linear transformation parameter, and inputting the normalized embedding vector into a neural network layer, wherein the feature-wise linear transformation parameter is defined such that the same value is applied to all elements of the embedding vector. | 2022-06-30 |
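A minimal sketch of the feature-wise embedding normalization described above, assuming a layer-norm-style normalization per embedding vector followed by one scalar scale and shift per feature field, so the same value applies to all elements of a vector as the abstract states. The shapes and normalization statistics are assumptions.

```python
import numpy as np

def embedding_norm(E, gamma, beta, eps=1e-5):
    """Normalize each feature field's embedding vector, then apply a
    feature-wise scalar scale/shift (one gamma/beta per field, broadcast
    over all elements of that field's embedding)."""
    mu = E.mean(axis=-1, keepdims=True)
    var = E.var(axis=-1, keepdims=True)
    E_hat = (E - mu) / np.sqrt(var + eps)
    return gamma[:, None] * E_hat + beta[:, None]

rng = np.random.default_rng(6)
E = rng.normal(size=(5, 16))   # 5 feature fields, 16-dim embeddings (assumed)
gamma = np.ones(5)             # one scalar per field, not per element
beta = np.zeros(5)
out = embedding_norm(E, gamma, beta)
```

Tying gamma and beta to the field rather than to individual elements keeps the transformation from distorting the direction of the embedding vector, which matters when downstream CTR layers consume the vector as a whole.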
20220207369 | TRAINING METHOD, STORAGE MEDIUM, AND TRAINING DEVICE - A training method of an autoencoder that performs encoding and decoding, for a computer to execute a process includes encoding input data by the autoencoder; obtaining a probability distribution of feature data obtained by encoding the input data by the autoencoder; adding a noise to the feature data; generating decoded data by decoding the feature data to which the noise is added by the autoencoder; and training the autoencoder to train the probability distribution of the feature data so that an information entropy of the probability distribution and an error between the decoded data and the input data are decreased. | 2022-06-30 |
20220207370 | INFERRING DEVICE, TRAINING DEVICE, INFERRING METHOD, AND TRAINING METHOD - An inferring device includes one or more memories and one or more processors. The one or more processors input a vector relating to an atom into a first network which extracts a feature of the atom in a latent space from the vector relating to the atom, and infer the feature of the atom in the latent space through the first network. | 2022-06-30 |
20220207371 | TRAINING METHOD, STORAGE MEDIUM, AND TRAINING DEVICE - A training method of an autoencoder that performs encoding and decoding, for a computer to execute a process includes encoding input data by the autoencoder; obtaining a probability distribution of feature data obtained by encoding the input data; generating first decoded data by decoding the feature data by the autoencoder; adding a noise to the feature data by the autoencoder; generating second decoded data by decoding the feature data to which the noise is added by the autoencoder; and training the autoencoder to train the probability distribution of the feature data so that a first error between the first decoded data and the input data, a second error between the first decoded data and the second decoded data, and an information entropy of the probability distribution are decreased. | 2022-06-30 |
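The three loss terms in this abstract can be sketched with a toy linear autoencoder. Everything here is illustrative: the linear encoder/decoder, the Gaussian noise scale, the Gaussian (differential) entropy estimate of the latent distribution, and the loss weighting are assumptions, not the patent's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear autoencoder: encoder E, decoder D (assumed shapes)
x = rng.normal(size=(64, 8))
E = rng.normal(scale=0.3, size=(8, 3))
D = rng.normal(scale=0.3, size=(3, 8))

z = x @ E                                        # feature (latent) data
x1 = z @ D                                       # first decoded data (no noise)
z_noisy = z + rng.normal(scale=0.1, size=z.shape)
x2 = z_noisy @ D                                 # second decoded data (noise added)

# Entropy of the latent distribution, estimated under a Gaussian assumption
var = z.var(axis=0) + 1e-8
entropy = 0.5 * np.sum(np.log(2 * np.pi * np.e * var))

err1 = np.mean((x1 - x) ** 2)   # first error: decoded vs. input
err2 = np.mean((x1 - x2) ** 2)  # second error: clean vs. noisy decode
loss = err1 + err2 + 0.01 * entropy
```

Training would descend on `loss`, jointly shrinking reconstruction error, sensitivity to latent noise, and the information content of the latent code, a rate-distortion-style trade-off.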
20220207372 | PATTERN-BASED NEURAL NETWORK PRUNING - An example method for pattern-based pruning of neural networks comprises: receiving, by a processing device, a plurality of feature maps produced by an input layer of a neural network; for each feature map of the plurality of feature maps, selecting, from a predetermined set of pruning masks, a pruning mask to be applied to the feature map; pruning the neural network by applying, to each feature map of the plurality of feature maps, a respective selected pruning mask to the feature map; and training the pruned neural network. | 2022-06-30 |
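Selecting a pruning mask per feature map from a predetermined set can be sketched as follows. The 2x2 tiled patterns and the preserved-energy selection criterion are assumptions; the abstract does not specify the mask set or the selection rule.

```python
import numpy as np

# Predetermined pruning patterns (assumed; the patent's mask set is unspecified)
PATTERNS = [
    np.array([[1, 0], [0, 1]]),
    np.array([[0, 1], [1, 0]]),
    np.array([[1, 1], [0, 0]]),
    np.array([[0, 0], [1, 1]]),
]

def best_mask(fmap):
    """Pick the pattern that preserves the most activation energy,
    tiling the 2x2 pattern across the feature map."""
    h, w = fmap.shape
    best, best_energy = None, -1.0
    for p in PATTERNS:
        mask = np.tile(p, (h // 2, w // 2))
        energy = float(np.sum((fmap * mask) ** 2))
        if energy > best_energy:
            best, best_energy = mask, energy
    return best

rng = np.random.default_rng(2)
fmaps = rng.normal(size=(4, 8, 8))                     # feature maps from an input layer
pruned = np.stack([f * best_mask(f) for f in fmaps])   # apply the selected masks
```

Restricting masks to a small predetermined set is what makes this style of pruning hardware-friendly: the sparsity pattern is regular enough to index cheaply, yet each map still gets the pattern that suits it best.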
20220207373 | COMPUTING DEVICE COMPENSATED FOR ACCURACY REDUCTION CAUSED BY PRUNING AND OPERATION METHOD THEREOF - An operation method of a computing device includes selecting first data on which a first pruning is to be performed, down-scaling a first plurality of weights included in a first output channel associated with the first data, up-scaling a second plurality of weights used to generate second data to be multiplied by a weight having a major value from among the first plurality of weights included in the first output channel, calculating the second data based on the up-scaled second plurality of weights, and performing the first pruning. | 2022-06-30 |
20220207374 | MIXED-GRANULARITY-BASED JOINT SPARSE METHOD FOR NEURAL NETWORK - Disclosed in the present invention is a mixed-granularity-based joint sparse method for a neural network. The joint sparse method combines independent vector-wise fine-grained sparsity and block-wise coarse-grained sparsity: a final pruning mask is obtained by performing a bitwise logical AND operation on the pruning masks independently generated by the two sparse methods, and the sparsified weight matrix of the neural network is then obtained. The joint sparsity of the present invention always achieves an inference speed between that of a block sparsity mode and that of a balanced sparsity mode, regardless of the vector row size of the vector-wise fine-grained sparsity and the vector block size of the block-wise coarse-grained sparsity. Pruning the convolutional and fully-connected layers of a neural network in this way offers variable sparse granularity, accelerated inference on general hardware, and high model inference accuracy. | 2022-06-30 |
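The bitwise-AND combination of a vector-wise fine-grained mask and a block-wise coarse-grained mask can be sketched directly. The magnitude-based ranking, the vector row size, block size, and keep ratios are assumptions for illustration.

```python
import numpy as np

def vector_wise_mask(W, row_len=4, keep=2):
    """Fine-grained mask: within each length-`row_len` segment of a row,
    keep the `keep` largest-magnitude weights."""
    mask = np.zeros_like(W, dtype=bool)
    rows, cols = W.shape
    for r in range(rows):
        for c0 in range(0, cols, row_len):
            seg = np.abs(W[r, c0:c0 + row_len])
            top = np.argsort(seg)[-keep:]
            mask[r, c0 + top] = True
    return mask

def block_wise_mask(W, block=2, keep_ratio=0.5):
    """Coarse-grained mask: rank blocks by total |W| and keep the top ones."""
    rows, cols = W.shape
    mask = np.zeros_like(W, dtype=bool)
    scores = []
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            scores.append((float(np.abs(W[r0:r0+block, c0:c0+block]).sum()), r0, c0))
    scores.sort(reverse=True)
    for _, r0, c0 in scores[:int(len(scores) * keep_ratio)]:
        mask[r0:r0+block, c0:c0+block] = True
    return mask

rng = np.random.default_rng(3)
W = rng.normal(size=(8, 8))
joint = vector_wise_mask(W) & block_wise_mask(W)   # bitwise AND, as in the abstract
W_sparse = W * joint
```

Because the joint mask is the intersection of the two independently generated masks, its sparsity is never lower than either component's, which is what lets the method interpolate between the block-sparse and balanced-sparse regimes.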
20220207375 | CONVOLUTIONAL NEURAL NETWORK TUNING SYSTEMS AND METHODS - Systems and methods are provided that tune a convolutional neural network (CNN) to increase both its accuracy and computational efficiency. In some examples, a computing device storing the CNN includes a CNN tuner that is a hardware and/or software component that is configured to execute a tuning process on the CNN. When executing according to this configuration, the CNN tuner iteratively processes the CNN layer by layer to compress and prune selected layers. In so doing, the CNN tuner identifies and removes links and neurons that are superfluous or detrimental to the accuracy of the CNN. | 2022-06-30 |
20220207376 | MATRIX INVERSION USING ANALOG RESISTIVE CROSSBAR ARRAY HARDWARE - Matrix inversion systems and methods are implemented using an analog resistive processing unit (RPU) array for hardware accelerated computing. A request is received from an application to compute an inverse matrix of a given matrix, and a matrix inversion process is performed in response to the received request. The matrix inversion process includes storing a first estimated inverse matrix of the given matrix in an array of RPU cells, performing a first iterative process on the first estimated inverse matrix stored in the array of RPU cells to converge the first estimated inverse matrix to a second estimated inverse matrix of the given matrix, and reading the second estimated inverse matrix from the array of RPU cells upon completion of the first iterative process. An inverse matrix is returned to the application, wherein the returned inverse matrix is based, at least in part, on the second estimated inverse matrix. | 2022-06-30 |
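The abstract leaves the iterative refinement unspecified; one classical candidate for converging an estimated inverse is the Newton-Schulz iteration, shown here in plain NumPy as an assumed stand-in for what the RPU array would compute in analog.

```python
import numpy as np

def newton_schulz_inverse(A, iters=30):
    """Iteratively refine an estimated inverse X via the Newton-Schulz
    update X <- X (2I - A X). Illustrative only; the patent's actual
    RPU update rule is not given in the abstract."""
    n = A.shape[0]
    # Standard safe initialization: X0 = A^T / (||A||_1 ||A||_inf)
    # guarantees convergence for any nonsingular A.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)   # two matrix products per step
    return X

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5)) + 5 * np.eye(5)   # well-conditioned test matrix
X = newton_schulz_inverse(A)
```

The appeal for analog crossbar hardware is that each step is just matrix multiplication, the operation RPU arrays perform natively, and convergence is quadratic once the residual is small.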
20220207377 | METHODS AND APPARATUSES FOR TRAINING NEURAL NETWORKS AND DETECTING CORRELATED OBJECTS - Methods and apparatus for training neural networks and detecting correlated objects are provided. In one aspect, a method of training a neural network includes: detecting a first-class object and second-class objects in an image; generating at least one candidate object group based on the detected first-class object and second-class objects, each candidate object group including at least one first-class object and at least two second-class objects; for each candidate object group, determining a matching degree between the first-class object and each second-class object in the candidate object group based on a neural network; determining a group correlation loss of the candidate object group based on the determined matching degree, the group correlation loss being positively correlated with a matching degree between the first-class object and a non-correlated second-class object; and adjusting network parameters of the neural network based on the group correlation loss. | 2022-06-30 |
20220207378 | SUPER NEURONS WITH NON-LOCALIZED KERNEL OPERATIONS - Systems, methods, apparatuses, and computer program products for a machine learning paradigm. In accordance with some example embodiments, a self-organizing network may include one or more super neuron models with non-localized kernel operations. A set of additional parameters may define a spatial bias as the deviation of a kernel from the pixel location towards x- and y-direction for a k | 2022-06-30 |
20220207379 | TEMPORAL KNOWLEDGE GRAPH COMPLETION METHOD AND APPARATUS BASED ON RECURSION - The disclosure provides a temporal knowledge graph completion method, including: obtaining a temporal knowledge graph; obtaining a corresponding static knowledge graph, and obtaining an updated feature through performing embedding learning on a feature of the static knowledge graph and the static knowledge graph; starting from the sub knowledge graph with the first timestamp, obtaining, based on recursion, an updated embedding learning parameter and an updated feature by taking a sub knowledge graph with a current timestamp, and a feature and an embedding learning parameter of the sub knowledge graph with the current timestamp as input of embedding learning; determining the updated embedding learning parameter and the updated feature as an embedding learning parameter and a feature of a sub knowledge graph with an adjacent next timestamp, until all the sub knowledge graph sequences in the temporal knowledge graph are traversed; and performing fact prediction on each sub knowledge graph. | 2022-06-30 |
20220207380 | APPARATUS AND METHOD FOR VALIDATING DATASET BASED ON FEATURE COVERAGE - A dataset validating method based on a feature coverage according to an exemplary embodiment of the present disclosure includes extracting a feature of a first dataset including a plurality of data using a classification model trained for a predetermined second dataset; clustering labels of the first dataset according to the extracted feature; and validating a coverage of a partial dataset which is a part selected from the first dataset based on the clustering result. | 2022-06-30 |
20220207381 | COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN VECTOR ESTIMATING PROGRAM, APPARATUS FOR ESTIMATING VECTOR, AND METHOD FOR ESTIMATING VECTOR - A non-transitory computer-readable recording medium has stored therein a vector estimating program that causes a computer to execute a process including: obtaining a first vector and second entity information, the first vector being generated by using a first model with reference to graph structure data representing a relationship of a first entity group and being obtained by using first entity information related to the first entity group, the second entity information obtained by updating the first entity information and related to the first entity group and a second entity not being included in the first entity group; generating a second model based on the first vector and information on the first entity group included in the second entity information, the second model being used for obtaining vector data from the second entity information; and estimating a second vector corresponding to the second entity by using the generated second model. | 2022-06-30 |
20220207382 | APPARATUS AND METHOD FOR REFINING DATA AND IMPROVING PERFORMANCE OF BEHAVIOR RECOGNITION MODEL BY REFLECTING TIME-SERIES CHARACTERISTICS OF BEHAVIOR - Provided is an apparatus for refining data and improving the performance of a behavior recognition model by reflecting time-series characteristics of a behavior. The apparatus includes: a data pre-processing unit configured to receive training data and real-time data as input, identify a missing value of sensor data, and interpolate the sensor data; a behavior recognition unit configured to, through a behavior recognition model, generate a behavior recognition classification result for the preprocessed real-time data; a data refinement unit configured to correct the behavior recognition classification result to generate a refined dataset; a learning model update unit configured to analyze a similarity of the refined dataset and, based on a result of the analysis, perform learning to generate the behavior recognition model; and an information output unit configured to express a corrected behavior recognition result to a user. | 2022-06-30 |
20220207383 | FAULT PROPAGATION CONDITION EXTRACTION METHOD AND APPARATUS AND STORAGE MEDIUM - A network device obtains, at different time, a plurality of event-object connection graphs corresponding to a communications network; determines a plurality of subgraphs based on the plurality of event-object connection graphs; updates an object in each of the plurality of subgraphs to a corresponding object type based on a correspondence between an object and an object type, to obtain a plurality of updated subgraphs; and determines a fault propagation condition based on the plurality of updated subgraphs, where the fault propagation condition is used to indicate a path through which a fault is propagated in the communications network. | 2022-06-30 |
20220207384 | Extracting Facts from Unstructured Text - A system, computer program product, and method are provided for extraction of factual data from unstructured natural language (NL) text. A detection model is applied to convert unstructured NL text in a first language to annotated NL text. The detection model identifies two or more mentions from the unstructured NL text and a logical position of the mentions. The detection model further identifies a sequential position for each of the mentions and attaches a sequential position identifier. A pattern of rules corresponding with the annotated NL text is identified and applied to the annotated NL text, and one or more facts embedded within the annotated NL text are extracted and converted into structured data. | 2022-06-30 |
20220207385 | SELF LEARNING DATA LOADING OPTIMIZATION FOR A RULE ENGINE - Methods and systems for using machine learning to automatically determine a data loading configuration for a computer-based rule engine are presented. The computer-based rule engine is configured to use rules to evaluate incoming transaction requests. Data of various data types may be required by the rule engine when evaluating the incoming transaction requests. The data loading configuration specifies pre-loading data associated with at least a first data type and lazy-loading data associated with at least a second data type. Statistical data such as use rates and loading times associated with the various data types may be supplied to a machine learning module to determine a particular loading configuration for the various data types. The computer-based rule engine then loads data according to the data loading configuration when evaluating a subsequent transaction request. | 2022-06-30 |
20220207386 | BEST OUTCOME AIOPS MODELING WITH DATA CONFIDENCE FABRICS - One example method includes receiving a transaction at a digital twin that incorporates all transactions that have occurred at a site from which the transaction was received, and wherein the digital twin was created based in part on a data confidence fabric ledger, entering the transaction in the data confidence fabric ledger at the digital twin, receiving another transaction at the digital twin, wherein the another transaction has caused a problem to occur, entering the another transaction in the data confidence fabric ledger, replaying any transactions that have occurred in a defined time window that includes the another transaction, based on the replaying, identifying a state of a system where the problem occurred, and a time when the problem occurred, and determining a resolution to the problem. | 2022-06-30 |
20220207387 | Systems And Methods For Predictive Drawbridge Operation For Vehicle Navigation - Systems and methods are provided for predicting drawbridge operation based on incoming vessel information and historical drawbridge operation data, and transmitting the drawbridge operation prediction to a GPS system such as an autonomous vehicle or a mobile device application for rerouting a planned navigation route. A transceiver may receive vessel information, e.g., incoming vessel size, type, speed, position, or quantity, or estimated incoming vessel arrival time, from one or more cameras and/or a marine radio, and may receive historical drawbridge operation data based on the incoming vessel information from an online database such that the drawbridge operation prediction is based on the historical data. | 2022-06-30 |
20220207388 | AUTOMATICALLY GENERATING CONDITIONAL INSTRUCTIONS FOR RESOLVING PREDICTED SYSTEM ISSUES USING MACHINE LEARNING TECHNIQUES - Methods, apparatus, and processor-readable storage media for automatically generating conditional instructions for resolving predicted system issues using machine learning techniques are provided herein. An example computer-implemented method includes obtaining a dataset comprising configuration data for a system; identifying portions of the configuration data associated with configuration changes unrelated to the resolution of the at least one system issue by processing the dataset using machine learning-based feature selection techniques; creating an updated dataset by filtering the identified portions from the dataset; grouping the configuration data within the updated dataset into two or more groups using hashing algorithms and similarity metrics; generating a hash model based on the groups of the configuration data; generating, using the hash model, a set of conditional instructions for resolving one or more predicted system issues; and performing at least one automated action based on the set of conditional instructions. | 2022-06-30 |
20220207389 | ESTIMATION RELIABILITY OF HAZARD WARNING FROM A SENSOR - Systems and methods for a model for estimation of reliability of a hazard sensor observation at a vehicle are disclosed. An example apparatus may include a hazard observation interface configured to receive one or more hazard observations from the hazard sensor of the vehicle, wherein each of the one or more hazard observations is associated with a hazard location and a hazard timestamp, a ground truth module configured to determine ground truth data from one or more weather records from a historical weather database based on the hazard location and the hazard timestamp, and a model configured to perform a comparison of the one or more hazard observations to the one or more weather records, the model trained for estimation of reliability of the hazard sensor observations based on the comparison of the one or more hazard observations to the one or more weather records. | 2022-06-30 |
20220207390 | FOCUSED AND GAMIFIED ACTIVE LEARNING FOR MACHINE LEARNING CORPORA DEVELOPMENT - The present disclosure describes techniques and systems to provide focused and gamified active learning for machine learning model development. The present disclosure describes determining an active learning algorithm with which to choose batches of content that correspond to specific categories of content to be annotated. Furthermore, the present disclosure provides that the batches of content, and particularly characteristics of the content can be identified for annotation based on ML model performance, such as an entropy of the ML model. | 2022-06-30 |
20220207391 | INTEGRATED FEATURE ENGINEERING - A feature engineering application receives a plurality of data sets from different data sources for training a model for making a prediction based on new data. The feature engineering application generates primitives based on the data sets. A primitive is to be applied to a variable in the data sets to synthesize a feature. The feature engineering application also receives a temporal parameter that specifies a temporal value for generating time-based features. After the primitives are generated and the temporal parameter is received, the feature engineering application aggregates the plurality of data entities based on primary variables in the plurality of data entities and generate an entity set based on the aggregation. The feature engineering application then synthesize features, including the time-based features, based on the entity set, at least some of the primitives, and the temporal parameter. | 2022-06-30 |
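Applying aggregation primitives to a variable under a temporal cutoff can be sketched as follows; the transaction schema, the primitive set, and the cutoff semantics (use only rows at or before the cutoff) are assumptions for illustration.

```python
import numpy as np

# Toy transaction log: (customer_id, timestamp, amount) -- assumed schema
txns = [
    (1, 1.0, 10.0), (1, 2.0, 30.0), (1, 5.0, 50.0),
    (2, 1.5, 20.0), (2, 4.0, 40.0),
]

# Aggregation "primitives" applied to a variable, as in the abstract
PRIMITIVES = {"sum": np.sum, "mean": np.mean, "count": len}

def synthesize_features(txns, cutoff):
    """Aggregate each customer's amounts, using only rows at or before
    the temporal cutoff so the synthesized features are time-based."""
    by_customer = {}
    for cid, t, amt in txns:
        if t <= cutoff:
            by_customer.setdefault(cid, []).append(amt)
    return {
        cid: {name: float(fn(vals)) for name, fn in PRIMITIVES.items()}
        for cid, vals in by_customer.items()
    }

feats = synthesize_features(txns, cutoff=3.0)
```

The cutoff is the key to avoiding label leakage: features fed to the model at prediction time are built only from data that would actually have existed at that moment.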
20220207392 | GENERATING SUMMARY AND NEXT ACTIONS IN REAL-TIME FOR MULTIPLE USERS FROM INTERACTION RECORDS IN NATURAL LANGUAGE - A system receives messaging, video and/or audio input streams including dialogue spoken by users at a group meeting. From these inputs, the system obtains single or multiple interaction records including natural language text memorializing content spoken by each speaker at a meeting, analyzes the content, and identifies single or multiple action item tasks in the interaction records. The system then generates summaries indicating the action item tasks for the users. From the dialogue content, the system further detects whether each action item is addressed, and whether the action item for a user has a solution, or not. The system further detects whether one action item is a precondition for resolving another action item by the user or in conjunction with another user. Using a pre-configured template, the system generates action item summaries, any associated solution, and any relationship or precondition between action items and presents the summary to a user. | 2022-06-30 |
20220207393 | METHOD OF PREDICTING SEMICONDUCTOR MATERIAL PROPERTIES AND METHOD OF TESTING SEMICONDUCTOR DEVICE USING THE SAME - Disclosed are methods of predicting semiconductor material properties and methods of testing semiconductor devices using the same. The prediction method comprises preparing a machine learning model that is trained with a training system and using the machine learning model to predict material properties of a target system. The machine learning model is represented as a function of material properties with respect to a descriptor. The descriptor is calculated from unrelaxed charge density (UCD) that is represented by summation of atomic charge density (ACD) of single atoms. | 2022-06-30 |
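The UCD-as-sum-of-ACD construction can be sketched numerically. The spherical Gaussian ACD model, atom positions, effective charges, grid, and the two-component descriptor here are all assumptions; the patent's actual ACD form and descriptor are not given in the abstract.

```python
import numpy as np

def atomic_charge_density(grid, center, z_eff, width=0.5):
    """Spherical Gaussian ACD for a single atom (illustrative model),
    normalized so it integrates to z_eff over all space."""
    r2 = np.sum((grid - center) ** 2, axis=-1)
    norm = z_eff / ((2 * np.pi * width ** 2) ** 1.5)
    return norm * np.exp(-r2 / (2 * width ** 2))

# 3D grid over a small cell
ax = np.linspace(0, 4, 20)
grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1)

# (position, Z_eff) pairs -- assumed atoms
atoms = [((1.0, 2.0, 2.0), 4.0), ((3.0, 2.0, 2.0), 6.0)]

# Unrelaxed charge density = sum of single-atom ACDs, as in the abstract
ucd = sum(atomic_charge_density(grid, np.array(p), z) for p, z in atoms)

# Simple illustrative descriptor: grid integral and peak of the UCD
dv = (ax[1] - ax[0]) ** 3
descriptor = np.array([ucd.sum() * dv, ucd.max()])
```

The attraction of the UCD is that it needs no self-consistent relaxation: summing fixed single-atom densities is cheap, so the descriptor can be computed for a target system without running a full electronic-structure calculation.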
20220207394 | GARMENT SIZE RECOMMENDATION SYSTEM - A method implemented by a computing system comprises receiving, by the computing system and from a user device, a subject profile information that specifies a plurality of body measurements associated with one or more images of a subject that are captured by the user device. The computing system receives a selection of a garment, where the garment is associated with a category, a brand, a style, and a plurality of sizes. A sizing recommendation engine of the computing system determines, based on the plurality of body measurements, a particular size of the garment that is associated with the subject profile information. The computing system communicates the particular size to the user device. | 2022-06-30 |
20220207395 | INTERACTIVE INTERVENTION PLATFORM - This document describes a platform that processes multi-modal inputs received from multiple sensors and initiates actions that cause the user to transition to a target state. In one aspect, a method includes detecting, based on data received from sensors, a current state of a user. A set of candidate states to which the user can transition from the current state is identified based on the current state. A target state for the user is selected based on the data received from the sensors and/or the current state of the user. For each of multiple candidate states, a probability at which the user will transition from the current state to the target state through the candidate state is determined. A next state for the user is selected based on the probabilities. One or more actions are determined and initiated to transition the user from the current state to the next state. | 2022-06-30 |
20220207396 | METHOD AND SYSTEM FOR PROCESSING MULTI-REQUEST APPLICATIONS - A system receives application data to be used in requests made on behalf of an applicant to a selection of evaluator devices. The system includes a predictive model which predicts actual eligibility criteria for acceptance of a request by the evaluator devices, and is trained with a library of application data including previously evaluated requests and outcomes to the previously evaluated requests. The system compiles the application data into separate requests by synchronizing the application data and identifying a common core of data required by each selected evaluator device and compiling the common core of data along with particular requirements of individual evaluator devices. An applicant can thereby complete a multi-request application which generates requests to a plurality of evaluator devices and which avoids duplication of data storage and data transmission, and reduces effort required by the applicant. Implementations include students making applications for admission to academic institutions. | 2022-06-30 |
20220207397 | Artificial Intelligence (AI) Model Evaluation Method and System, and Device - An AI model evaluation method includes: obtaining an AI model and an evaluation data set, where the evaluation data set includes a plurality of pieces of evaluation data carrying labels that are used to indicate real results corresponding to the evaluation data; classifying the evaluation data in the evaluation data set based on a data feature to obtain an evaluation data subset; and calculating inference accuracy of the AI model on the evaluation data subset to obtain an evaluation result of the AI model on data whose value of the data feature meets the condition. | 2022-06-30 |
20220207398 | SYSTEMS AND METHODS FOR MODELING MACHINE LEARNING AND DATA ANALYTICS - Systems and methods for implementing and using a data modeling and machine learning lifecycle management platform that facilitates collaboration among data engineering, development and operations teams and provides capabilities to experiment using different models in a production environment to accelerate the innovation cycle. Stored computer instructions and processors instantiate various modules of the platform. The modules include a user interface, a collector module for accessing various data sources, a workflow module for processing data received from the data sources, a training module for executing stored computer instructions to train one or more data analytics models using the processed data, a predictor module for producing predictive datasets based on the data analytics models, and a challenger module for executing multi-sample hypothesis testing of the data analytics models. | 2022-06-30 |
20220207399 | CONTINUOUSLY LEARNING, STABLE AND ROBUST ONLINE MACHINE LEARNING SYSTEM - An Online Machine Learning System (OMLS) including an Online Preprocessing Engine (OPrE) configured to (a) receive streaming data including an instance comprising a vector of inputs, the vector of inputs comprising a plurality of continuous or categorical features; (b) discretize features; (c) impute missing feature values; (d) normalize features; and (e) detect drift or change in features; an Online Feature Engineering Engine (OFEE) configured to produce features; and an Online Robust Feature Selection Engine (ORFSE) configured to evaluate and select features; an Online Machine Learning Engine (OMLE) configured to incorporate and utilize one or more machine learning algorithms or models utilizing features to generate a result, and capable of incorporating and utilizing multiple different machine learning algorithms or models, wherein each of the OMLE, the OPrE, the OFEE, and the ORFSE are continuously communicatively coupled to each other, and wherein the OMLS is configured to perform continuous online machine learning. | 2022-06-30 |
20220207400 | METHOD AND SYSTEM FOR EXTRACTION OF CAUSE-EFFECT RELATION FROM DOMAIN SPECIFIC TEXT - This disclosure relates generally to extraction of cause-effect relation from domain specific text. Cause-effect relation highlights causal relationship among various entities, concepts and processes in a domain specific text. Conventional state-of-the-art methods use named entity recognition for extraction of cause-effect (CE) relation which does not give precise results. Embodiments of the present disclosure provide a knowledge-based approach for automatic extraction of CE relations from domain specific text. The present disclosure method is a combination of an unsupervised machine learning technique to discover causal triggers and a set of high-precision linguistic rules to identify cause/effect arguments of these causal triggers. The method extracts the CE relation in the form of a triplet comprising a causal trigger, a cause phrase and an effect phrase identified from the domain specific text. The disclosed method is used for extracting CE relations in biomedical text. | 2022-06-30 |
20220207401 | OPTIMIZATION DEVICE, OPTIMIZATION METHOD, AND PROGRAM - A plurality of parameter values can be simultaneously selected to achieve faster optimization of the parameters. | 2022-06-30 |
20220207402 | METHOD AND APPARATUS FOR PERFORMING A QUANTUM COMPUTATION - A method of performing a quantum computation includes providing a quantum system comprising a plurality of qubits. The method includes encoding a computational problem into a problem Hamiltonian of the quantum system. The problem Hamiltonian is a single-body Hamiltonian comprising a plurality of adjustable parameters. The encoding includes determining, from the computational problem, a problem-encoding configuration for the plurality of adjustable parameters. The method includes performing N rounds of operations, wherein N≥2. Each round of the N rounds of operations includes determining a sequence of unitary operators, wherein each unitary operator in the sequence is a unitary operator being a unitary time evolution of the problem Hamiltonian, wherein the plurality of adjustable parameters of the problem Hamiltonian are in the problem-encoding configuration, or a unitary operator being a product of two or more short-range unitary operators. Each round of the N rounds of operations includes evolving the quantum system by applying the sequence of unitary operators to the quantum system. Each round of the N rounds of operations includes performing a measurement of one or more qubits of the quantum system. The method includes outputting a result of the quantum computation. | 2022-06-30 |
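In conventional notation (the symbols below are my own shorthand, not the application's), a single-body problem Hamiltonian and one round of the evolution might be written as:

```latex
H_P = \sum_{k} J_k \, \sigma_z^{(k)}, \qquad U_P(t) = e^{-i t H_P},
\qquad |\psi_r\rangle = U_m \cdots U_2 U_1 \, |\psi_{r-1}\rangle
```

where each $U_j$ is either some $U_P(t_j)$, with the local fields $J_k$ fixed by the problem-encoding configuration, or a product of short-range unitaries, and each round ends with a measurement of one or more qubits.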
20220207403 | TUNABLE CAPACITOR FOR SUPERCONDUCTING QUBITS - An exemplary tunable capacitor in a quantum system includes a pair of qubits, and a capacitive coupling element coupled between the pair of qubits. The capacitive coupling element includes a plurality of gate terminals. The capacitive coupling element is configured to receive a respective gate voltage at each of the plurality of gate terminals and to adjust a capacitance of the capacitive coupling element in response to the respective gate voltage received at each of the plurality of gate terminals. The capacitance of the capacitive coupling element is configured to control a coupling strength between the pair of qubits. | 2022-06-30 |
20220207404 | INPUT/OUTPUT SYSTEMS AND METHODS FOR SUPERCONDUCTING DEVICES - A quantum processor comprises a plurality of tiles, the plurality of tiles arranged in a first grid, and where a first tile of the plurality of tiles comprises a number of qubits (e.g., superconducting qubits). The quantum processor further comprises a shift register comprising at least one shift register stage communicatively coupled to a frequency-multiplexed resonant (FMR) readout, a qubit readout device, a plurality of digital-to-analog converter (DAC) buffer stages, and a plurality of shift-register-loadable DACs arranged in a second grid. The quantum processor may further include a transmission line comprising at least one transmission line inductance, a superconducting resonator, and a coupling capacitance that communicatively couples the superconducting resonator to the transmission line. A digital processor may program at least one of the plurality of shift-register-loadable DACs. Programming the first tile may be performed in parallel with programming a second tile of the plurality of tiles. | 2022-06-30 |
20220207405 | MICROWAVE-OPTIC CONVERSION SYSTEM OF QUANTUM SIGNALS EMPLOYING 3-DIMENSIONAL MICROWAVE RESONATOR AND CRYSTAL OSCILLATOR - An object of the present invention is to provide a microwave-optic conversion system of quantum signals employing a 3-dimensional microwave resonator and a widely commercialized crystal oscillator that may be manufactured by simple machine processing. | 2022-06-30 |
20220207406 | METHODS FOR IMPLEMENTING ERROR-DIVISIBLE QUANTUM GATES - An exemplary method for achieving an error divisible gate in a quantum system includes selecting an intrinsic gate error rate threshold for a gate coupled between a pair of qubits, and determining a first gate time to execute a full entangling gate rotation with a first error rate less than the intrinsic gate error rate threshold. The exemplary method further includes, based on the time to execute the full entangling gate, applying, to the gate, a second gate rotation having a second gate time less than the first gate time to determine a second error rate. The exemplary method further includes selecting the second gate time as a final gate time when the second error rate is smaller than the first error rate. | 2022-06-30 |
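The gate-time selection the abstract describes can be caricatured as a two-step comparison. The error model below (a quadratic coherent term over a fixed floor) is invented purely for illustration; only the selection logic mirrors the abstract:

```python
def gate_error(gate_time, full_time=1.0, floor=1e-4):
    # Toy error model: error grows with the fraction of the full rotation,
    # on top of a fixed decoherence floor. Not from the application.
    return floor + 1e-3 * (gate_time / full_time) ** 2

def choose_gate_time(full_time, threshold=2e-3):
    t1 = full_time
    e1 = gate_error(t1)  # first error rate, for the full entangling rotation
    assert e1 < threshold, "full entangling gate already exceeds threshold"
    t2 = t1 / 2          # try a shorter, partial (divided) rotation
    e2 = gate_error(t2)
    # Keep the shorter gate time only if its error rate is smaller.
    return t2 if e2 < e1 else t1

print(choose_gate_time(1.0))  # → 0.5 under this toy error model
```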
20220207407 | LOCALIZATION OF MACHINE LEARNING MODELS TRAINED WITH GLOBAL DATA - Systems, devices, and techniques are disclosed for localization of machine learning models trained with global data. Data sets of event data for users may be received. The data sets may belong to separate groups. The data sets of event data may be combined to generate a global data set. A matrix factorization model may be trained using the global data set to generate a globally trained matrix factorization model. A localization group data set may be generated including event data from the global data set for users from a first of the groups. The globally trained matrix factorization model may be trained with the localization group data set to generate a localized matrix factorization model for the first of the groups. | 2022-06-30 |
20220207408 | MAPPING MACHINE LEARNING ACTIVATION DATA TO A REPRESENTATIVE VALUE PALETTE - Mapping machine learning activation data to a representative value palette, including: selecting, from a plurality of activation values of a model execution, a plurality of representative values; identifying, for each activation value of the plurality of activation values, a representative value of the plurality of representative values; calculating, for each activation value of the plurality of activation values, a corresponding residual value as a difference between an activation value and a corresponding representative value; and storing, for each activation value of the plurality of activation values, the corresponding residual value and an index of the corresponding representative value. | 2022-06-30 |
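The palette mapping in this abstract is directly codeable: pick representative values, then store each activation as an (index, residual) pair so it can be rebuilt as `palette[index] + residual`. The quantile-based representative selection below is one reasonable choice, not necessarily the one claimed:

```python
def build_palette(activations, k):
    # Pick k representatives spread across the sorted value range.
    s = sorted(activations)
    return [s[i * (len(s) - 1) // (k - 1)] for i in range(k)]

def encode(activations, palette):
    out = []
    for a in activations:
        # Index of the nearest representative, plus the residual difference.
        idx = min(range(len(palette)), key=lambda i: abs(a - palette[i]))
        out.append((idx, a - palette[idx]))
    return out

def decode(encoded, palette):
    return [palette[i] + r for i, r in encoded]

acts = [0.1, 0.4, 0.35, 0.9, 0.05]
pal = build_palette(acts, 3)
enc = encode(acts, pal)
assert decode(enc, pal) == acts  # lossless when full residuals are kept
```

The storage win comes from the residuals being small and low-entropy relative to the raw activations, so they compress well; dropping or quantizing residuals trades accuracy for further compression.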
20220207409 | TIMELINE RESHAPING AND RESCORING - A system, computer program product, and method are presented for facilitating determinations of risk including behavior classifications and predictions through timeline reshaping and rescoring of structured data. One embodiment of the method includes receiving, for one or more target focal objects, at least a portion of a transaction history including a plurality of sequential transactions, where the portion of the transaction history is associated with a first temporal range. The method also includes generating a first transaction timeline image representative of the portion of the transaction history, where the first temporal range includes a first temporal scaling. The method further includes labeling, through a machine learning (ML) model, the first transaction timeline image. The method also includes reshaping the first transaction timeline image, including rescaling the first temporal range, thereby generating a rescaled transaction timeline image, and labeling the rescaled transaction timeline image. | 2022-06-30 |
20220207410 | INCREMENTAL LEARNING WITHOUT FORGETTING FOR CLASSIFICATION AND DETECTION MODELS - A computing system, computer program product, and computer-implemented method for incremental learning without forgetting for a classification/detection model are provided. The method includes receiving, at a computing system, a classification/detection model including a base embedding space and corresponding base embedding vectors that are based on a base training dataset including base classes. The method also includes expanding the classification/detection model to account for a new training dataset including new classes by lifting the base embedding space to add an orthogonal subspace for the new classes, producing an expanded embedding space and corresponding expanded embedding vectors that are of a higher dimension than the base embedding vectors. In some embodiments, the method also includes further expanding the expanded classification/detection model to account for another new training dataset including additional new classes by lifting the expanded embedding space to add another orthogonal subspace for the additional new classes. | 2022-06-30 |
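The "lifting" idea in this abstract can be shown in miniature: base embeddings are padded with zeros into a higher-dimensional space, while new-class embeddings live entirely in the added dimensions, so they are orthogonal to every base embedding and the old class geometry is untouched. Dimensions and vectors below are invented:

```python
def lift(vectors, extra_dims):
    # Pad each base embedding with zeros -> same geometry, higher dimension.
    return [v + [0.0] * extra_dims for v in vectors]

def new_class_vector(base_dim, coords):
    # Lives only in the added subspace -> orthogonal to every base vector.
    return [0.0] * base_dim + coords

base = [[1.0, 0.0], [0.0, 1.0]]        # two base-class embeddings
lifted = lift(base, 1)                  # now 3-dimensional
novel = new_class_vector(2, [1.0])      # one new-class embedding

dot = sum(a * b for a, b in zip(lifted[0], novel))
assert dot == 0.0  # orthogonality holds by construction
```

Repeating the same lift for each subsequent new training dataset gives the further expansion mentioned at the end of the abstract.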
20220207411 | CLUSTERING OF MACHINE LEARNING (ML) FUNCTIONAL COMPONENTS - A graphics processing unit (GPU) for clustering of machine learning (ML) functional components, including: a plurality of compute units; a plurality of ML clusters, wherein each of the ML clusters comprises at least one arithmetic logic unit (ALU), and wherein each of the ML clusters is associated with a respective subset of the compute units; and a plurality of memory modules each positioned on the GPU adjacent to a respective ML cluster of the plurality of ML clusters, wherein each ML cluster is configured to directly access one or more adjacent memory modules. | 2022-06-30 |
20220207412 | DOMAIN-SPECIFIC CONSTRAINTS FOR PREDICTIVE MODELING - A machine learning system that incorporates arbitrary constraints is provided. The machine learning system selects a set of domain-specific constraints from a plurality of sets of domain-specific constraints. The machine learning system selects a set of general functional relationships from a plurality of sets of general functional relationships. The machine learning system maps the selected set of general functional relationships and the selected set of domain-specific constraints to a set of learning transforms. The machine learning system modifies a machine learning specification according to the set of learning transforms, wherein the machine learning specification specifies a model construction, a model setup, and a training objective function. The machine learning system optimizes a machine learning model according to the modified machine learning specification. | 2022-06-30 |
20220207413 | LOSS AUGMENTATION FOR PREDICTIVE MODELING - A machine learning system that incorporates arbitrary constraints into a deep learning model is provided. The machine learning system provides a set of penalty data points based on a set of arbitrary constraints in addition to a set of original training data points. The machine learning system assigns a penalty to each penalty data point in the set of penalty data points. The machine learning system optimizes a machine learning model by solving an objective function based on an original loss function and a penalty loss function. The original loss function is evaluated over a set of original training data points and the penalty loss function is evaluated over the set of penalty data points. The machine learning system provides the optimized machine learning model based on a solution of the objective function. | 2022-06-30 |
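A toy rendering of the augmented objective: total loss is an ordinary squared error over the training points plus a weighted penalty loss over constraint-violating penalty points. The concrete constraint (predictions must be non-negative) and the one-parameter model are my own example, not from the application:

```python
def model(w, x):
    return w * x  # one-parameter linear model, for illustration only

def objective(w, train, penalty_pts, lam=10.0):
    # Original loss over the training data.
    orig = sum((model(w, x) - y) ** 2 for x, y in train)
    # Penalty loss: punish negative predictions at the penalty points.
    pen = sum(max(0.0, -model(w, x)) ** 2 for x in penalty_pts)
    return orig + lam * pen

train = [(1.0, 2.0), (2.0, 4.0)]
penalty_pts = [0.5, 1.5]       # points where f(x) >= 0 is enforced
print(objective(2.0, train, penalty_pts))   # fits data, no violations
print(objective(-1.0, train, penalty_pts))  # data loss plus constraint loss
```

An optimizer minimizing this combined objective is steered away from parameter regions that violate the constraints, without the constraints having to appear in the training data itself.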
20220207414 | SYSTEM PERFORMANCE OPTIMIZATION - A system for providing performance optimization for a software solution may scan multiple predefined levels of the software solution to extract corresponding metadata information from each of the multiple predefined levels. The system may store the extracted corresponding metadata information pertaining to standard parameters associated with performance of the software solution. The system may determine a standard score based on a plurality of attributes of the extracted corresponding metadata information, optimize the determined standard score based on training data received from a learning model, and generate insight information comprising information related to determined rule violations and evaluation steps involved in determining the determined standard score. | 2022-06-30 |
20220207415 | PREDICTING COMPONENT LIFESPAN INFORMATION BY PROCESSING USER INSTALL BASE DATA AND ENVIRONMENT-RELATED DATA USING MACHINE LEARNING TECHNIQUES - Methods, apparatus, and processor-readable storage media for predicting component lifespan information by processing user install base data and environment-related data using machine learning techniques are provided herein. An example computer-implemented method includes obtaining install base data associated with at least one system component and environment-related data associated with usage of the at least one system component; performing feature analysis on at least a portion of the obtained data using a first set of machine learning techniques; clustering, based on the feature analysis, at least a portion of the install base data and at least a portion of the environment-related data into one or more groups using a second set of machine learning techniques; generating at least one lifespan information prediction attributed to the at least one system component based on the clustering; and performing at least one automated action based on the at least one lifespan information prediction. | 2022-06-30 |
20220207416 | System and method of providing correction assistance on machine learning workflow predictions - A system and method for providing assistance to complete machine learning on workflow engines that deal with machine learning flows comprising operators configured in a coordinate grid. The process analyzes the positions and composition of operators, branches, inconsistencies, collisions and redundancy in the workflow in order to suggest to the user which changes should be made to the workflow. | 2022-06-30 |
20220207417 | TECHNIQUES FOR INTUITIVE MACHINE LEARNING DEVELOPMENT AND OPTIMIZATION - Various embodiments are generally directed to techniques for intuitive machine learning (ML) development and optimization, such as for application in a content services platform (CSP), for instance. Many embodiments include a ML model developer and a ML model evaluator to provide a graphical user interface that guides ML laymen in developing, evaluating, implementing, managing, and/or optimizing ML models. Some embodiments are particularly directed to a common interface that provides a step-by-step user experience to develop and implement ML techniques. For example, embodiments may include computing a health score for various aspects of developing and/or optimizing ML models, and using the health score, and the factors contributing thereto, to guide production of a valuable ML model. These and other embodiments are described and claimed. | 2022-06-30 |
20220207418 | TECHNIQUES FOR DYNAMIC MACHINE LEARNING INTEGRATION - Various embodiments are generally directed to techniques for dynamically integrating ML functionality into computing systems, such as a content services platform (CSP), for instance. Many embodiments include ML integrated into a CSP and using production content as corpora (e.g., training and/or evaluation data). Some embodiments are particularly directed to generating and updating data for training and evaluating machine learning (ML) models, then making identified ML models available in various target environments. For example, embodiments may provide automatic, or semi-automatic, updating and deploying of ML models for making inferences, such as inferring labels for data in a content repository of a CSP. | 2022-06-30 |
20220207419 | PREDICTIVE ENGINE FOR TRACKING SELECT SEISMIC VARIABLES AND PREDICTING HORIZONS - An apparatus for processing seismic data variables comprising a tracking module and an interpretation module. The tracking module selects groupings of subsurface data variables from the seismic data variables, selects a subsurface data variable for each grouping, and determines an isochron variable for each subsurface data variable for each grouping. Each grouping of subsurface data variables has spatial coordinate values. The interpretation module predicts a horizon variable for each grouping using the isochron variable and an algorithmic model or trained algorithmic model. The interpretation module predicts a horizon variable using the isochron variable for each grouping and a trained algorithmic model. The tracking module selects the subsurface data variable for each grouping based on a peak, trough or zero-crossing identified in the grouping. The trained algorithmic model uses multivariate classification or multivariate linear regression analysis using the isochron variables and associated seismic data variables against a dataset to predict the horizons. | 2022-06-30 |
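The peak-based pick the tracking module makes can be sketched in a few lines: for each trace (grouping), keyed by its spatial coordinates, take the sample index of the peak amplitude as the tracked event. The traces below are invented toy data:

```python
# Hypothetical traces keyed by (x, y) spatial coordinates; each list is
# a column of amplitude samples in time.
traces = {
    (0, 0): [0.1, 0.9, 0.3, -0.2],
    (0, 1): [0.0, 0.2, 1.1, -0.4],
}

# Pick the time-sample index of the peak amplitude per trace; index
# differences between traces play the role of isochron-like values.
picks = {xy: max(range(len(t)), key=t.__getitem__) for xy, t in traces.items()}
print(picks)
```

A trough or zero-crossing pick would swap the `max` criterion accordingly; the trained model then regresses horizon depth against these picks and the associated seismic variables.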
20220207420 | UTILIZING MACHINE LEARNING MODELS TO CHARACTERIZE A RELATIONSHIP BETWEEN A USER AND AN ENTITY - In some implementations, a device may identify an account associated with a user, and the account may be managed by an entity. The device may determine, using at least one machine learning model, a classification of the user that indicates a level of quality of a relationship between the user and the entity. The device may determine, based on the classification determined using the at least one machine learning model, one or more adjustments that are to be applied to one or more charges assessed to the account by the entity. The device may apply the one or more adjustments to the one or more charges. | 2022-06-30 |
20220207421 | METHODS AND SYSTEMS FOR CROSS-PLATFORM USER PROFILING BASED ON DISPARATE DATASETS USING MACHINE LEARNING MODELS - Methods and systems for cross-platform user profiling based on disparate datasets using machine learning models. Specifically, the methods and systems comprise retrieving a cross-platform profile, wherein the cross-platform profile comprises a profile linked to an account, for a user, that is used across multiple assets. The methods and systems may then update a status of the cross-platform profile based on incidents detected using machine learning models. The methods and systems may then generate for presentation, in a user interface for the account, the status of the cross-platform profile. | 2022-06-30 |
20220207422 | PREDICTIVE ENGINE FOR TRACKING SELECT SEISMIC VARIABLES AND PREDICTING HORIZONS - An apparatus for processing seismic data variables comprising a tracking module and an interpretation module. The tracking module selects groupings of subsurface data variables from the seismic data variables, selects a subsurface data variable for each grouping, and determines an isochron variable for each subsurface data variable for each grouping. Each grouping of subsurface data variables has spatial coordinate values. The interpretation module predicts a horizon variable for each grouping using the isochron variable and an algorithmic model or trained algorithmic model. The interpretation module predicts a horizon variable using the isochron variable for each grouping and a trained algorithmic model. The tracking module selects the subsurface data variable for each grouping based on a peak, trough or zero-crossing identified in the grouping. The trained algorithmic model uses multivariate classification or multivariate linear regression analysis using the isochron variables and associated seismic data variables against a dataset to predict the horizons. | 2022-06-30 |
20220207423 | SYSTEM AND METHOD FOR GENERATING A PROCREANT FUNCTIONAL PROGRAM - A system for generating a procreant functional program includes a computing device configured to obtain a procreant marker as a function of a procreant system, determine a procreant appraisal as a function of the procreant marker, wherein determining further comprises producing a procreant enumeration as a function of the procreant marker, and determining the procreant appraisal as a function of the procreant enumeration, and a safe range, receive a conduct indicator, identify a functional signature as a function of the conduct indicator, wherein identifying further comprises obtaining a salubrious reference, and identifying the functional signature as a function of the salubrious reference and the conduct indicator using a functional machine-learning model, and generate a functional program as a function of the functional signature and procreant appraisal using a program machine-learning model. | 2022-06-30 |
20220207424 | ADAPTIVE TRAINING METHOD OF A BRAIN COMPUTER INTERFACE USING A PHYSICAL MENTAL STATE DETECTION - The present invention relates to an adaptive training method of a brain computer interface. The ECoG signals expressing the neural command of the subject are preprocessed to provide at each observation instant an observation data tensor to a predictive model that deduces therefrom a command data tensor making it possible to control a set of effectors. A satisfaction/error mental state decoder predicts at each epoch a satisfaction or error state from the observation data tensor. The mental state predicted at a given instant is used by an automatic data labelling module to generate on the fly new training data from the pair formed by the observation data tensor and the command data tensor at the preceding instant. The parameters of the predictive model are subsequently updated by minimising a cost function on the training data thus generated. | 2022-06-30 |