20th week of 2020 patent application highlights part 54 |
Patent application number | Title | Published |
20200151523 | DETERMINING A PROCESSING SEQUENCE FOR PROCESSING AN IMAGE - A method is for determining a processing sequence for processing an image, the processing sequence including a plurality of algorithms, each respective algorithm of the plurality of algorithms being configured to perform an image processing process on the image to generate a respective output. In an embodiment, the method includes determining one or more required outputs from the processing sequence; and determining, using a data processing system, the processing sequence based on the one or more required outputs determined, the data processing system being configured based on sequences previously determined. | 2020-05-14 |
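The idea in 20200151523 of deriving a processing sequence from required outputs can be sketched as a backward chain over a registry of algorithms. Everything below (the algorithm names, their input/output specs, and the greedy backward-chaining strategy) is an illustrative assumption, not the patent's actual method.

```python
# Hypothetical registry: each algorithm declares the inputs it needs
# and the single output it produces.
ALGORITHMS = {
    "denoise": (["raw"], "clean"),
    "segment": (["clean"], "mask"),
    "measure": (["mask"], "stats"),
}

def determine_sequence(required_outputs, available=("raw",)):
    """Backward-chain from the required outputs to the available inputs,
    returning an ordered list of algorithms to run."""
    available = set(available)
    sequence = []
    pending = list(required_outputs)
    while pending:
        target = pending.pop()
        if target in available:
            continue
        # Find the algorithm that produces this output.
        name, (inputs, _out) = next(
            (n, spec) for n, spec in ALGORITHMS.items() if spec[1] == target
        )
        pending.extend(i for i in inputs if i not in available)
        available.add(target)
        sequence.append(name)
    sequence.reverse()
    return sequence

print(determine_sequence(["stats"]))  # ['denoise', 'segment', 'measure']
```

A production system would, as the abstract notes, also consult previously determined sequences rather than re-deriving the chain each time.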
20200151524 | Automated Content Evaluation Using a Predictive Model - There are provided systems and methods for performing automated content evaluation. In one implementation, the system includes a hardware processor and a system memory storing a software code including a predictive model trained based on an audience response to training content. The hardware processor executes the software code to receive images, each image including facial landmarks of an audience member viewing the content during its duration, and for each image, transforms the facial landmarks to a lower dimensional facial representation, resulting in multiple lower dimensional facial representations of each audience member. For each of a subset of the lower dimensional facial representations of each audience member, the software code utilizes the predictive model to predict one or more responses to the content, resulting in multiple predictions for each audience member, and classifies one or more time segment(s) in the duration of the content based on an aggregate of the predictions. | 2020-05-14 |
20200151525 | METHOD FOR MATCHING CHARACTERS AND ATTRIBUTES BETWEEN SYSTEMS TO PERMIT RECORD MATCHING - A method of matching characters and attributes between data processing systems includes: storing an image that includes identifying indicia in a first data processing system; transmitting the image to a second data processing system; receiving the image at the second data processing system; using the second data processing system, performing optical character recognition of the image; using the second data processing system, interpreting an output of the optical character recognition process to produce an interpretation; transmitting the interpretation from the second data processing system to the first data processing system; receiving the interpretation at the first data processing system; using the first data processing system, storing the interpretation in a record in which the interpretation is associated with: the second data processing system; and at least one of a specific user and a specific vehicle. | 2020-05-14 |
20200151526 | SYSTEM, METHOD FOR CONTROLLING THE SAME, AND METHOD FOR CONTROLLING SERVER - A system includes a server and a printing apparatus. The server manages a first printing content and a second printing content as a predetermined printing target. When an instruction user issues a speech instruction for printing the predetermined printing target to an audio control device as an n-th speech print instruction, the server selects the first printing content associated with the predetermined printing target. | 2020-05-14 |
20200151527 | DURABLE RFID PRINTED FABRIC LABELS - Durable fabric RFID labels are provided for mounting on garments, fabrics and other fabric-containing items, the mounting and durability being before, during or after manufacturing and processing of the items. These labels are robust enough to withstand processing during manufacturing, while being capable of remaining on the item during inventory handling, merchandising and consumer use, including washing and drying. The durable labels include an RFID inlay, a face sheet overlying a first face of the RFID inlay, and a functional adhesive, such as a hot-melt adhesive, overlying a second face of the RFID inlay. The face sheet can be of printable material or have indicia or be a printed face sheet. The functional adhesive can be of a moisture-resistive type. The RFID inlay can be encased within a pocket of polymeric material. A polymeric sheet reinforcement layer can be adhered to and cover all or a portion of the RFID inlay. | 2020-05-14 |
20200151528 | PERSONALIZED PATTERN-BASED COMMODITY VIRTUAL CODE ASSIGNMENT METHOD AND SYSTEM - A personalized pattern-based commodity virtual code assignment method and system, which assign commodity codes to commodities without printing the codes on them. The present invention prints a naturally formed, two-dimensional, random personalized feature pattern on each commodity using traditional processes such as forme-based printing; collects the personalized feature information and assigns commodity codes; and associates and stores the personalized feature information and the commodity codes in a preset database. The commodity code can be retrieved from the database by scanning the personalized feature pattern on a commodity with a client. With the present invention, code assignment can be conducted without a digital printer, so no digital printing process is required and the code assignment cost is reduced. By printing personalized patterns and associating them with commodity codes, the present invention opens up another way to realize commodity code assignment without digital printers. | 2020-05-14 |
20200151529 | A METHOD FOR STORING DATA IDENTIFYING FABRIC INFORMATION AND DATA CARRIER - A method for storing data for identifying fabric information comprises: dividing the fabric information into N types of data information according to information types, and storing the N types of data information on a data carrier in the form of codes respectively, wherein different codes within a same type of data information represent different, specifically defined pieces of the fabric information. Because the fabric information is classified by information type and each type of data information represents specific defined information in the form of codes, the storage space occupied by the fabric information is reduced. Also disclosed is a data carrier that can store a large amount of fabric information in a small space; the codes stored in the data carrier can be acquired through identification by a corresponding identifier, and the identifier can further acquire the specific fabric information according to the codes. | 2020-05-14 |
20200151530 | CARD WITH ERGONOMIC TEXTURED GRIP - Approaches herein provide a transaction card with an ergonomic textured grip. In some approaches, a card includes a body having a first main side and a second main side, and an identification chip along the first main side of the body. The card may further include a textured grip along the second main side of the body, wherein the textured grip comprises a plurality of curvilinear grip elements extending in an undulating arrangement between a first end and a second end of the textured grip. | 2020-05-14 |
20200151531 | RFID TOOL CONTROL SYSTEM - A tuned radio frequency identification (RFID) label retrofittable to an article of manufacture, comprising: an RFID tag having a lower side and an upper side opposite the lower side, the lower side configured to rest against an article of manufacture; a cushioning layer having a lower side and an upper side opposite the lower side, the lower side covering the RFID tag, the cushioning layer attached to the upper side of the RFID tag by a first adhesive layer; and a skirt layer having a lower side attached by a second adhesive layer to the upper side of the cushioning layer. | 2020-05-14 |
20200151532 | TAGGING OBJECTS IN INDOOR SPACES USING AMBIENT, DISTRIBUTED BACKSCATTER - A product tagging system is provided. The product tagging system includes at least one RF backscatter transmitter configured to emit (i) a main carrier RF signal, and (ii) Radio Frequency (RF) signals on two frequencies whose summation forms a twin carrier RF signal. The product tagging system further includes a passive RF backscatter tag associated with a product and configured to reflect and frequency shift the main carrier RF signal to a different frequency using the twin carrier RF signal. The product tagging system also includes at least one RF backscatter receiver configured to read the product on the different frequency by detecting a distributed ambient backscatter signal generated by a reflection and frequency shifting of the main carrier RF signal by the passive RF backscatter tag. | 2020-05-14 |
20200151533 | Systems, Methods, And Devices For Commissioning Wireless Sensors - In one embodiment, the present invention comprises a smartphone and encoders for commissioning RFID transponders. The present invention further includes novel systems, devices, and methods for commissioning RFID transponders with unique object class instance numbers without requiring a real-time connection to a serialization database. | 2020-05-14 |
20200151534 | SMART CARDS WITH METAL LAYER(S) AND METHODS OF MANUFACTURE - Metal layers of a smartcard may be provided with slits overlapping at least a portion of a module antenna in an associated transponder chip module disposed in the smartcard so that the metal layer functions as a coupling frame. One or more metal layers may be pre-laminated with plastic layers to form a metal core or clad subassembly for a smartcard, and outer printed and/or overlay plastic layers may be laminated to the front and/or back of the metal core. Front and back overlays may be provided. Pre-laminated metal layers may have an array of card sites, each site having a defined area prepared for the later implanting of a transponder chip module and characterized by different sized perforations and gaps around this defined area adjacent to the RFID slit(s), to facilitate quick removal of the metal when creating a pocket to accept a transponder chip module. | 2020-05-14 |
20200151535 | METAL SMART CARD WITH DUAL INTERFACE CAPABILITY - A smart card having a metal layer, an opening in the metal layer, a dual interface integrated circuit (IC) module and a plug of non-RF-impeding material mounted in the opening, with at least one additional layer stacked relative to the plug. | 2020-05-14 |
20200151536 | ANTENNA STRUCTURE AND DEVICE USING THE SAME - An antenna structure comprises an impedance matching part, a first conductive structure and a second conductive structure. The first conductive structure, with a first length along a first direction, is coupled to a first side of the impedance matching part and has a plurality of first polygon conductive structures, each of which is coupled to each other through a first conductive element. The second conductive structure, with a second length along the first direction, is coupled to a second side of the impedance matching part and has a plurality of second polygon conductive structures, each of which is coupled to each other through a second conductive element, wherein the second length is larger than the first length. The first and second polygon conductive structures protrude toward a second direction. In one embodiment, the antenna structure can be applied to an object having a metal housing or liquid contained therein. | 2020-05-14 |
20200151537 | SYSTEMS AND METHODS FOR GENERATING SECURE TAGS - Systems and methods are provided for decoding secure tags using an authentication server and secure tag reader. The system can include at least one processor and at least one non-transitory memory. The memory can contain instructions that, when executed by the at least one processor, cause the secure tag reader to perform operations. The operations can include detecting a potential secure tag in an image and generating a normalized secure tag image using the image and a stylesheet. The operations can further include providing an identification request to an authentication server, the identification request including the normalized secure tag image. The operations can additionally include receiving rules for decoding tag data encoded into the secure tag as tag feature options and decoding the tag data using the received rules. | 2020-05-14 |
20200151538 | AUTOMATIC FEATURE EXTRACTION FROM AERIAL IMAGES FOR TEST PATTERN SAMPLING AND PATTERN COVERAGE INSPECTION FOR LITHOGRAPHY - According to one or more embodiments of the present invention a computer-implemented method for fabricating a chip includes generating, using an aerial image generation system, a set of aerial images for a chip layout, the set of aerial images including an aerial image corresponding to each region from the chip layout. The method further includes automatically determining, using an artificial neural network, a feature vector for each aerial image from the set of aerial images. The method further includes clustering the aerial images using their corresponding feature vectors. The method further includes selecting, as test samples, a predetermined number of aerial images from each cluster. The method further includes performing a pattern coverage inspection of the chip layout using the aerial images that are selected as test samples. | 2020-05-14 |
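The cluster-then-sample loop in 20200151538 is easy to sketch without the neural-network front end: given one feature vector per aerial image, cluster the vectors and take a fixed number of test samples per cluster. The tiny Lloyd's k-means below and all names are illustrative assumptions, not the patent's implementation.

```python
import math

def kmeans(vectors, k, iters=10):
    """Minimal Lloyd's k-means over tuples; seeds deterministically from
    the first k vectors."""
    centroids = list(vectors[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k), key=lambda c: math.dist(v, centroids[c]))
            clusters[j].append(v)
        # Recompute each centroid; keep the old one if a cluster empties.
        centroids = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return clusters

def select_test_samples(vectors, k, per_cluster):
    """Pick a fixed number of feature vectors (images) from each cluster."""
    return [c[:per_cluster] for c in kmeans(vectors, k)]

# Two well-separated groups of image feature vectors.
feats = [(0.0, 0.0), (5.0, 5.0), (0.1, 0.1), (5.1, 5.1)]
print(select_test_samples(feats, k=2, per_cluster=1))
# [[(0.0, 0.0)], [(5.0, 5.0)]]
```

Taking the first few members of each cluster keeps the sketch deterministic; a real flow would more likely sample the members closest to each centroid.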
20200151539 | STORAGE DEVICE INFERRING READ LEVELS BASED ON ARTIFICIAL NEURAL NETWORK MODEL AND LEARNING METHOD OF ARTIFICIAL NEURAL NETWORK MODEL - A storage device includes a non-volatile memory including a plurality of blocks, a buffer memory that stores a plurality of on-cell counts, which are generated by reading memory cells connected to a plurality of reference word lines of the plurality of blocks by using a read level, and an artificial neural network model, and a controller that inputs an on-cell count corresponding to a target block among the plurality of on-cell counts and a number of a target word line of the target block to the artificial neural network model, and infers a plurality of read levels for reading data of memory cells connected to the target word line using the artificial neural network model. | 2020-05-14 |
20200151540 | LEARNING DEVICE, ESTIMATING DEVICE, LEARNING METHOD, AND COMPUTER PROGRAM PRODUCT - A learning device includes one or more processors. The processors calculate a likelihood of belonging to a plurality of estimated classes, of learning data, by using an estimation model for estimating to which of the estimated classes input data belongs. The processors calculate a weight of a loss function to be used in learning the estimation model such that, when a likelihood of a first class that is closer to correct data than other estimated classes among the estimated classes and likelihoods of a second class and a third class that are adjacent to the first class are applied to a function having a predetermined shape, a position that has an extreme value of the function corresponds to the correct data. The processors learn the estimation model by using the loss function. | 2020-05-14 |
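The "extreme value" condition in 20200151540 has a simple worked form if the "function having a predetermined shape" is taken to be a parabola (our assumption): fit it through the likelihoods of the best class and its two neighbours and read off where its peak lands relative to the correct value.

```python
def peak_offset(l_prev, l_best, l_next):
    """Vertex offset, in class-index units relative to the best class, of
    the parabola through (-1, l_prev), (0, l_best), (1, l_next)."""
    denom = 2.0 * (2.0 * l_best - l_prev - l_next)
    if denom == 0:
        return 0.0  # degenerate: the three points are collinear
    return (l_next - l_prev) / denom

# Symmetric neighbours: the peak sits exactly on the best class.
print(peak_offset(0.2, 0.6, 0.2))                 # 0.0
# Heavier right neighbour: the peak shifts toward it.
print(round(peak_offset(0.1, 0.6, 0.3), 3))       # 0.125
```

The loss weighting the abstract describes would then penalise the gap between this peak position and the position of the correct data.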
20200151541 | Efficient Convolutional Neural Networks - The present disclosure advantageously provides a system and a method for convolving data in a quantized convolutional neural network (CNN). The method includes selecting a set of complex interpolation points, generating a set of complex transform matrices based, at least in part, on the set of complex interpolation points, receiving an input volume from a preceding layer of the quantized CNN, performing a complex Winograd convolution on the input volume and at least one filter, using the set of complex transform matrices, to generate an output volume, and sending the output volume to a subsequent layer of the quantized CNN. | 2020-05-14 |
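The complex-interpolation-point variant in 20200151541 is beyond a short sketch, but the classic real-valued Winograd F(2,3) algorithm shows the transform-matrix structure the abstract refers to: two outputs of a 3-tap convolution in 4 multiplications instead of 6.

```python
def winograd_f23(d, g):
    """Two outputs of the valid 1-D convolution (CNN-style correlation)
    of a 4-vector d with a 3-tap filter g, via Winograd F(2,3)."""
    # Input transform  B^T d
    t0 = d[0] - d[2]
    t1 = d[1] + d[2]
    t2 = d[2] - d[1]
    t3 = d[1] - d[3]
    # Filter transform  G g
    g0 = g[0]
    g1 = (g[0] + g[1] + g[2]) / 2.0
    g2 = (g[0] - g[1] + g[2]) / 2.0
    g3 = g[2]
    # Elementwise products, then output transform  A^T m
    m0, m1, m2, m3 = t0 * g0, t1 * g1, t2 * g2, t3 * g3
    return [m0 + m1 + m2, m1 - m2 - m3]

d, g = [1.0, 2.0, 3.0, 4.0], [1.0, 1.0, 1.0]
direct = [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]
print(winograd_f23(d, g), direct)  # both [6.0, 9.0]
```

The patent's contribution is choosing *complex* interpolation points when building the transform matrices, which changes the matrix entries but not this overall transform-multiply-transform structure.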
20200151542 | QUESTION AND ANSWER MATCHING METHOD, SYSTEM AND STORAGE MEDIUM - The specification discloses a question-and-answer matching method, system and computer storage medium. The method comprises: transforming a user query and one of one or more suggested answers corresponding to the user query, using pre-trained word vectors, to obtain vector representations of the user query and of that suggested answer; performing a convolutional operation on each of the vector representations to extract features; and mapping the convolution results of the two vector representations into a sample annotating space to obtain a matching result for the user query. | 2020-05-14 |
20200151543 | Programming Methods For Neural Network Using Non-volatile Memory Array - An artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses. The synapses are configured to receive inputs and to generate therefrom outputs. Neurons are configured to receive the outputs. The synapses include a plurality of memory cells, wherein each of the memory cells includes spaced apart source and drain regions formed in a semiconductor substrate with a channel region extending there between, a floating gate disposed over and insulated from a first portion of the channel region and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells are configured to multiply the inputs by the stored weight values to generate the outputs. Various algorithms for tuning the memory cells to contain the correct weight values are disclosed. | 2020-05-14 |
20200151544 | RECURRENT NEURAL NETWORKS FOR ONLINE SEQUENCE GENERATION - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a target sequence from a source sequence. In one aspect, the system includes a recurrent neural network configured to, at each time step, receive an input for the time step and process the input to generate a progress score and a set of output scores; and a subsystem configured to, at each time step, generate the recurrent neural network input and provide the input to the recurrent neural network; determine, from the progress score, whether or not to emit a new output at the time step; and, in response to determining to emit a new output, select an output using the output scores and emit the selected output as the output at a next position in the output order. | 2020-05-14 |
20200151545 | UPDATE OF ATTENUATION COEFFICIENT FOR A MODEL CORRESPONDING TO TIME-SERIES INPUT DATA - Provided are a computer program product, a learning apparatus and a learning method. The method includes calculating a first propagation value that is propagated from a propagation source node to a propagation destination node in a neural network including nodes, based on node values of the propagation source node at time points and a weight corresponding to passage of time points based on a first attenuation coefficient. The method also includes updating the first attenuation coefficient by using a first update parameter, that is based on a first propagation value, and an error of the node value of the propagation destination node. | 2020-05-14 |
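The attenuation idea in 20200151545 can be shown numerically: the value propagated along an edge is a decay-weighted sum of the source node's past values, and the attenuation coefficient itself is updated from the destination node's error. The exponential-decay form, the squared-error gradient, and the learning rate are all assumptions of this sketch.

```python
import math

def propagate(history, lam):
    """Decay-weighted sum of past source-node values; history[-1] is the
    most recent time point."""
    T = len(history) - 1
    return sum(x * math.exp(-lam * (T - t)) for t, x in enumerate(history))

def update_lambda(history, lam, target, lr=0.01):
    """One gradient step on the attenuation coefficient for a squared
    error at the propagation destination."""
    T = len(history) - 1
    error = propagate(history, lam) - target
    grad = error * sum(-(T - t) * x * math.exp(-lam * (T - t))
                       for t, x in enumerate(history))
    return lam - lr * grad

hist = [1.0, 1.0, 1.0]
lam = 0.5
print(propagate(hist, lam))          # exp(-1) + exp(-0.5) + 1 ≈ 1.9744
lam2 = update_lambda(hist, lam, target=1.5)
print(lam2 > lam)                    # True: prediction too high, so decay strengthens
```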
20200151546 | IDENTIFYING IMAGE AESTHETICS USING REGION COMPOSITION GRAPHS - The disclosed computer-implemented method may include generating a three-dimensional (3D) feature map for a digital image using a fully convolutional network (FCN). The 3D feature map may be configured to identify features of the digital image and identify an image region for each identified feature. The method may also include generating a region composition graph that includes the identified features and image regions. The region composition graph may be configured to model mutual dependencies between features of the 3D feature map. The method may further include performing a graph convolution on the region composition graph to determine a feature aesthetic value for each node according to the weightings in the node's weighted connecting segments, and calculating a weighted average for each node's feature aesthetic value to provide a combined level of aesthetic appeal for the digital image. Various other methods, systems, and computer-readable media are also disclosed. | 2020-05-14 |
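The final scoring step in 20200151546 can be sketched in a few lines: one graph-convolution pass mixes each region's feature value with its neighbours' according to edge weights, then a weighted average over regions yields a single aesthetic score. The aggregation and normalisation scheme here is an assumption, not the patent's formulation.

```python
def graph_conv(values, edges):
    """One symmetric neighbour-mixing pass; edges maps (i, j) -> weight."""
    out = list(values)
    for (i, j), w in edges.items():
        out[i] += w * values[j]
        out[j] += w * values[i]
    return out

def aesthetic_score(values, edges, node_weights):
    """Weighted average of the mixed per-region aesthetic values."""
    mixed = graph_conv(values, edges)
    total = sum(node_weights)
    return sum(v * w for v, w in zip(mixed, node_weights)) / total

vals = [0.6, 0.8, 0.4]                    # per-region aesthetic values
edges = {(0, 1): 0.5, (1, 2): 0.25}       # region composition graph
print(round(aesthetic_score(vals, edges, [1.0, 2.0, 1.0]), 3))  # 1.0
```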
20200151547 | SOLUTION FOR MACHINE LEARNING SYSTEM - Disclosed is a computer-implemented method for estimating an uncertainty of a prediction generated by a machine learning system, the method including: receiving first data; training a first machine learning model component of a machine learning system with the received first data, the first machine learning model component is trained to generate a prediction; generating an uncertainty estimate of the prediction; training a second machine learning model component of the machine learning system with second data, the second machine learning model component is trained to generate a calibrated uncertainty estimate of the prediction. Also disclosed is a corresponding system. | 2020-05-14 |
20200151548 | OPTIMIZATION DEVICE AND CONTROL METHOD OF OPTIMIZATION DEVICE - A processor holds, in entries corresponding to input identification information k, second change values of the energy value that the processor has calculated, one for each of a predetermined number of state transitions. When the processor stochastically determines whether any of the state transitions is accepted, based on the relative relation between the first change value of the energy value calculated by the processor, the thermal excitation energy derived from the temperature value, and a random number value, it does so by adding an offset value y to the first change value. The offset value y is obtained by multiplying the second change value held in an entry selected from the entries based on the input identification information k by a coefficient α. | 2020-05-14 |
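The acceptance rule in 20200151548 has a standard software analogue: a Metropolis-style test on the energy change, with the offset y added to the change value before comparison against thermal excitation energy. The exact hardware acceptance form may differ; this is a minimal sketch.

```python
import math
import random

def accept(delta_e, temperature, offset_y, rng):
    """Stochastically accept a state transition whose effective energy
    change is delta_e + offset_y, at the given temperature."""
    effective = delta_e + offset_y
    if effective <= 0:
        return True  # net-downhill moves are always accepted
    return rng.random() < math.exp(-effective / temperature)

rng = random.Random(0)
print(accept(-2.0, 1.0, 0.5, rng))   # True: still downhill after the offset
# At T = 1, an uphill move of 1 is accepted with probability exp(-1) ≈ 0.37.
rate = sum(accept(1.0, 1.0, 0.0, rng) for _ in range(10000)) / 10000
print(0.3 < rate < 0.45)             # True
```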
20200151549 | NEURAL PROCESSING UNIT, NEURAL PROCESSING SYSTEM, AND APPLICATION SYSTEM - Provided is a neural processing unit that performs application-work including a first neural network operation, the neural processing unit includes a first processing core configured to execute the first neural network operation, a hardware block reconfigurable as a hardware core configured to perform hardware block-work, and at least one processor configured to execute computer-readable instructions to distribute a part of the application-work as the hardware block-work to the hardware block based on a first workload of the first processing core. | 2020-05-14 |
20200151550 | Machine Learning Accelerator - A neural network circuit for providing a threshold weighted sum of input signals comprises at least two arrays of transistors with programmable threshold voltage, each transistor storing a synaptic weight as a threshold voltage and having a control electrode for receiving an activation input signal. Additionally, for each array of transistors, a reference network associated therewith, which provides a reference signal to be combined with the positive or negative weight current components of the transistors of the associated array, the reference signal having opposite sign compared to the weight current components of the associated array, thereby providing the threshold of the weighted sums of the currents. Further, at least one bitline is configured to receive the combined positive and/or negative current components, each combined with their associated reference signals. | 2020-05-14 |
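The thresholded weighted sum in 20200151550 reduces to plain arithmetic once abstracted away from currents: accumulate the positive and negative weight components separately (as the two transistor arrays do), let the reference signal supply the threshold, and compare. Signal names and the binary output are illustrative assumptions.

```python
def threshold_weighted_sum(inputs, weights, threshold):
    """Binary neuron output: 1 if the net weighted sum of the active
    inputs reaches the threshold, else 0."""
    pos = sum(x * w for x, w in zip(inputs, weights) if w > 0)   # positive array
    neg = sum(x * -w for x, w in zip(inputs, weights) if w < 0)  # negative array
    # The reference networks contribute the threshold with opposite sign
    # to the weight currents; here that is folded into a comparison.
    return 1 if (pos - neg) >= threshold else 0

print(threshold_weighted_sum([1, 0, 1], [0.8, -0.5, 0.4], threshold=1.0))  # 1
print(threshold_weighted_sum([1, 1, 0], [0.8, -0.5, 0.4], threshold=1.0))  # 0
```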
20200151551 | SYSTEMS AND METHODS FOR DETERMINING AN ARTIFICIAL INTELLIGENCE MODEL IN A COMMUNICATION SYSTEM - A system may include multiple client devices and a processing device communicatively coupled to the client devices. Each client device includes an artificial intelligence (AI) chip and is configured to generate an AI model. The processing device may be configured to (i) receive a respective AI model and an associated performance value of the respective AI model from each of the plurality of client devices; (ii) determine an optimal AI model based on the performance values associated with the respective AI models from the plurality of client devices; and (iii) determine a global AI model based on the optimal AI model. The system may load the global AI model into an AI chip of a client device to cause the client device to perform an AI task based on the global AI model in the AI chip. The AI model may include a convolutional neural network. | 2020-05-14 |
20200151553 | MACHINE LEARNING BASED AIRFLOW SENSING FOR AIRCRAFT - Using a set of airflow sensors disposed on an airfoil of an aircraft, first airflow data including an amount of airflow experienced at each airflow sensor at a first time is measured. Using a trained neural network model, the first airflow data is analyzed to determine an airflow state of the aircraft. In response to determining that the aircraft is in the abnormal airflow state, a control surface and a power unit of the aircraft are adjusted. Responsive to the adjusting, the aircraft is returned to the normal airflow state. | 2020-05-14 |
20200151554 | MACHINE LEARNING BASED MODEL FOR SPECTRAL SCAN AND ANALYSIS - Various aspects of the subject technology relate to methods, systems, and machine-readable media for classifying interfering devices. The method includes collecting data samples from a set of known interfering devices, the data samples having characteristics of the set of known interfering devices. The method also includes compiling known feature vectors corresponding to the characteristics of the set of known interfering devices. The method also includes executing the known feature vectors on machine learning algorithms to train the machine learning algorithms to classify the set of known interfering devices. The method also includes generating a machine learning model based on performances of the machine learning algorithms, the machine learning model including at least one of the machine learning algorithms. The method also includes executing future feature vectors corresponding to a set of future interfering devices on the machine learning model to classify the set of future interfering devices. | 2020-05-14 |
20200151555 | DETECTING AND REDUCING BIAS IN MACHINE LEARNING MODELS - A method identifies and removes bias from a machine learning model. A user/computer inputs a plurality of input training data into a machine learning system to generate labeled output data. The user/computer evaluates the labeled output data according to a consistency metric to associate the labeled output data with a corresponding consistency assessment. The user/computer selects each labeled output data whose consistency assessment is greater than a predetermined threshold to form a labeled output data subset, and then creates additional labeling for that subset. The user/computer utilizes the additional labeling to identify labeled output data in the subset that is mislabeled and biased, and then adjusts the machine learning system based on the mislabeled and biased labeled output data. | 2020-05-14 |
20200151556 | INFERENCE FOCUS FOR OFFLINE TRAINING OF SRAM INFERENCE ENGINE IN BINARY NEURAL NETWORK - A Static Random Access Memory (SRAM) device in a binary neural network is provided. The SRAM device includes an SRAM inference engine having an SRAM computation architecture with a forward path that include multiple SRAM cells. The multiple SRAM cells are configured to form a chain of SRAM cells such that an output of a given one of the multiple SRAM cells is an input to a following one of the multiple SRAM cells. The SRAM computation architecture is configured to compute a prediction from an input. | 2020-05-14 |
20200151557 | METHOD AND SYSTEM FOR DEEP NEURAL NETWORKS USING DYNAMICALLY SELECTED FEATURE-RELEVANT POINTS FROM A POINT CLOUD - Methods and systems for deep neural networks using dynamically selected feature-relevant points from a point cloud are described. A plurality of multidimensional feature vectors arranged in a point-feature matrix are received. Each row of the point-feature matrix corresponds to a respective one of the multidimensional feature vectors, and each column of the point-feature matrix corresponds to a respective feature. Each multidimensional feature vector represents a respective unordered point from a point cloud and each multidimensional feature vector includes a respective plurality of feature-correlated values, each feature-correlated value represents a correlation extent of the respective feature. A reduced-max matrix having a selected plurality of feature-relevant vectors is generated. The feature-relevant vectors are selected by, for each respective feature, identifying a respective multidimensional feature vector in the point-feature matrix having a maximum feature-correlated value associated with the respective feature. The reduced-max matrix is output to at least one neural network layer. | 2020-05-14 |
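The "reduced-max" selection in 20200151557 is simple enough to show directly: for every feature (column) of the point-feature matrix, keep the point (row) whose value for that feature is largest, and stack those rows into the reduced matrix. The function and variable names below are ours, not the patent's.

```python
def reduced_max(point_feature):
    """Rows of the point-feature matrix selected by per-column argmax:
    one feature-relevant point per feature."""
    n_features = len(point_feature[0])
    selected = []
    for j in range(n_features):
        best_row = max(range(len(point_feature)),
                       key=lambda i: point_feature[i][j])
        selected.append(point_feature[best_row])
    return selected

pf = [
    [0.9, 0.1, 0.3],   # point 0: most correlated with feature 0
    [0.2, 0.8, 0.1],   # point 1: most correlated with feature 1
    [0.1, 0.2, 0.7],   # point 2: most correlated with feature 2
]
print(reduced_max(pf))
# [[0.9, 0.1, 0.3], [0.2, 0.8, 0.1], [0.1, 0.2, 0.7]]
```

Note that one point can be selected for several features, so the reduced matrix may contain repeated rows.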
20200151558 | SYSTEMS AND METHODS FOR UPDATING AN ARTIFICIAL INTELLIGENCE MODEL BY A SUBSET OF PARAMETERS IN A COMMUNICATION SYSTEM - A system may be configured to obtain a global artificial intelligence (AI) model for uploading into an AI chip to perform AI tasks. The system may implement a training process including receiving updated AI models from one or more client devices, determining a global AI model based on the received AI models from the client devices, and updating initial AI models for the client devices. Each client device may receive an initial AI model and train an updated AI model by training the entire parameters of the AI model together, by training a subset of the parameters of the AI model in a layer by layer fashion, or by training a subset of the parameters by parameter types. Each client device may include one or more AI chips configured to run an AI task to measure performance of an AI model. The AI model may include a convolutional neural network. | 2020-05-14 |
20200151559 | STYLE-BASED ARCHITECTURE FOR GENERATIVE NEURAL NETWORKS - A style-based generative network architecture enables scale-specific control of synthesized output data, such as images. During training, the style-based generative neural network (generator neural network) includes a mapping network and a synthesis network. During prediction, the mapping network may be omitted, replicated, or evaluated several times. The synthesis network may be used to generate highly varied, high-quality output data with a wide variety of attributes. For example, when used to generate images of people's faces, the attributes that may vary are age, ethnicity, camera viewpoint, pose, face shape, eyeglasses, colors (eyes, hair, etc.), hair style, lighting, background, etc. Depending on the task, generated output data may include images, audio, video, three-dimensional (3D) objects, text, etc. | 2020-05-14 |
20200151560 | APPARATUS AND METHODS FOR USING BAYESIAN PROGRAM LEARNING FOR EFFICIENT AND RELIABLE GENERATION OF KNOWLEDGE GRAPH DATA STRUCTURES - In some embodiments, an apparatus includes a memory and a processor operatively coupled to the memory. The processor can be configured to receive multiple heterogeneous data records from at least one data source. The processor is configured to extract a set of features from the multiple data records, and to normalize the extracted set of features. The processor is configured to selectively combine the extracted set of features to define entity records. The processor is configured to associate two or more entity records to form relationships that have an indication of relation type and an indication of relation likelihood. In some embodiments, the processor can be configured to generate a knowledge graph data structure on the entity records and relationships. | 2020-05-14 |
20200151561 | SIGNAL GENERATION DEVICE, SIGNAL GENERATION LEARNING DEVICE, METHOD, AND PROGRAM - A signal generation device includes a variable generation unit and a signal generation unit. The variable generation unit generates a plurality of latent variables corresponding to a plurality of features of a signal. The signal generation unit inputs, to at least one neural network learned in advance, a latent variable representing attributes obtained by converting a part of the plurality of latent variables by an attribute vector representing attributes of a signal to be generated and the other part of the plurality of latent variables representing an identity and generates the signal to be generated using the at least one neural network. | 2020-05-14 |
20200151562 | TRAINING ACTION SELECTION NEURAL NETWORKS USING APPRENTICESHIP - An off-policy reinforcement learning actor-critic neural network system configured to select actions from a continuous action space to be performed by an agent interacting with an environment to perform a task. An observation defines environment state data and reward data. The system has an actor neural network which learns a policy function mapping the state data to action data. A critic neural network learns an action-value (Q) function. A replay buffer stores tuples of the state data, the action data, the reward data and new state data. The replay buffer also includes demonstration transition data comprising a set of the tuples from a demonstration of the task within the environment. The neural network system is configured to train the actor neural network and the critic neural network off-policy using stored tuples from the replay buffer comprising tuples both from operation of the system and from the demonstration transition data. | 2020-05-14 |
20200151563 | METHOD FOR SUPERVISED GRAPH SPARSIFICATION - A method for employing a supervised graph sparsification (SGS) network to use feedback from subsequent graph learning tasks to guide graph sparsification is presented. The method includes, in a training phase, generating sparsified subgraphs by edge sampling from input training graphs following a learned distribution, feeding the sparsified subgraphs to a prediction/classification component, collecting a prediction/classification error, and updating parameters of the learned distribution based on a gradient derived from the prediction/classification error. The method further includes, in a testing phase, generating sparsified subgraphs by edge sampling from input testing graphs following the learned distribution, feeding the sparsified subgraphs to the prediction/classification component, and outputting prediction/classification results to a visualization device. | 2020-05-14 |
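The core sampling step can be sketched as drawing k edges without replacement from a softmax over learned per-edge logits; in training, the gradient of the downstream error would update those logits. The logits here stand in for the learned distribution and are an assumption for illustration:

```python
import numpy as np

def sparsify(edges, logits, k, rng):
    """Sample k edges without replacement, following a softmax distribution."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    idx = rng.choice(len(edges), size=k, replace=False, p=p)
    return [edges[i] for i in idx]

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
# High logits on the first and last edge make them likely to survive.
sub = sparsify(edges, np.array([2.0, 0.0, 0.0, 2.0]), k=2, rng=rng)
```

The sparsified subgraph `sub` would then be fed to the prediction/classification component.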
20200151564 | SYSTEM AND METHOD FOR MULTI-AGENT REINFORCEMENT LEARNING WITH PERIODIC PARAMETER SHARING - A system and method for multi-agent reinforcement learning with periodic parameter sharing that include inputting at least one occupancy grid to a convolutional neural network (CNN) and at least one vehicle dynamic parameter into a first fully connected layer and concatenating outputs of the CNN and the first fully connected layer. The system and method also include providing Q value estimates for agent actions based on processing of the concatenated outputs and choosing at least one autonomous action to be executed by at least one of: an ego agent and a target agent. The system and method further include processing a multi-agent policy that accounts for operation of the ego agent and the target agent with respect to one another within a multi-agent environment based on the at least one autonomous action to be executed by at least one of: the ego agent and the target agent. | 2020-05-14 |
20200151565 | Applied Artificial Intelligence Technology for Processing Trade Data to Detect Patterns Indicative of Potential Trade Spoofing - Various techniques are described for using machine-learning artificial intelligence to improve how trading data can be processed to detect improper trading behaviors such as trade spoofing. In an example embodiment, semi-supervised machine learning is applied to positively labeled and unlabeled training data to develop a classification model that distinguishes between trading behavior likely to qualify as trade spoofing and trading behavior not likely to qualify as trade spoofing. Also, clustering techniques can be employed to segment larger sets of training data and trading data into bursts of trading activities that are to be assessed for potential trade spoofing status. | 2020-05-14 |
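The burst-segmentation step mentioned at the end of the abstract can be sketched as splitting a sorted sequence of trade timestamps wherever the inter-trade gap exceeds a threshold; each resulting burst would then be scored by the classifier. The gap threshold is an illustrative assumption:

```python
import numpy as np

def bursts(timestamps, max_gap=2.0):
    """Split trade timestamps into bursts at gaps larger than max_gap."""
    ts = np.sort(np.asarray(timestamps, dtype=float))
    cuts = np.where(np.diff(ts) > max_gap)[0] + 1
    return np.split(ts, cuts)

b = bursts([0.0, 0.5, 1.0, 10.0, 10.2, 30.0])
print(len(b))  # 3 bursts
```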
20200151566 | SYSTEM AND METHOD FOR IMPLEMENTING AN ARTIFICIALLY INTELLIGENT VIRTUAL ASSISTANT USING MACHINE LEARNING - Systems and methods for implementing an artificially intelligent virtual assistant include collecting a user query; using a competency classification machine learning model to generate a competency label for the user query; using a slot identification machine learning model to segment the text of the query and label each of the slots of the query; generating a slot value for each of the slots of the query; generating a handler for each of the slot values; and using the slot values to: identify an external data source relevant to the user query, fetch user data from the external data source, and apply one or more operations to the query to generate response data; and using the response data to generate a response to the user query. | 2020-05-14 |
20200151567 | TRAINING SEQUENCE GENERATION NEURAL NETWORKS USING QUALITY SCORES - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a sequence generation neural network. One of the methods includes obtaining a batch of training examples; for each of the training examples: processing the training network input in the training example using the neural network to generate an output sequence; for each particular output position in the output sequence: identifying a prefix that includes the system outputs at positions before the particular output position in the output sequence, for each possible system output in the vocabulary, determining a highest quality score that can be assigned to any candidate output sequence that includes the prefix followed by the possible system output, and determining an update to the current values of the network parameters that increases a likelihood that the neural network generates a system output at the position that has a high quality score. | 2020-05-14 |
20200151568 | QUANTIZATION METHOD AND DEVICE FOR WEIGHTS OF BATCH NORMALIZATION LAYER - An embodiment of the present invention provides a quantization method for weights of a plurality of batch normalization layers, including: receiving a plurality of previously learned first weights of the plurality of batch normalization layers; obtaining first distribution information of the plurality of first weights; performing a first quantization on the plurality of first weights using the first distribution information to obtain a plurality of second weights; obtaining second distribution information of the plurality of second weights; and performing a second quantization on the plurality of second weights using the second distribution information to obtain a plurality of final weights, and thereby reducing an error that may occur when quantizing the weight of the batch normalization layer. | 2020-05-14 |
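The two-stage procedure above can be sketched with symmetric uniform quantization, where the "distribution information" is the weight range used to pick the scale: quantize once, recompute the statistics on the quantized weights, then quantize again. The symmetric uniform scheme is an assumption; the patent does not specify it:

```python
import numpy as np

def quantize(w, n_bits=8):
    """Symmetric uniform quantization using the current weight distribution."""
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)  # distribution info
    return np.round(w / scale) * scale

w = np.array([0.51, -1.27, 0.08, 0.9])
w1 = quantize(w)   # first quantization, from the original distribution
w2 = quantize(w1)  # second quantization, from the updated distribution
```

Once the weights lie on the quantization grid, the second pass reproduces them, which is the sense in which the refreshed statistics reduce further error.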
20200151569 | WARPING SEQUENCE DATA FOR LEARNING IN NEURAL NETWORKS - Methods and systems for classification of sequence data include warping training sequence data according to a warping pattern. A neural network is trained using the warped training sequence data. Input sequence data is warped according to the warping pattern. The warped input sequence data is classified using the trained neural network. | 2020-05-14 |
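The warping step can be sketched as resampling a series at warped time indices with linear interpolation; the same pattern is applied to both training and input sequences. The sine-based warping pattern below is an illustrative assumption:

```python
import numpy as np

def warp(seq, strength=0.3):
    """Resample seq at time indices displaced by a fixed warping pattern."""
    n = len(seq)
    t = np.arange(n, dtype=float)
    warped_t = t + strength * np.sin(2 * np.pi * t / n) * (n / (2 * np.pi))
    warped_t = np.clip(warped_t, 0, n - 1)
    return np.interp(warped_t, t, seq)

x = np.linspace(0, 1, 50)
xw = warp(x)  # same length, locally stretched and compressed in time
```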
20200151570 | Training System for Artificial Neural Networks Having a Global Weight Constrainer - An architecture for training the weights of artificial neural networks provides a global constrainer modifying the neuron weights in each iteration not only by the back-propagated error but also by a global constraint constraining these weights based on the value of all weights at that iteration. The ability to accommodate a global constraint is made practical by using a constrained gradient descent which approximates the error gradient deduced in the training as a plane, offsetting the increased complexity of the global constraint. | 2020-05-14 |
20200151571 | TRANSPOSED SPARSE MATRIX MULTIPLY BY DENSE MATRIX FOR NEURAL NETWORK TRAINING - Machine learning systems that implement neural networks typically operate in an inference mode or a training mode. In the training mode, inference operations are performed to help guide the training process. Inference mode operation typically involves forward propagation and intensive access to certain sparse matrices, encoded as a set of vectors. Back propagation and intensive access to transposed versions of the same sparse matrices provide training refinements. Generating a transposed version of a sparse matrix can consume significant additional memory and computation resources. In one embodiment, two additional encoding vectors are generated, providing efficient operations on sparse matrices and also on transposed representations of the same sparse matrices. In a neural network the efficient operations can reduce the amount of memory needed for backpropagation and reduce power consumption. | 2020-05-14 |
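The general idea of extra encoding vectors can be sketched as follows: given a sparse matrix in coordinate form, a stable column-sort permutation lets you traverse the transpose without materializing a second matrix. This is an assumption about the idea in general, not the patented encoding:

```python
import numpy as np

def coo_from_dense(a):
    """Encode a dense matrix as coordinate vectors (rows, cols, values)."""
    rows, cols = np.nonzero(a)
    return rows, cols, a[rows, cols]

def transpose_view(rows, cols, vals):
    """Reorder the same nonzeros so they enumerate the transpose."""
    order = np.argsort(cols, kind="stable")  # the extra encoding vector
    return cols[order], rows[order], vals[order]

a = np.array([[1.0, 0.0], [2.0, 3.0]])
r, c, v = coo_from_dense(a)
rt, ct, vt = transpose_view(r, c, v)  # nonzeros of a.T, no copy of a.T built
```

Because only a permutation is stored, backpropagation can reuse the forward-pass nonzeros, which is the memory saving the abstract points to.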
20200151572 | Using Multiple Functional Blocks for Training Neural Networks - A system is described that performs training operations for a neural network, the system including an analog circuit element functional block with an array of analog circuit elements, and a controller. The controller monitors error values computed using an output from each of one or more initial iterations of a neural network training operation, the one or more initial iterations being performed using neural network data acquired from the memory. When one or more error values are less than a threshold, the controller uses the neural network data from the memory to configure the analog circuit element functional block to perform remaining iterations of the neural network training operation. The controller then causes the analog circuit element functional block to perform the remaining iterations. | 2020-05-14 |
20200151573 | DYNAMIC PRECISION SCALING AT EPOCH GRANULARITY IN NEURAL NETWORKS - A processor determines losses of samples within an input volume that is provided to a neural network during a first epoch, groups the samples into subsets based on losses, and assigns the subsets to operands in the neural network that represent the samples at different precisions. Each subset is associated with a different precision. The processor then processes the subsets in the neural network at the different precisions during the first epoch. In some cases, the samples in the subsets are used in a forward pass and a backward pass through the neural network. A memory is configured to store information representing the samples in the subsets at the different precisions. In some cases, the processor stores information representing model parameters of the neural network in the memory at the different precisions of the subsets of the corresponding samples. | 2020-05-14 |
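The grouping step can be sketched as bucketing samples by per-sample loss and mapping each bucket to a precision; the thresholds and the higher-loss-gets-higher-precision policy below are illustrative assumptions:

```python
import numpy as np

def assign_precision(losses, thresholds=(0.1, 1.0), precisions=(8, 16, 32)):
    """Bucket samples by loss; higher-loss samples get higher precision."""
    bucket = np.digitize(losses, thresholds)
    return np.array(precisions)[bucket]

losses = np.array([0.05, 0.5, 2.0])
prec = assign_precision(losses)
print(prec)  # [ 8 16 32]
```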
20200151574 | COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN LEARNING PROGRAM, LEARNING METHOD, AND LEARNING APPARATUS - A learning method includes: acquiring input data and correct answer information, the input data including a set of multiple pieces of relationship data in which relationships between variables are recorded respectively; determining a conversion rule corresponding to each of the multiple pieces of relationship data such that relationships before and after the conversion of a variable common to the multiple pieces of relationship data are the same, when converting a variable value in each of the multiple pieces of relationship data into converted data rearranging the variable values in an order of input; converting each of the multiple pieces of relationship data into multiple pieces of converted data according to each corresponding conversion rule; and inputting a set of the multiple pieces of converted data to a neural network and causing the neural network to learn a learning model based on the correct answer information. | 2020-05-14 |
20200151575 | METHODS AND TECHNIQUES FOR DEEP LEARNING AT SCALE OVER VERY LARGE DISTRIBUTED DATASETS - An apparatus, method and computer program product for neural network training over very large distributed datasets, wherein a relational database management system (RDBMS) is executed in a computer system comprised of a plurality of compute units, and the RDBMS manages a relational database comprised of one or more tables storing data. One or more local neural network models are trained in the compute units using the data stored locally on the compute units. At least one global neural network model is generated in the compute units by aggregating the local neural network models after the local neural network models are trained. | 2020-05-14 |
20200151576 | TRAINING ADAPTABLE NEURAL NETWORKS BASED ON EVOLVABILITY SEARCH - Systems and methods are disclosed herein for training neural networks that can be adapted to new inputs, new tasks, new environment, etc. by re-training them efficiently. A parameter vector is initialized for a neural network. Perturbed parameter vectors are determined using the parameter vector. Behavior characteristics are determined for each perturbed parameter vector. The parameter vector is modified by moving it in the parameter vector space in a direction that maximizes a diversity metric. Other neural networks can be trained for new tasks or new environments using the parameter vector of the neural network. | 2020-05-14 |
20200151577 | INTELLIGENT ENDPOINT SYSTEMS FOR MANAGING EXTREME DATA - A system and methods are provided that can make distributed and autonomous decision science based recommendations, decisions, and actions that increasingly become smarter and faster over time. The system can comprise intelligent computing devices and components (i.e., Intelligent Endpoint Systems) at the edge or endpoints of the network (e.g., user devices or IoT devices). Each of these Intelligent Endpoint Systems can optionally have the ability to transmit and receive new data or decision science, software, data, and metadata to other intelligent devices and third party components and devices so that data or decision science, whether real-time or near real-time, batch, or manual processing, can be updated and data or decision science driven queries, recommendations and autonomous actions can be broadcasted to other Intelligent Endpoint Systems and third party systems in real-time or near real-time. | 2020-05-14 |
20200151578 | DATA SAMPLE LABEL PROCESSING METHOD AND APPARATUS - Disclosed are a data sample label processing method and apparatus. The data sample label processing method comprises: obtaining a first set of data samples without determined labels and a second set of data samples with determined labels; performing an iteration with the following steps until an accuracy rate meets a preset requirement: training a prediction model based on a combination of the first set of data samples and the second set of data samples; inputting data samples from the first set of data samples into the prediction model to obtain prediction values as learning labels for each data sample, and associating the learning labels with the data samples respectively; obtaining a subset from the first set of data samples, wherein the subset comprises data samples associated with learning labels; obtaining determined labels for the data samples in the subset; obtaining the accuracy rate based at least on the learning labels of the data samples in the subset and the determined labels of the data samples in the subset; and if the accuracy rate does not meet the preset requirement, labeling the data samples in the subset with the determined labels for the data samples in the subset, and moving the subset from the first set of data samples to the second set of data samples; and after the iteration ends, labeling the remaining data samples in the first set with the associated learning labels. | 2020-05-14 |
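One pass of the label-bootstrapping loop above can be sketched as: train on the labeled set, pseudo-label the unlabeled pool, check a reviewed subset against its determined labels, and promote the subset once accuracy suffices. The nearest-centroid "model" is a stand-in assumption for whatever prediction model is used:

```python
import numpy as np

def train(X, y):
    """Toy model: one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    """Assign each sample the class of its nearest centroid."""
    classes = sorted(model)
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

X_lab = np.array([[0.0], [0.2], [1.0], [1.2]])
y_lab = np.array([0, 0, 1, 1])
X_unl = np.array([[0.1], [1.1]])
true_unl = np.array([0, 1])  # "determined labels" obtained on review

model = train(X_lab, y_lab)
pseudo = predict(model, X_unl)                 # learning labels
accuracy = (pseudo == true_unl).mean()
if accuracy >= 0.9:                            # preset requirement met
    X_lab = np.vstack([X_lab, X_unl])          # promote subset to labeled set
    y_lab = np.concatenate([y_lab, true_unl])
```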
20200151579 | SYSTEM FOR MANAGING CALCULATION PROCESSING GRAPH OF ARTIFICIAL NEURAL NETWORK AND METHOD OF MANAGING CALCULATION PROCESSING GRAPH BY USING THE SAME - Provided are a system for managing a calculation processing graph of an artificial neural network and a method of managing a calculation processing graph by using the system. A system for managing a calculation processing graph of an artificial neural network run by a plurality of heterogeneous resources includes: a task manager configured to allocate the plurality of heterogeneous resources to a first subgraph and a second subgraph that are to be run, the first subgraph and the second subgraph being included in the calculation processing graph; a first compiler configured to compile the first subgraph to be executable on a first resource among the plurality of heterogeneous resources; and a second compiler configured to compile the second subgraph to be executable on a second resource among the plurality of heterogeneous resources, wherein the first subgraph and the second subgraph are respectively managed through separate calculation paths. | 2020-05-14 |
20200151580 | GENERATING AND MANAGING DEEP TENSOR NEURAL NETWORKS - Techniques for generating and managing, including simulating and training, deep tensor neural networks are presented. A deep tensor neural network comprises a graph of nodes connected via weighted edges. A network management component (NMC) extracts features from tensor-formatted input data based on tensor-formatted parameters. NMC evolves tensor-formatted input data based on a defined tensor-tensor layer evolution rule, the network generating output data based on evolution of the tensor-formatted input data. The network is activated by non-linear activation functions, wherein the weighted edges and non-linear activation functions operate, based on tensor-tensor functions, to evolve tensor-formatted input data. NMC trains the network based on tensor-formatted training data, comparing output training data output from the network to simulated output data, based on a defined loss function, to determine an update. NMC updates the network, including weight and bias parameters, based on the update, by application of tensor-tensor operations. | 2020-05-14 |
20200151581 | OUTAGE PREVENTION IN AN ELECTRIC POWER DISTRIBUTION GRID USING SMART METER MESSAGING - A system and method are disclosed for using AMI smart meter messaging types and data mining decision trees to determine if local equipment failure is present. The system and method may be used to predict impending failure based upon smart meter message behaviors and to create proactive investigation tickets. The prediction models may be generated from a database of smart meter messaging and customer outage reports. The system and method can be applied to detect failures of higher level device equipment and may be incorporated into customer service processes. The system and method may also be used to determine customer owned equipment failures for referral to electricians. | 2020-05-14 |
20200151582 | ASCRIPTIVE AND DESCRIPTIVE ENTITIES FOR PROCESS AND TRANSLATION: A LIMITED ITERATIVE ONTOLOGICAL NOTATION - The present disclosure describes computer-implemented methods and systems for providing and maintaining a limited iterative ontological notation. | 2020-05-14 |
20200151583 | ATTENTIVE DIALOGUE CUSTOMER SERVICE SYSTEM AND METHOD - A method, system, and article. A non-transitory computer-readable storage medium may include computer-readable program code executable by a processor to: determine a set of customer concerns by a set of AI engines executing on the processor, the set of customer concerns based upon customer input from a customer, the customer input being received by one of: an interactive voice response system, a phone system, a short message service system, an email message, and a data network. The computer-readable program code may be executable to generate an acknowledgment of the set of customer concerns, by the AI engines executing on the processor; and perform a problem-solving cycle, based upon the set of customer concerns. The performing of the problem-solving cycle may include adapting a solution to an obstacle identified in the set of customer concerns; suggesting an alternative solution to the customer; and narrating a set of actions taken during the problem-solving cycle. | 2020-05-14 |
20200151584 | SYSTEMS AND METHODS FOR DETERMINING AN ARTIFICIAL INTELLIGENCE MODEL IN A COMMUNICATION SYSTEM - A device for obtaining a locally optimal AI model may include an artificial intelligence (AI) chip and a processing device configured to receive a first initial AI model from the host device. The device may load the initial AI model into the AI chip to determine a performance value of the AI model based on a dataset, and determine a probability that a current AI model should be replaced by the initial AI model. The device may determine, based on the probability, whether to replace the current AI model with the initial AI model. If it is determined that the current AI model should be replaced, the device may replace the current AI model with the initial AI model. The device may repeat the above processes to obtain a final current AI model, and may transmit the final current AI model to the host device. | 2020-05-14 |
20200151585 | INFORMATION PROCESSING APPARATUS AND RULE GENERATION METHOD - An information processing apparatus includes: a memory; and a processor coupled to the memory and the processor configured to: acquire a plurality of sample videos; identify a position and time at which an attribute appears in each of the plurality of sample videos, the attribute being output by each of one or more pre-trained models to which each of the plurality of sample videos is input; cluster attribute labels based on the position and time of the attribute for each of the plurality of sample videos; and generate a rule by combining attribute labels included in a cluster having a highest frequency of appearance among cluster groups obtained for all of the plurality of sample videos. | 2020-05-14 |
20200151586 | ACTIVITY-BASED INFERENCE OF TITLE PREFERENCES - The disclosed embodiments provide a system for performing activity-based inference of title preferences. During operation, the system determines features and labels related to first title preferences for jobs sought by a first set of candidates. Next, the system inputs the features and the labels as training data for a machine learning model. The system then applies the machine learning model to additional features for a second set of candidates to produce predictions of second title preferences for the second set of candidates. Finally, the system stores the predictions in association with the second set of candidates. | 2020-05-14 |
20200151587 | ANALYZING VIEWER BEHAVIOR IN REAL TIME - Systems, methods and articles of manufacture are provided for analyzing user behavior in real time by ingesting telemetry data related to a streaming media application; feeding the telemetry data to a machine learning model (MLM) that produces a User Experience (UX) command based on the telemetry data and prior telemetry data received from the content streaming application; selecting content items to provide to the client device based on the telemetry data; determining, based on the telemetry data, whether the client device has sufficient free resources to receive the UX command and the content items in a current time window while providing a predefined level of service; when the client device has sufficient free resources to receive the UX command and the content items, encapsulating the UX command with the content items in a content stream; and transmitting the content stream to the client device. | 2020-05-14 |
20200151588 | DECLARATIVE DEBRIEFING FOR PREDICTIVE PIPELINE - Provided are systems and methods for auto-completing debriefing processing for a machine learning model pipeline based on a type of predictive algorithm. In one example, the method may include one or more of building a machine learning model pipeline via a user interface, detecting, via the user interface, a selection associated with a predictive algorithm included within the machine learning model pipeline, in response to the selection, identifying debriefing components for the predictive algorithm based on a type of the predictive algorithm from among a plurality of types of predictive algorithms, and automatically incorporating processing for the debriefing components within the machine learning model pipeline such that values of the debriefing components are generated during training of the predictive algorithm within the machine learning model pipeline. | 2020-05-14 |
20200151589 | Explainable and Automated Decisions in Computer-Based Reasoning Systems - The techniques herein include using an input context to determine a suggested action. One or more explanations may also be determined and returned along with the suggested action. The one or more explanations may include (i) one or more most similar cases to the suggested case (e.g., the case associated with the suggested action) and, optionally, a conviction score for each nearby case; (ii) action probabilities; (iii) excluding cases and distances; (iv) archetype and/or counterfactual cases for the suggested action; (v) feature residuals; (vi) regional model complexity; (vii) fractional dimensionality; (viii) prediction conviction; (ix) feature prediction contribution; and/or other measures such as the ones discussed herein, including certainty. In some embodiments, the explanation data may be used to determine whether to perform a suggested action. | 2020-05-14 |
20200151590 | Explainable and Automated Decisions in Computer-Based Reasoning Systems - The techniques herein include using an input context to determine a suggested action. One or more explanations may also be determined and returned along with the suggested action. The one or more explanations may include (i) one or more most similar cases to the suggested case (e.g., the case associated with the suggested action) and, optionally, a conviction score for each nearby case; (ii) action probabilities; (iii) excluding cases and distances; (iv) archetype and/or counterfactual cases for the suggested action; (v) feature residuals; (vi) regional model complexity; (vii) fractional dimensionality; (viii) prediction conviction; (ix) feature prediction contribution; and/or other measures such as the ones discussed herein, including certainty. In some embodiments, the explanation data may be used to determine whether to perform a suggested action. | 2020-05-14 |
20200151591 | INFORMATION EXTRACTION FROM DOCUMENTS - There is provided a method including sending a first document to a GUI, and receiving at a classification and extraction engine (CEE) from the GUI an input indicating first document data for the first document. The input forms a portion of a dataset. A prediction is generated at the CEE of second document data for a second document using a machine learning model (MLM) configured to receive an input and generate a predicted output. The MLM is trained using the dataset, the input includes one or more tokens corresponding to the second document. The output includes the prediction of the second document data. The prediction is sent to the GUI, and feedback on the prediction is received at the CEE from the GUI, to form a reviewed prediction. The reviewed prediction is added to the dataset to form an enlarged dataset, and the MLM is trained using the enlarged dataset. | 2020-05-14 |
20200151592 | BALLAST WEIGHT MANAGEMENT SYSTEM FOR A WORK VEHICLE - A work vehicle includes: a chassis; a first axle carried by the chassis; a pair of first wheels rotatably coupled to the first axle; a first weight sensor associated with the first axle and configured to output a first weight signal; a second axle carried by the chassis; a pair of second wheels rotatably coupled to the second axle; a second weight sensor associated with the second axle and configured to output a second weight signal; and a controller operatively coupled to the first and second weight sensors. The controller is configured to: receive the first and second weight signals; determine a weight distribution of the work vehicle based on the received first and second weight signals; analyze the determined weight distribution to determine at least one recommended operating parameter; and output a recommendation signal based on the at least one recommended operating parameter. | 2020-05-14 |
20200151593 | STATE JUDGMENT DEVICE AND STATE JUDGMENT METHOD - A state judgment device includes: a data acquisition unit which acquires data related to an industrial machine; an energy state calculation unit which calculates an energy state related to driving of units of the industrial machine on the basis of the data related to the industrial machine acquired by the data acquisition unit; and an abnormal state estimation unit which estimates, on the basis of the energy state related to driving of the units of the industrial machine calculated by the energy state calculation unit, whether operation of the industrial machine is normal or abnormal. | 2020-05-14 |
20200151594 | FOOT DIFFERENTIATION SCORING - Methods, systems, and non-transitory computer readable media for foot differentiation scoring, pertaining to differences between an individual's two feet. A method includes receiving, from sensors of a scanning device, pressure and/or other measurement data corresponding to feet of a user. The method further includes preprocessing the pressure and/or measurement data. The method further includes generating, based on the preprocessed pressure and/or measurement data, a foot differentiation score based on differences between a left foot and a right foot of the user. The foot differentiation score may assign or correspond to a numerical rating based on how different the user's left foot and right foot are from each other. | 2020-05-14 |
20200151595 | AUTOMATED TRAINING AND EXERCISE ADJUSTMENTS BASED ON SENSOR-DETECTED EXERCISE FORM AND PHYSIOLOGICAL ACTIVATION - The invention(s) described are configured to process sensor data in order to optimize or otherwise improve training of users for achievement of goals in relation to performing an activity. The invention(s) can also iteratively adapt training in a personalized manner, with assessment of training results and subsequent modification of training regimens, in order to provide improved alignment between users and their goals. Such iteration can drive interventions provided to users throughout the course of training, and allow the system to iteratively develop better and more precise interventions (e.g., through manual means, through machine learning models with generated training and test data). Such iteration, with large datasets applied to populations of users, can also increase the breadth of user states that can be addressed with respect to provided interventions, and improve rates at which interventions are provided. | 2020-05-14 |
20200151596 | ENTITY RECOGNITION SYSTEM BASED ON INTERACTION VECTORIZATION - An interaction prediction system for accurately predicting the occurrence of interactions, entities associated with the interactions, and/or resources involved with the interactions. The interaction predictions can be used for a number of different purposes, such as improving security of systems, predicting future interactions or the likelihood thereof, or the like. The interaction prediction system described herein more accurately predicts the interactions using modeling and monitoring that increases the processing speeds by reducing the data needed to make the predictions, reduces the memory requirements to make the predictions, and increases the capacity of the processing systems when compared to traditional systems. | 2020-05-14 |
20200151597 | ENTITY RESOURCE RECOMMENDATION SYSTEM BASED ON INTERACTION VECTORIZATION - An interaction prediction system for accurately predicting the occurrence of interactions, entities associated with the interactions, and/or resources involved with the interactions. The interaction predictions can be used for a number of different purposes, such as improving security of systems, predicting future interactions or the likelihood thereof, or the like. The interaction prediction system described herein more accurately predicts the interactions using modeling and monitoring that increases the processing speeds by reducing the data needed to make the predictions, reduces the memory requirements to make the predictions, and increases the capacity of the processing systems when compared to traditional systems. | 2020-05-14 |
20200151598 | Explainable and Automated Decisions in Computer-Based Reasoning Systems - The techniques herein include using an input context to determine a suggested action. One or more explanations may also be determined and returned along with the suggested action. The one or more explanations may include (i) one or more cases most similar to the suggested case (e.g., the case associated with the suggested action) and, optionally, a conviction score for each nearby case; (ii) action probabilities; (iii) excluded cases and distances; (iv) archetype and/or counterfactual cases for the suggested action; (v) feature residuals; (vi) regional model complexity; (vii) fractional dimensionality; (viii) prediction conviction; (ix) feature prediction contribution; and/or other measures such as the ones discussed herein, including certainty. In some embodiments, the explanation data may be used to determine whether to perform a suggested action. | 2020-05-14 |
20200151599 | SYSTEMS AND METHODS FOR MODELLING PREDICTION ERRORS IN PATH-LEARNING OF AN AUTONOMOUS LEARNING AGENT - Systems and methods for modelling prediction errors in path-learning of an autonomous learning agent are provided. The traditional systems and methods provide for machine learning techniques, wherein estimation of errors in prediction is reduced with an increase in the number of path-iterations of the autonomous learning agent. Embodiments of the present disclosure provide for a two-stage modelling technique to model the prediction errors in the path-learning of the autonomous learning agent, wherein the two-stage modelling technique comprises extracting a plurality of fitted error values corresponding to a plurality of predicted actions and actual actions by implementing an autoregressive moving average (ARMA) technique on a set of prediction error values; and estimating, by implementing a linear regression technique on the plurality of fitted error values, a probable deviation of the autonomous learning agent from each actual action amongst the plurality of predicted and actual actions. | 2020-05-14 |
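The two-stage idea above can be illustrated with a simplified stand-in (a least-squares AR(1) fit replaces the full ARMA fit, and all data and names are hypothetical, not the patent's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the agent's per-iteration prediction errors (correlated series).
errors = np.cumsum(rng.normal(0, 0.1, 100))

# Stage 1 (simplified): fit an AR(1) model by least squares as a stand-in
# for the full ARMA fit, producing the fitted error values.
x, y = errors[:-1], errors[1:]
phi = np.dot(x, y) / np.dot(x, x)  # AR(1) coefficient
fitted = phi * x                   # fitted error values

# Stage 2: linear regression of the next-step error on the fitted values,
# estimating the probable deviation at each step.
A = np.column_stack([fitted, np.ones_like(fitted)])
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
predicted_deviation = slope * fitted + intercept
```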
20200151600 | PREDICTION OF OUT OF SPECIFICATION BASED ON A SPATIAL CHARACTERISTIC OF PROCESS VARIABILITY - A method for determining a probabilistic model configured to predict a characteristic (e.g., defects, CD, etc.) of a pattern of a substrate subjected to a patterning process. The method includes obtaining a spatial map of a distribution of a residue corresponding to a characteristic of the pattern on the substrate, determining a zone of the spatial map based on a variation of the distribution of the residue within the spatial map, and determining the probabilistic model based on the zone and the distribution of the residue values or the values of the characteristic of the pattern on the substrate within the zone. | 2020-05-14 |
20200151601 | User Identification with Voiceprints on Online Social Networks - In one embodiment, a method includes, by one or more computing devices of an online social network, receiving, from a client system of a first user and from a second user, a biometric input used to identify the second user, sending, to the client system, a personal identifier for presentation to the second user, receiving, from the client system in response to the presentation of the personal identifier to the second user, an audio input from the second user, determining, based on a comparison of the audio input to a voiceprint of the second user, wherein the voiceprint comprises audio data for auditory identification of the second user, whether the audio input comprises the personal identifier spoken by the second user, and authenticating the second user to access an online account associated with the second user via the client system if the audio input is determined to be spoken by the second user and comprise the personal identifier spoken by the second user. | 2020-05-14 |
20200151602 | DISPERSIVE-RESISTIVE HYBRID ATTENUATOR FOR QUANTUM MICROWAVE CIRCUITS - A resistive component in a hybrid microwave attenuator circuit is configured to attenuate a plurality of frequencies in an input signal. The hybrid microwave attenuator circuit is further configured with a dispersive component to attenuate a second plurality of frequencies within a frequency range by reflecting portions of the input signal at those frequencies that are within the frequency range. The resistive component and the dispersive component are arranged in a series configuration relative to one another in the hybrid microwave attenuator circuit. | 2020-05-14 |
20200151603 | DISTRIBUTABLE EVENT PREDICTION AND MACHINE LEARNING RECOGNITION SYSTEM - A computing device predicts occurrence of an event or classifies an object using distributed unlabeled data. A Laplacian matrix is computed using a kernel function. A predefined number of eigenvectors is selected from a decomposed Laplacian matrix to define a decomposition matrix. A gradient value is computed as a function of the defined decomposition matrix, a plurality of sparse coefficients, and a label matrix, a value of each coefficient of the plurality of sparse coefficients is updated based on the computed gradient value, and the computations are repeated until a convergence parameter value indicates the plurality of sparse coefficients have converged. A classification matrix is defined using the plurality of sparse coefficients to determine the target variable value for each observation vector of the plurality of unclassified observation vectors. The target variable value for each observation vector of the plurality of unclassified observation vectors is output. | 2020-05-14 |
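A toy numpy sketch of the pipeline described above, under assumed details the abstract leaves open (RBF kernel, normalized graph Laplacian, plain gradient descent without an explicit sparsity penalty, and synthetic data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two small Gaussian blobs in 2-D; only the first point of each is labeled.
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(3, 0.3, (10, 2))])
labels = np.full(20, -1)
labels[0], labels[10] = 0, 1

# Kernel function (RBF) and normalized graph Laplacian.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)
D = W.sum(1)
L = np.eye(20) - W / np.sqrt(np.outer(D, D))

# Decomposition matrix: eigenvectors of the smallest eigenvalues.
vals, vecs = np.linalg.eigh(L)
V = vecs[:, :4]  # predefined number of eigenvectors

# Gradient descent on coefficients C so that V @ C matches the label
# matrix Y on the labeled rows only.
Y = np.zeros((20, 2))
Y[0, 0] = Y[10, 1] = 1
mask = (labels >= 0)[:, None]
C = np.zeros((4, 2))
for _ in range(500):
    grad = V.T @ (mask * (V @ C - Y))  # gradient of the masked squared error
    C -= 0.5 * grad

# Classification matrix -> target variable value per observation vector.
pred = (V @ C).argmax(1)
```

Because the eigenvectors are smooth over the similarity graph, fitting the two labeled rows propagates labels to the unlabeled observations.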
20200151604 | Apparatus and Method of Implementing Batch-Mode Active Learning for Technology-Assisted Review of Documents - The present disclosure relates to the electronic document review field and, more particularly, to various apparatuses and methods of implementing batch-mode active learning for technology-assisted review (TAR) of documents (e.g., legal documents). | 2020-05-14 |
20200151605 | Apparatus and Method of Implementing Enhanced Batch-Mode Active Learning for Technology-Assisted Review of Documents - The present disclosure relates to the electronic document review field and, more particularly, to various apparatuses and methods of implementing batch-mode active learning for technology-assisted review (TAR) of documents (e.g., legal documents). | 2020-05-14 |
20200151606 | DYNAMICALLY SCALED TRAINING FLEETS FOR MACHINE LEARNING - A first set of execution platforms is deployed for a set of operations of a training phase of a machine learning model. Prior to the completion of the training phase, a triggering condition for deployment of a different set of execution platforms is detected. The different set of execution platforms is deployed for a subsequent set of training phase operations. | 2020-05-14 |
20200151607 | LAT Based Answer Generation Using Anchor Entities and Proximity - Mechanisms are provided for implementing a proximity based candidate answer pre-processor engine that outputs a sub-set of candidate answers to a question and answer (QA) system. The mechanisms receive a lexical answer type (LAT) and an entity specified in an input natural language question as well as an ontology data structure representing a corpus of natural language content. The mechanisms identify a set of candidate answers having associated nodes in the ontology data structure that are within a predetermined proximity of a node corresponding to the entity, and a sub-set of candidate answers in the set of candidate answers having an entity type corresponding to the LAT. The mechanisms output, to the QA system, the sub-set of candidate answers as candidate answers to the input natural language question for evaluation and selection of a final answer to the input natural language question. | 2020-05-14 |
20200151608 | MERGING FEATURE SUBSETS USING GRAPHICAL REPRESENTATION - A system, method and computer program product provides improved performance in machine learning, decision making and similar processes. In one example method, a plurality of individual subsets of features of a dataset comprising multiple features are received. The subsets may be provided by applying one or more feature selection methods to the dataset. Each subset is represented as a graph based on a predefined graph template. The example method merges the graphs of the plurality of individual subsets by overlaying the graphs on each other to form a merged feature graph. The merged feature graph may be used for identifying a single subset of features for use in machine learning, decision making and similar processes. | 2020-05-14 |
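One way to picture the graph merge described above, with an assumed clique template and an assumed edge-recurrence threshold (neither is specified in the abstract; the names are illustrative):

```python
from itertools import combinations
from collections import Counter

def subset_to_graph(features):
    """Graph template stand-in: each subset becomes a clique, with every
    pair of co-selected features as an edge."""
    return {frozenset(pair) for pair in combinations(sorted(features), 2)}

def merge_feature_subsets(subsets, min_count=2):
    """Overlay the per-subset graphs and keep the features whose edges
    recur in at least `min_count` of the individual subsets."""
    edge_counts = Counter(e for s in subsets for e in subset_to_graph(s))
    return {f for edge, n in edge_counts.items() if n >= min_count for f in edge}

# Three feature subsets, e.g. from three different selection methods.
subsets = [{"age", "income", "zip"}, {"age", "income"}, {"age", "tenure"}]
selected = merge_feature_subsets(subsets)
```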
20200151609 | MULTI DIMENSIONAL SCALE ANALYSIS USING MACHINE LEARNING - The disclosure provides an approach for collecting system state data relating to whether certain system states overload a processor assigned to a controller of the system. The approach further involves using the collected data to train a regression machine learning algorithm to predict whether intended or desired system states will result in processor overload. Depending on the prediction, the approach takes one of several steps to efficiently change system state. | 2020-05-14 |
20200151610 | ENSEMBLE LEARNING PREDICTING METHOD AND SYSTEM - An ensemble learning prediction method includes: establishing a plurality of base predictors based on a plurality of training data; initializing a plurality of sample weights of a plurality of sample data and initializing a processing set; in each iteration round, based on the sample data and the sample weights, establishing a plurality of predictor weighting functions of the predictors in the processing set and predicting each of the sample data by each of the predictors in the processing set for identifying a prediction result; evaluating the predictor weighting functions, and selecting a respective target predictor weighting function from the predictor weighting functions established in each iteration round and selecting a target predictor from the predictors in the processing set to update the processing set and to update the sample weights of the sample data. | 2020-05-14 |
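The iterative weight-update loop above reads like boosting; a minimal AdaBoost-style sketch with decision stumps as the base predictors (the stumps, the weighting function, and the data are illustrative assumptions, not the patent's specific method):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, 200)
y = np.where(X > 0.1, 1, -1)  # toy 1-D labels

w = np.full(len(X), 1 / len(X))      # initialize sample weights
thresholds = np.linspace(-1, 1, 41)  # candidate base predictors (stumps)
ensemble = []                        # (weight, threshold) per target predictor

for _ in range(5):  # iteration rounds
    # Select the stump with the lowest error under the current sample weights.
    errs = [np.sum(w * (np.where(X > t, 1, -1) != y)) for t in thresholds]
    t = thresholds[int(np.argmin(errs))]
    pred = np.where(X > t, 1, -1)
    err = max(np.sum(w * (pred != y)), 1e-12)
    alpha = 0.5 * np.log((1 - err) / err)  # predictor weighting function
    ensemble.append((alpha, t))
    # Update sample weights: misclassified samples gain weight.
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()

def predict(x):
    score = sum(a * np.where(x > t, 1, -1) for a, t in ensemble)
    return np.where(score > 0, 1, -1)

accuracy = np.mean(predict(X) == y)
```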
20200151611 | Machine-Learned Model System - Provided are methods, systems, devices, and tangible non-transitory computer readable media for providing data associated with a machine-learned model library. The disclosed technology can perform operations including providing a machine-learned model library that includes a plurality of machine-learned models trained to generate semantic observations based on sensor data associated with a vehicle. Each machine-learned model of the plurality of machine-learned models can be associated with one or more configurations supported by each machine-learned model. A request for a machine-learned model from the machine-learned model library can be received from a remote computing device. Furthermore, based on the request, the machine-learned model can be provided to the remote computing device. | 2020-05-14 |
20200151612 | Blood Flow Measurement Apparatus Using Doppler Ultrasound And Method Of Operating The Same - Disclosed is a blood flow measurement apparatus using Doppler ultrasound. The apparatus includes a two-dimensional transducer array in which a plurality of transducers are two-dimensionally arranged, an acoustic window detection portion configured to transmit and receive ultrasonic signals by driving some of the plurality of transducers, to detect Doppler signals, and to confirm a transducer corresponding to a Doppler signal having high intensity among the detected Doppler signals, a blood flow detection portion configured to detect Doppler signals with respect to a plurality of steering vectors through beam steering using a plurality of adjacent transducers including the confirmed transducer and configured to confirm a steering vector corresponding to a Doppler signal having highest intensity among the detected Doppler signals, and a Doppler processing portion configured to detect a Doppler signal by performing beam steering using the confirmed steering vector and to obtain blood flow information from the detected Doppler signal. | 2020-05-14 |
20200151613 | METHOD AND APPARATUS FOR MACHINE LEARNING - A machine learning method that may reduce an annotation cost and may improve performance of a target model is provided. Some embodiments of the present disclosure may provide a machine learning method performed by a computing device, including: acquiring a training dataset of a first model including a plurality of data samples to which label information is not given; calculating a miss-prediction probability of the first model on the plurality of data samples; configuring a first data sample group by selecting at least one data sample from the plurality of data samples based on the calculated miss-prediction probability; acquiring first label information on the first data sample group; and performing first learning on the first model by using the first data sample group and the first label information. | 2020-05-14 |
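A small sketch of the selection step above, assuming miss-prediction probability is taken as one minus the model's top class probability (one plausible reading of the abstract; the function name and data are hypothetical):

```python
import numpy as np

def select_first_group(probs, k=3):
    """Pick the k samples the current model is most likely to get wrong
    (highest miss-prediction probability = 1 - max class probability)."""
    probs = np.asarray(probs)
    miss = 1.0 - probs.max(axis=1)     # miss-prediction probability
    return np.argsort(miss)[::-1][:k]  # most uncertain samples first

# Softmax outputs of a first model on 5 unlabeled samples (2 classes).
p = [[0.95, 0.05], [0.55, 0.45], [0.80, 0.20], [0.51, 0.49], [0.99, 0.01]]
group = select_first_group(p, k=2)
```

The samples in `group` would then be sent for labeling and used to train the first model further.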
20200151614 | Using Template Exploration for Large-Scale Machine Learning - Systems and techniques are provided for template exploration in a large-scale machine learning system. A method may include obtaining multiple base templates, each base template comprising multiple features. A template performance score may be obtained for each base template and a first base template may be selected from among the multiple base templates based on the template performance score of the first base template. Multiple cross-templates may be constructed by generating a cross-template of the selected first base template and each of the multiple base templates. Performance of a machine learning model may be tested based on each cross-template to generate a cross-template performance score for each of the cross-templates. A first cross-template may be selected from among the multiple cross-templates based on the cross-template performance score of the cross-template. Accordingly, the first cross-template may be added to the machine learning model. | 2020-05-14 |
20200151615 | MACHINE LEARNING BASED PROCESS FLOW ENGINE - A method for machine-learning based process flow recommendation is provided. The method may include training a machine-learning model by at least processing training data with the machine-learning model. The training data may include a matrix representing one or more existing process flows by at least indicating actions that are performed on a document object to generate a subsequent document object. An indication that a first document object is created as part of a process flow may be received. In response to the indication, the trained machine-learning model may be applied to generate a recommendation to perform, as part of the process flow, an action to generate a second document object. Related systems and articles of manufacture, including computer program products, are also provided. | 2020-05-14 |
20200151616 | MERGING AND OPTIMIZING HETEROGENEOUS RULESETS FOR DEVICE CLASSIFICATION - In one embodiment, a device classification service receives a plurality of device classification rulesets, each ruleset associating a set of device characteristics with a device type label. The device classification service forms a unified ruleset by resolving a conflict between conflicting device characteristics from two or more of the device classification rulesets. The device classification service trains a machine learning-based device classifier using the unified ruleset. The device classification service classifies, using telemetry data for a device in a network as input to the trained device classifier, the device with the device type label. | 2020-05-14 |
20200151617 | SYSTEMS AND METHODS FOR MACHINE GENERATED TRAINING AND IMITATION LEARNING - Embodiments described include systems and methods for generating training content for completion of tasks. The method includes receiving, from each of a plurality of client applications, interactions recorded by the client application via an embedded browser of the client application. The method includes classifying the interactions received from each client application into one or more tasks. The method includes selecting, for a first task of the one or more tasks, from the interactions classified into the first task, a subset of interactions to be included in training content including a recorded example of performing the first task across the one or more network applications. The method includes generating the training content configured to be transmitted to client applications responsive to receiving a request related to the first task. | 2020-05-14 |
20200151618 | THERMALLY-COMPENSATED PROGNOSTIC-SURVEILLANCE TECHNIQUE FOR CRITICAL ASSETS IN OUTDOOR ENVIRONMENTS - During operation, the system obtains time-series sensor signals gathered from sensors in an asset during operation of the asset in an outdoor environment, wherein the time-series sensor signals include temperature signals. Next, the system produces thermally-compensated time-series sensor signals by performing a thermal-compensation operation on the temperature signals to compensate for variations in the temperature signals caused by dynamic variations in an ambient temperature of the outdoor environment. The system then trains a prognostic inferential model for a prognostic pattern-recognition system based on the thermally-compensated time-series sensor signals. During a surveillance mode for the prognostic pattern-recognition system, the system receives recently-generated time-series sensor signals from the asset, and performs a thermal-compensation operation on temperature signals in the recently-generated time-series sensor signals. Finally, the system applies the prognostic inferential model to the thermally-compensated, recently-generated time-series sensor signals to detect incipient anomalies that arise during operation of the asset. | 2020-05-14 |
20200151619 | SYSTEMS AND METHODS FOR DETERMINING MACHINE LEARNING TRAINING APPROACHES BASED ON IDENTIFIED IMPACTS OF ONE OR MORE TYPES OF CONCEPT DRIFT - A system and method for accounting for the impact of concept drift in selecting machine learning training methods to address the identified impact. Pattern recognition is performed on performance metrics of a deployed production model in an Internet-of-Things (IoT) environment to determine the impact that concept drift (data drift) has had on prediction performance. This concurrent analysis is utilized to select one or more approaches for training machine learning models, thereby accounting for the temporal dynamics of concept drift (and its subsequent impact on prediction performance) in a faster and more efficient manner. | 2020-05-14 |
20200151620 | MISTAKEN MESSAGE PREVENTION BASED ON MULTIPLE CLASSIFICATION LAYERS - In an approach to detecting the transmission of messages, analyzing said messages, calculating a message risk score and transmitting a warning notification, one or more computer processors detect transmission of a message from a user to a selected recipient. The one or more computer processors extract message information from the detected message. The one or more computer processors retrieve one or more historical conversations between the user and the selected recipient of the detected message. The one or more computer processors determine a risk score corresponding to sending the detected message to the selected recipient based on applying the extracted message information and the retrieved historical conversations to a cognitive model. | 2020-05-14 |
20200151621 | SYSTEM AND METHOD FOR PREPARING COMPUTING DEVICES IN ANTICIPATION OF PREDICTED USER ARRIVAL - An embodiment of the invention may include a method, computer program product and system for computing device management. An embodiment may include preparing, by an estimated arrival time of a user at a location, at least one computing device needed by the user to perform a computing task at the location. | 2020-05-14 |
20200151622 | LEARNING CRITICALITY OF MISCLASSIFICATIONS USED AS INPUT TO CLASSIFICATION TO REDUCE THE PROBABILITY OF CRITICAL MISCLASSIFICATION - In one embodiment, a device classification service that uses a machine learning-based device type classifier to classify endpoint devices with device types, identifies a set of device types having similar associated traffic telemetry features. The service obtains, via one or more user interfaces, feedback indicative of whether the device type classifier misclassifying an endpoint device having a particular device type in the set with another device type in the set would be a critical misclassification. The service trains, using the obtained feedback, a prediction model to predict an impact of misclassifying the particular device type as one of the other device types in the set of device types. The service also retrains the machine learning-based device type classifier based on a prediction from the prediction model. | 2020-05-14 |
20200151623 | N-BEST SOFTMAX SMOOTHING FOR MINIMUM BAYES RISK TRAINING OF ATTENTION BASED SEQUENCE-TO-SEQUENCE MODELS - A method and apparatus are provided for analyzing sequence-to-sequence data, such as sequence-to-sequence speech data or sequence-to-sequence machine translation data, by minimum Bayes risk (MBR) training a sequence-to-sequence model, with softmax smoothing applied to the N-best generation of the MBR training of the sequence-to-sequence model. | 2020-05-14 |
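Softmax smoothing is commonly implemented as a temperature-like scaling of the logits before normalization, which flattens the distribution over N-best hypotheses so that MBR training sees non-trivial risk weights. A hedged sketch (the smoothing factor `alpha` and its placement are assumptions, not the patent's exact formulation):

```python
import numpy as np

def smoothed_softmax(logits, alpha=0.8):
    """Softmax over N-best hypothesis scores with smoothing factor alpha;
    alpha < 1 flattens the distribution, alpha = 1 recovers plain softmax."""
    z = alpha * (np.asarray(logits) - np.max(logits))  # shift for stability
    e = np.exp(z)
    return e / e.sum()

# Scores of three N-best hypotheses: the top one dominates under plain
# softmax, but contributes less overwhelmingly after smoothing.
sharp = smoothed_softmax([10.0, 1.0, 0.5], alpha=1.0)
smooth = smoothed_softmax([10.0, 1.0, 0.5], alpha=0.1)
```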