49th week of 2020 patent application highlights part 53 |
Patent application number | Title | Published |
20200380298 | Text-to-Visual Machine Learning Embedding Techniques - Text-to-visual machine learning embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. These techniques include use of query-based training data, which may expand the availability and types of training data usable to train a model. Generation of negative digital image samples is also described that may increase accuracy in training the model using machine learning. A loss function is also described that supports increased accuracy and computational efficiency by computing losses separately, e.g., between positive or negative sample embeddings and a text embedding. | 2020-12-03 |
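The abstract above claims the loss is computed separately for positive and negative sample embeddings against a text embedding. A minimal sketch of that separation (all function names, the margin value, and the cosine-based formulation are illustrative assumptions, not the patent's actual loss):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def separated_embedding_loss(text_emb, pos_emb, neg_emb, margin=0.2):
    """Return (positive, negative) loss terms kept separate, mirroring the
    abstract's claim of computing losses independently rather than as a
    single combined triplet term."""
    # Positive term: pull the matching image embedding toward the text.
    pos_loss = 1.0 - cosine(text_emb, pos_emb)
    # Negative term: penalize a non-matching embedding that sits closer
    # to the text than the margin allows.
    neg_loss = max(0.0, cosine(text_emb, neg_emb) - (1.0 - margin))
    return pos_loss, neg_loss
```

A perfectly aligned positive sample yields a zero positive term, while a negative sample identical to the text embedding is penalized by the margin.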
20200380299 | Recognizing People by Combining Face and Body Cues - Categorizing images includes obtaining a first plurality of images captured during a first timeframe, determining a vector representation comprising face characteristics and body characteristics of each of the people in each of the first plurality of images, and clustering the first plurality of vector representations. Categorizing images also includes obtaining a second plurality of images captured during a second timeframe, determining a vector representation comprising face characteristics and body characteristics for each person in each of the second plurality of images, and clustering the second plurality of vector representations. Finally, a representative face vector is obtained from each of the first clusters and the second clusters based on the face characteristics and not the body characteristics, and common people among the one or more people in the first plurality of images and the second plurality of images are identified based on the representative face vectors. | 2020-12-03 |
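The key move in the abstract above is that clusters are formed on combined face+body vectors but linked across timeframes using only the face portion. A toy sketch of that cross-timeframe matching step (the layout of the vectors, the distance threshold, and the brute-force pairing are assumptions for illustration):

```python
def face_part(vec, face_dim):
    """Assume the first `face_dim` entries hold the face characteristics."""
    return vec[:face_dim]

def euclid(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_clusters(first_reps, second_reps, face_dim, threshold=1.0):
    """Pair representative vectors from two timeframes whose face
    components are close, ignoring the body components entirely."""
    matches = []
    for i, a in enumerate(first_reps):
        for j, b in enumerate(second_reps):
            if euclid(face_part(a, face_dim), face_part(b, face_dim)) < threshold:
                matches.append((i, j))
    return matches
```

Note the second half of each vector (the body cues) plays no role in the match, so the same person can be linked across timeframes even after a change of clothing.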
20200380300 | SYSTEMS AND METHODS FOR ADVERSARIALLY ROBUST OBJECT DETECTION - Described herein are embodiments for an approach to improve the robustness of an object detector against adversarial attacks. Existing attacks for object detectors and the impact of individual task components on model robustness are systematically analyzed from a multi-task view of object detection. In one or more embodiments, a multi-task learning perspective of object detection is introduced and an asymmetric role of task losses is identified. One or more embodiments of an adversarial training method for robust object detection are presented to leverage the multiple sources of attacks for improving the robustness of object detection models. | 2020-12-03 |
20200380301 | TECHNIQUES FOR MACHINE LANGUAGE MODEL CREATION - Embodiments of the present disclosure present devices, methods, and computer readable medium for techniques for creating machine learning models. Application developers can select a machine learning template from a plurality of templates appropriate for the type of data used in their application. Templates can include multiple templates for classification of images, text, sound, motion, and tabular data. A graphical user interface allows for intuitive selection of training data, validation data, and integration of the trained model into the application. The techniques further display a numerical score for both the training accuracy and validation accuracy using the test data. The application provides a live mode that allows for execution of the machine learning model on a mobile device to allow for testing the model with data from one or more of the sensors (e.g., camera or microphone) on the mobile device. | 2020-12-03 |
20200380302 | DATA AUGMENTATION SYSTEM, DATA AUGMENTATION METHOD, AND INFORMATION STORAGE MEDIUM - Provided is a data augmentation system including at least one processor, the at least one processor being configured to: input, to a machine learning model configured to perform recognition, input data; identify a feature portion of the input data to serve as a basis for recognition by the machine learning model in which the input data is used as input; acquire processed data by processing at least a part of the feature portion; and perform data augmentation based on the processed data. | 2020-12-03 |
20200380303 | OBJECT-ORIENTED MACHINE LEARNING GOVERNANCE - Provided is a process including: writing, with a computing system, a first plurality of classes using object-oriented modelling of modelling methods; writing, with the computing system, a second plurality of classes using object-oriented modelling of governance; scanning, with the computing system, a set of libraries collectively containing both modelling object classes among the first plurality of classes and governance classes among the second plurality of classes to determine class definition information; using, with the computing system, at least some of the class definition information to produce object manipulation functions, wherein the object manipulation functions allow a governance system to access methods and attributes of classes among the first plurality of classes or the second plurality of classes to manipulate objects of at least some of the modelling object classes; and using at least some of the class definition information to effectuate access to the object manipulation functions. | 2020-12-03 |
20200380304 | LABELING USING INTERACTIVE ASSISTED SEGMENTATION - The subject matter concerns improving image segmentation or image annotation. A method can include receiving, through a user interface (UI), for each class label of class labels to be identified by the ML model and for a proper subset of pixels of the image data, data indicating respective pixels associated with the class label, partially training the ML model based on the received data, generating, using the partially trained ML model, pseudo-labels for each pixel of the image data for which a class label has not been received, and receiving, through the UI, a further class label that corrects a pseudo-label of the generated pseudo-labels. | 2020-12-03 |
20200380305 | TRAINING AND VERIFICATION OF LEARNING MODELS USING HIGH-DEFINITION MAP INFORMATION AND POSITIONING INFORMATION - Methods, systems, and devices for training and verification of learning models are described. A device may capture a camera frame including a road feature of a physical environment, determine a first classification and a first localization of the road feature based on positioning information of the road feature from a high-definition map and positioning information of the device from a positioning engine. The device may analyze a learning model by comparing one or more of the first classification of the road feature or the first localization of the road feature in the camera frame to one or more of a second classification or a second localization of the road feature determined by the learning model. The device may then determine a loss comparison value and adapt the learning model according to the loss comparison value. | 2020-12-03 |
20200380306 | SYSTEM AND METHOD FOR IMPLEMENTING NEURAL NETWORK MODELS ON EDGE DEVICES IN IOT NETWORKS - A method and a system for implementing neural network models on edge devices in an Internet of Things (IoT) network are disclosed. In an embodiment, the method may include receiving a neural network model trained and configured to detect objects from images, and iteratively assigning a new value to each of a plurality of parameters associated with the neural network model to generate a re-configured neural network model in each iteration. The method may further include deploying, for a current iteration, the re-configured neural network model on the edge device. The method may further include computing, for the current iteration, a trade-off value based on a detection accuracy associated with the at least one object detected in the image and resource utilization data associated with the edge device, and selecting the re-configured neural network model based on the trade-off value calculated for the current iteration. | 2020-12-03 |
20200380307 | TECHNIQUES FOR DERIVING AND/OR LEVERAGING APPLICATION-CENTRIC MODEL METRIC - Techniques for quantifying accuracy of a prediction model that has been trained on a data set parameterized by multiple features are provided. The model performs in accordance with a theoretical performance manifold over an intractable input space in connection with the features. A determination is made as to which of the features are strongly correlated with performance of the model. Based on the features determined to be strongly correlated with performance of the model, parameterized sub-models are created such that, in aggregate, they approximate the intractable input space. Prototype exemplars are generated for each of the created sub-models, with the prototype exemplars for each created sub-model being objects to which the model can be applied to result in a match with the respective sub-model. The accuracy of the model is quantified using the generated prototype exemplars. A recommendation engine is provided for when there are particular areas of interest. | 2020-12-03 |
20200380308 | TECHNIQUES FOR DERIVING AND/OR LEVERAGING APPLICATION-CENTRIC MODEL METRIC - Techniques for recommending a prediction model from among a number of different prediction models are provided. Each of these prediction models has been trained based on a respective training data set, and each performs in accordance with a respective theoretical performance manifold. An indication of a region definable in relation to the theoretical performance manifolds of the different prediction models is received as input. For each of the different prediction models, the indication of the region is linked to features parameterizing the respective performance manifold; and one or more portions of the respective performance manifold is/are identified based on the features determined by the linking, the portion(s) having a volume and a shape that collectively denote an expected performance of the respective model for the input. The expected performance of the prediction models for the input is compared. Based on the comparison, one or more of the models is/are suggested. | 2020-12-03 |
20200380309 | Method and System of Correcting Data Imbalance in a Dataset Used in Machine-Learning - A method and system for correcting imbalanced distribution of data that may signal bias in a dataset associated with training a machine-learning (ML) model includes receiving a request to perform a data imbalance correction on a dataset associated with training a machine-learning (ML) model, identifying a feature of the dataset for which data imbalance correction is to be performed, identifying a desired distribution for the identified feature, selecting a subset of the dataset that corresponds with the selected feature and the desired distribution, and using the subset to train a ML model. | 2020-12-03 |
20200380310 | Method and System of Performing Data Imbalance Detection and Correction in Training a Machine-Learning Model - A method and system for performing semi or fully automatic data imbalance detection and correction in training a machine-learning (ML) model includes receiving a request to train the ML model, receiving access to a dataset for use in training the ML model, identifying a feature of the dataset for which data imbalance detection is to be performed, examining the dataset to determine a distribution of the feature across the dataset, determining if the distribution of the feature across the dataset indicates data imbalance, upon determining that the distribution of the feature across the dataset indicates data imbalance, identifying a desired distribution for the identified feature, selecting a subset of the dataset that corresponds with the selected feature and the desired distribution, and using the subset to train the ML model. | 2020-12-03 |
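The two imbalance-correction applications above (20200380309 and 20200380310) both end the same way: select a subset of the dataset that matches a desired distribution for one feature, then train on that subset. A minimal down-sampling sketch of that selection step (the function name, the dict-based row format, and the simple truncation strategy are assumptions, not the patented method):

```python
from collections import defaultdict

def rebalance(rows, feature, desired):
    """Select a subset of `rows` whose `feature` values follow `desired`,
    a mapping from feature value to target fraction (fractions sum to 1)."""
    by_value = defaultdict(list)
    for row in rows:
        by_value[row[feature]].append(row)
    # Largest subset size achievable while honouring every target fraction.
    total = min(len(v) / desired[k] for k, v in by_value.items() if desired.get(k))
    subset = []
    for value, members in by_value.items():
        # Keep only as many rows of this value as the target fraction allows.
        subset.extend(members[: int(total * desired.get(value, 0))])
    return subset
```

For example, a dataset with six rows of value "a" and two of value "b", rebalanced toward a 50/50 split, shrinks to two rows of each.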
20200380311 | Collaborative Information Extraction - Embodiments relate to a system, program product, and method for information extraction and annotation of a data set. Neural models are utilized to automatically attach machine annotations to data elements within an unlabeled data set. The attached machine annotations are evaluated and a score is attached to the annotations. The score reflects a confidence of correctness of the annotations. A labeled data set is iteratively expanded with selectively evaluated annotations based on the attached score. The labeled data set is applied to an unexplored corpus to identify matching corpus data to populate instances of the labeled data set. | 2020-12-03 |
20200380312 | METHOD AND SYSTEM FOR DYNAMICALLY ANNOTATING AND VALIDATING ANNOTATED DATA - This disclosure relates to a method and system for dynamically annotating data or validating annotated data. The method may include receiving input data comprising a plurality of input data points. The method may further include one of: a) generating a plurality of annotations for each of the plurality of input data points using at least one of a state-label mapping model and a comparative ANN model, or b) receiving the plurality of annotations for each of the plurality of input data points from an external device or from a user, and validating the plurality of annotations using at least one of the state-label mapping model and the comparative artificial neural network (ANN) model. | 2020-12-03 |
20200380313 | MACHINE LEARNING DEVICE AND METHOD - Provided is a machine learning device and method that enable machine learning of labeling, in which a plurality of labels are attached to volume data in a single pass with excellent accuracy, using training data in which label attachment is mixed. | 2020-12-03 |
20200380314 | INTERFERENCE IDENTIFICATION DEVICE AND INTERFERENCE IDENTIFICATION METHOD - An interference identification device according to the present invention includes a feature calculation unit that calculates, using an electromagnetic wave received during a sample data analysis length, at least one type of feature of the electromagnetic wave, an interference identification unit that identifies a cluster to which the at least one type of feature belongs, among a plurality of clusters, each of the plurality of clusters having a region defined in a cluster space in which one type of feature corresponds to one dimension, and a sample data analysis length update unit that updates the sample data analysis length based on a distance, in the cluster space, between the at least one type of feature and the cluster. | 2020-12-03 |
20200380315 | COMPUTER VISION CLASSIFIER USING ITEM MICROMODELS - Methods, storage media, and systems are disclosed. Exemplary implementations may: provide an encoding derived from an image of an item; provide a set of networks; individually apply the encoding to each network in the set of networks; generate, in response to the applying, a set of classification probabilities; identify a specific network in the set of networks which generated the largest classification probability in the set of classification probabilities; and associate the item with a specific class from the set of classes. | 2020-12-03 |
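The micromodel abstract above applies one encoding to a set of per-class networks, then assigns the item to the class whose network produced the largest probability. A toy sketch of that dispatch-and-argmax step (each "network" is stood in for by a plain scoring callable; all names are illustrative):

```python
def classify_with_micromodels(encoding, models):
    """`models` maps class name -> callable returning a probability for the
    encoding. The item is associated with the highest-scoring class."""
    probs = {cls: model(encoding) for cls, model in models.items()}
    best = max(probs, key=probs.get)
    return best, probs[best]
```

In a real system each callable would be a trained item micromodel; here simple lambdas over the encoding suffice to show the selection logic.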
20200380316 | OBJECT HEIGHT ESTIMATION FROM MONOCULAR IMAGES - Systems and methods for estimating a height of an object from a monocular image are described herein. Objects are detected in the image, each object being indicated by a region of interest. The image is then cropped for each region of interest and the cropped image scaled to a predetermined size. The cropped and scaled image is then input into a convolutional neural network (CNN), the output of which is an estimated height for the object. The height may be represented by a mean of a probability distribution of possible sizes, a standard deviation, as well as a level of confidence. A location of the object may be determined based on the estimated height and region of interest. A ground truth dataset may be generated for training the CNN by simultaneously capturing a LIDAR sequence with a monocular image sequence. | 2020-12-03 |
20200380317 | Method, System and Apparatus for Gap Detection in Support Structures with Peg Regions - A method of detecting gaps on a support structure includes: obtaining, at an imaging controller, (i) a plurality of depth measurements representing the support structure according to a common frame of reference, and (ii) a plurality of label indicators each defining a label position in the common frame of reference; for each of the label indicators: classifying the label indicator as either a peg label or a shelf label, based on a portion of the depth measurements selected according to the label position and a portion of the depth measurements adjacent to the label position; generating an item search space in the common frame of reference according to the class of the label indicator; and determining, based on a subset of the depth measurements within the item search space, whether the item search space contains an item. | 2020-12-03 |
20200380318 | NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR STORING ANALYSIS PROGRAM, ANALYSIS APPARATUS, AND ANALYSIS METHOD - An analysis method implemented by a computer includes: generating a refine image by changing an incorrect inference image such that a correct label score of inference is maximized, the incorrect inference image being an input image when an incorrect label is inferred in an image recognition process; and narrowing, based on a score of a label, a predetermined region to specify an image section that causes incorrect inference, the score of the label being inferred by inputting to an inferring process an image obtained by replacing the predetermined region in the incorrect inference image with the refine image. | 2020-12-03 |
20200380319 | SYSTEM AND METHOD FOR FACILITATING GRAPHIC-RECOGNITION TRAINING OF A RECOGNITION MODEL - In certain embodiments, training of a prediction model (e.g., recognition or other prediction model) may be facilitated via a training set based on one or more logos or other graphics. In some embodiments, graphics information associated with a logo or graphic (e.g., to be recognized via a recognition model) may be obtained. Media items (e.g., images, videos, etc.) may be generated based on the graphics information, where each of the media items includes (i) content other than the logo and (ii) a given representation of the logo integrated with the other content. In some embodiments, the media items may be processed via the recognition model to generate predictions (related to recognition of the logo or graphic for the media items). The recognition model may be updated based on (i) the generated predictions and (ii) corresponding reference indications (related to recognition of the logo for the media items). | 2020-12-03 |
20200380320 | SEMANTIC IMAGE RETRIEVAL - Computer-implemented techniques for sematic image retrieval. According to one technique, digital images are classified into N number of categories based on their visual content. The classification provides a set of N-dimensional image vectors for the digital images. Each image vector contains up to N number of probability values for up to N number of corresponding categories. An N-dimensional image match vector is generated that projects an input keyword query into the vector space of the set of image vectors by computing the vector similarities between a word vector for the input query and a word vector for each of the N number of categories. Vector similarities between the image match vectors and the set of image vectors can be computed to determine images semantically relevant to the input query. | 2020-12-03 |
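The retrieval abstract above builds an N-dimensional match vector from word-vector similarities between the query and the N category names, then ranks images by similarity between that match vector and each image's category-probability vector. A compact sketch under those assumptions (the toy word vectors and tie-breaking by index are illustrative):

```python
import math

def cos(a, b):
    """Cosine similarity; assumes neither vector is all-zero."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def rank_images(query_wv, category_wvs, image_vecs):
    """Return image indices ranked by semantic relevance to the query."""
    # N-dimensional match vector: query similarity to each category name.
    match = [cos(query_wv, cwv) for cwv in category_wvs]
    # Score each image's N-dimensional probability vector against the match.
    scores = [(cos(match, iv), idx) for idx, iv in enumerate(image_vecs)]
    return [idx for _, idx in sorted(scores, reverse=True)]
```

With a query word vector aligned to category 0, an image whose classifier output concentrates on category 0 ranks first.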
20200380321 | PRINTING APPARATUS AND PRINTING CONTROL METHOD - The invention provides a printing apparatus, a printing control method, and a processing apparatus that suppress the occurrence of rework after the start of a printing operation and suppress output of a printing result not intended by the user due to a mismatch between a state of expansion or contraction and the print settings. To this end, printing is performed in print processing by determining whether or not an operable state is established, i.e., a state in which a housing is expanded to a second position. | 2020-12-03 |
20200380322 | AUTHENTICATION METHOD AND SYSTEM USING A TWO-DIMENSIONAL BARCODE SEQUENCE - An authentication method and an authentication system are provided. The authentication system includes a display device and an authentication device. In the authentication method, at first, two-dimensional (2D) barcodes and a time sequence of the two-dimensional barcodes are provided in accordance with an authentication code. Then, the two-dimensional barcodes are displayed in accordance with the time sequence by using the display device. Thereafter, the two-dimensional barcodes displayed on the display device are captured by the authentication device. Then, a decoding process is performed in accordance with the two-dimensional barcodes and the time sequence to obtain the authentication code. Thereafter, an authentication process is performed by using the authentication code. | 2020-12-03 |
20200380323 | DUAL-SIDED PRODUCT PLACEMENT AND INFORMATION STRIPS - A single dual-sided product placement and information strip includes a first side with product information for consumers and a second side with product placement information for placing products on a display shelf. The dual-sided product placement and information strip enables information necessary for consumers to be printed on the first (consumer-facing) side and information that assists stockers to place products on shelves to be printed on the second (opposing) side. Also disclosed are systems and methods for formatting and printing the dual-sided product placement and information strips. The single dual-sided product placement and information strip may be printed alone or in a single sheet along with one or more other dual-sided product placement and information strips. | 2020-12-03 |
20200380324 | SYSTEM AND APPARATUS FOR ENCRYPTED DATA COLLECTION USING RFID CARDS - A secure smart card is described. The smart card can include a processor, a memory and a transceiver. The smart card can communicate with various terminals and store a digital signature and other information on the card. Another terminal can validate the information stored on the smart card using the digital signature. In certain embodiments, the terminal can also validate the information by using a blockchain. The advanced design of the smart card obviates the need for a network connection. | 2020-12-03 |
20200380325 | ACTIVE AND PASSIVE ASSET MONITORING SYSTEM - Methods and systems for providing an asset communication system are described. One asset communication system includes an active communication subsystem including a first radio transceiver, a passive communication subsystem including a second radio transceiver configured to transmit and receive data using radio waves for communication and power, and a sensory subsystem. The sensory subsystem can include one or more sensors, for example, an ambient environment sensor. The asset communication system further includes a synchronous trigger controller for activating the active communication subsystem according to a schedule, and an asynchronous trigger controller for activating the active communication subsystem based on a signal received from a sensor or the second radio transceiver. | 2020-12-03 |
20200380326 | INTELLIGENT TRACKING SYSTEM AND METHODS AND SYSTEMS THEREFOR - According to some embodiments of the present disclosure, an intelligent tracking system is disclosed. The intelligent tracking system includes one or more passive tracking devices, an exciter, and a tracker. Each passive tracking device includes one or more transceivers and is energized by an electromagnetic frequency. In response to being energized each passive tracking device transmits a short message. The exciter emits the electromagnetic frequency. The tracker receives short messages from the one or more passive tracking devices and confirms the presence of the one or more passive tracking devices in a vicinity of the tracker based on the received messages. | 2020-12-03 |
20200380327 | RFID TAG - An RFID tag is provided that includes an RFIC chip having a first connection terminal and a second connection terminal, a first electrode electrically connected to the first connection terminal of the RFIC chip, a capacitance element connected in series to the first electrode and the RFIC chip, and short-circuit parts connecting the first electrode and a ground at an intermediate position of an electrical length of the first electrode. Moreover, the electrical length of the first electrode is a half of a wavelength of a communication frequency of the RFIC chip, the first connection terminal of the RFIC chip is connected to the first electrode at a position within one third of the electrical length from an end portion of the first electrode, and the second connection terminal of the RFIC chip is connected to the ground. | 2020-12-03 |
20200380328 | LOW ENERGY TRANSMITTER - A low energy transmitter is provided. The transmitter includes an antenna circuit wherein the antenna circuit has an antenna positive node interface (Vop) and an antenna negative node interface (Von); a reference voltage source that supplies a reference voltage to the antenna circuit; and a common mode feedback (CMFB) circuit coupled to the antenna circuit that receives from the antenna circuit inputs from the Vop and the Von and supplies at least one signal to the antenna circuit. | 2020-12-03 |
20200380329 | RADIO FREQUENCY IDENTIFICATION DEVICE - A radio frequency identification device (RFID) includes an antenna, a first RFID chip and a second RFID chip. The antenna includes a first antenna pattern, a second antenna pattern and a shared emitting part, wherein the first antenna pattern and the second antenna pattern are connected to the shared emitting part respectively. The first RFID chip is electronically connected to the first antenna pattern and is adapted to transmit a first data using the first antenna pattern and the shared emitting part. The second RFID chip is electronically connected to the second antenna pattern and is adapted to transmit a second data using the second antenna pattern and the shared emitting part. | 2020-12-03 |
20200380330 | A MEMORY DEVICE WITH EMBEDDED SIM CARD - Embodiments of the present disclosure are directed towards a memory device removably couplable with a mobile device. In some embodiments, the memory device may include a PCB insertable in the mobile device. The PCB may include a first chip (a micro SD device); a second chip (a SIM card); a first contact electrically coupled with the first chip, to provide a first communication interface between the first chip and the mobile device; and a second contact electrically coupled with the second chip, to provide a second communication interface between the second chip and the mobile device. The first and second communication interfaces may provide for respective communications between the micro SD device and the mobile device, and between the SIM card and the mobile device, at the same time, when the PCB is removably coupled with the mobile device. | 2020-12-03 |
20200380331 | SYSTEMS AND METHODS FOR A RFID ENABLED METAL LICENSE PLATE - In the embodiments described herein, an RFID-enabled license plate is constructed by using the license plate, or a retro-reflective layer formed thereon, as part of the resonator configured to transmit signals generated by an RFID chip integrated with the license plate. Such an RFID-enabled license plate can include a metal license plate with a slot formed in the metal license plate, and an RFID tag module positioned in the slot. The RFID tag module can include a chip and a loop, and the loop can be coupled with the metal license plate, e.g., via inductive or conductive coupling. In this manner, the metal license plate can be configured to act as a resonator providing increased performance. | 2020-12-03 |
20200380332 | ANTENNA DEVICE AND IC CARD HAVING THE SAME - Disclosed herein is an antenna device that includes a substrate, a conductor pattern formed on the substrate, and a magnetic sheet formed on the substrate. The conductor pattern includes a spiral or loop-shaped antenna coil and a spiral or loop-shaped coupling coil connected to the antenna coil and having a diameter smaller than that of the antenna coil. The antenna coil overlaps the magnetic sheet. The magnetic sheet has a first opening at a position overlapping the coupling coil such that an inner diameter area of the coupling coil completely overlaps the first opening in a plan view. | 2020-12-03 |
20200380333 | SYSTEM AND METHOD FOR BODY SCANNING AND AVATAR CREATION - A method and apparatus are disclosed for scanning a body. The system comprises a processor and a range camera capable of capturing at least a first set of depth images of the body rotated to 0 degrees and at least a second set of depth images of the body rotated to x degrees, wherein x is greater than 0 degrees and less than 360 degrees. A first set of computer instructions executable on the processor is capable of calculating a first set of three dimensional points from the first set of depth images and a second set of three dimensional points from the second set of depth images. A second set of computer instructions executable on the processor is capable of rotating and translating the first and second set of three dimensional points into a final set of three dimensional points. A third set of computer instructions executable on the processor is capable of creating a three dimensional mesh from the final set of three dimensional points. | 2020-12-03 |
20200380334 | CONVOLUTIONAL NEURAL NETWORK METHOD AND SYSTEM - A convolutional neural network (CNN) method includes determining a temporary buffer layer, which is located between a first layer and a final layer of a CNN system; performing convolutional operations from the first layer to the determined temporary buffer layer of the CNN system in a first stage to generate a feature map line according to partial input data of layers before the temporary buffer layer; and performing convolutional operations from the temporary buffer layer to the final layer of the CNN system in a second stage to generate a feature map. | 2020-12-03 |
20200380335 | ANOMALY DETECTION IN BUSINESS INTELLIGENCE TIME SERIES - A method of identifying anomalous traffic in a sequence of commercial transaction data includes preprocessing the commercial transaction data into a sequential time series of commercial transaction data, and providing the time series of commercial transaction data to a recurrent neural network. The recurrent neural network evaluates the provided time series of commercial transaction data to generate and output a predicted next element in the time series of commercial transaction data, which is compared with an observed actual next element in the time series of commercial transaction data. The observed next element in the time series of commercial transaction data is determined to be anomalous if it is sufficiently different from the predicted next element in the time series of commercial transaction data. | 2020-12-03 |
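The anomaly-detection abstract above reduces to a simple rule: predict the next element of the time series, compare it to the observed element, and flag the observation if the difference is large enough. A sketch of just that detection rule, with the recurrent network stood in for by any `predict_next` callable (the threshold-based "sufficiently different" test is an assumption; the patent does not fix a criterion):

```python
def detect_anomalies(series, predict_next, threshold):
    """Flag each element (after the first) whose observed value differs
    from the model's one-step forecast by more than `threshold`."""
    flags = []
    for t in range(1, len(series)):
        predicted = predict_next(series[:t])   # forecast for step t
        flags.append(abs(series[t] - predicted) > threshold)
    return flags
```

With a naive persistence predictor (forecast = last value), a sudden spike and the return to baseline are both flagged.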
20200380336 | Real-Time Predictive Maintenance of Hardware Components Using a Stacked Deep Learning Architecture on Time-Variant Parameters Combined with a Dense Neural Network Supplied with Exogeneous Static Outputs - A system, method, and computer-readable medium are provided for a hardware component failure prediction system that can incorporate a time-series dimension as an input while also addressing issues related to a class imbalance problem associated with failure data. Embodiments utilize a double-stacked long short-term memory (DS-LSTM) deep neural network with a first layer of the DS-LSTM passing hidden cell states learned from a sequence of multi-dimensional parameter time steps to a second layer of the DS-LSTM that is configured to capture a next sequential prediction output. Output from the second layer is combined with a set of categorical variables to an input layer of a fully-connected dense neural network layer. Information generated by the dense neural network provides prediction of whether a hardware component will fail in a given future time interval. | 2020-12-03 |
20200380337 | USER TERMINAL HARDWARE DETECTION SYSTEM AND METHOD - A user terminal hardware detection system includes a computing device and an oscilloscope communicatively coupled to the computing device. The oscilloscope obtains a digital signal waveform diagram of a user terminal and sends the digital signal waveform diagram to the computing device. The computing device performs feature recognition on the digital signal waveform diagram using a fault analysis model to identify feature information of the digital signal waveform diagram. The computing device compares the identified feature information to feature information of a faulty hardware module and a fault type of the faulty hardware module in a fault type database. The computing device outputs the faulty hardware module and the fault type of the faulty hardware module represented by the feature information of the digital signal waveform diagram according to a comparison result. | 2020-12-03 |
20200380338 | INFORMATION PROCESSING SYSTEM, INFERENCE METHOD, ATTACK DETECTION METHOD, INFERENCE EXECUTION PROGRAM AND ATTACK DETECTION PROGRAM - To provide an information processing system that is robust against attacks by Adversarial Examples. A neural network model | 2020-12-03 |
20200380339 | INTEGRATED NEURAL NETWORKS FOR DETERMINING PROTOCOL CONFIGURATIONS - Methods and systems disclosed herein relate generally to systems and methods for integrating neural networks, which are of different types and process different types of data. The different types of data may include static data and dynamic data, and the integrated neural networks can include feedforward and recurrent neural networks. Results of the integrated neural networks can be used to configure or modify protocol configurations. | 2020-12-03 |
20200380340 | Byzantine Tolerant Gradient Descent For Distributed Machine Learning With Adversaries - The present application concerns a computer-implemented method for training a machine learning model in a distributed fashion, using Stochastic Gradient Descent, SGD, wherein the method is performed by a first computer in a distributed computing environment and comprises performing a learning round, comprising broadcasting a parameter vector to a plurality of worker computers in the distributed computing environment, receiving an estimate update vector (gradient) from all or a subset of the worker computers, wherein each received estimate vector is either an estimate of a gradient of a cost function, or an erroneous vector, and determining an updated parameter vector for use in a next learning round based only on a subset of the received estimate vectors. The method aggregates the gradients while guaranteeing resilience to up to half of the workers being compromised (malfunctioning, erroneous or modified by attackers). | 2020-12-03 |
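The abstract does not name the aggregation rule used to achieve this resilience; a coordinate-wise median is one classic robust aggregator with the stated tolerance and serves as an illustrative sketch:

```python
import statistics

def robust_aggregate(gradients):
    """Aggregate worker gradient estimates coordinate-wise by median,
    tolerating a minority of arbitrary (erroneous or adversarial) vectors.
    Illustrative only; the patent's exact rule is not given in the abstract."""
    dims = len(gradients[0])
    return [statistics.median(g[d] for g in gradients) for d in range(dims)]

honest = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]
byzantine = [[100.0, -100.0]]  # one compromised worker
aggregated = robust_aggregate(honest + byzantine)
print(aggregated)  # stays near the honest gradients
```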
20200380341 | Fabric Vectors for Deep Learning Acceleration - Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Instructions executed by the compute element include operand specifiers, some specifying a data structure register storing a data structure descriptor describing an operand as a fabric vector or a memory vector. The data structure descriptor further describes various attributes of the fabric vector: length, microthreading eligibility, number of data elements to receive, transmit, and/or process in parallel, virtual channel and task identification information, whether to terminate upon receiving a control wavelet, and whether to mark an outgoing wavelet as a control wavelet. | 2020-12-03 |
20200380342 | NEURAL NETWORK WIRING DISCOVERY - Neural wirings may be discovered concurrently with training a neural network. Respective weights may be assigned to each edge connecting nodes of a neural graph, wherein the neural graph represents a neural network. A subset of edges may be designated based on the respective weights and data is passed through the neural graph in a forward training pass using the designated subset of edges. A loss function may be determined based on the results of the forward training pass and parameters of the neural network and the respective weights assigned to each edge may be updated in a backwards training pass based on the loss function. The steps of designating the subset of edges, passing data through the neural graph, determining the loss function, and updating parameters of the neural network and the respective weights may be repeated to train the neural network. | 2020-12-03 |
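The "designate a subset of edges based on the respective weights" step can be sketched as a top-k selection by weight magnitude (a hypothetical rule; the abstract does not say how the subset is chosen):

```python
def select_edges(edge_weights, k):
    """Return the k edges with the largest weight magnitude as the
    designated active subset for the forward training pass."""
    ranked = sorted(edge_weights, key=lambda edge: abs(edge_weights[edge]), reverse=True)
    return set(ranked[:k])

weights = {("a", "b"): 0.9, ("a", "c"): -0.7, ("b", "c"): 0.1}
print(select_edges(weights, 2))  # the two strongest connections
```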
20200380343 | NEUROMORPHIC COMPUTING DEVICE UTILIZING A BIOLOGICAL NEURAL LATTICE - Techniques are disclosed for fabricating and using a neuromorphic computing device including biological neurons. For example, a method for fabricating a neuromorphic computing device includes forming a channel in a first substrate and forming at least one sensor in a second substrate. At least a portion of the channel in the first substrate is seeded with a biological neuron growth material. The second substrate is attached to the first substrate such that the at least one sensor is proximate to the biological neuron growth material and growth of the seeded biological neuron growth material is stimulated to grow a neuron in the at least a portion of the channel. | 2020-12-03 |
20200380344 | NEURON SMEARING FOR ACCELERATED DEEP LEARNING - Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has memory. At least a first single neuron is implemented using resources of a plurality of the array of processing elements. At least a portion of a second neuron is implemented using resources of one or more of the plurality of processing elements. In some usage scenarios, the foregoing neuron implementation enables greater performance by enabling a single neuron to use the computational resources of multiple processing elements and/or computational load balancing across the processing elements while maintaining locality of incoming activations for the processing elements. | 2020-12-03 |
20200380345 | NEURAL NETWORK CHIP, METHOD OF USING NEURAL NETWORK CHIP TO IMPLEMENT DE-CONVOLUTION OPERATION, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM - A neural network chip and a related product are provided. The neural network chip ( | 2020-12-03 |
20200380346 | OPTIMIZATION APPARATUS AND OPTIMIZATION METHOD - An optimization method includes holding combining destination information indicating a combining destination neuron to be combined with a target neuron which is one of a plurality of neurons corresponding to a plurality of spins of an Ising model obtained by converting an optimization problem, the target neuron being different in each of a plurality of neuron circuits; holding a weighting coefficient indicating a strength of combining between the target neuron and the combining destination neuron, and outputting the weighting coefficient corresponding to the combining destination information; permitting an update of a value of the target neuron by using the output weighting coefficient and the value of the update target neuron, and outputting a determination result indicating whether or not the value of the target neuron is permitted to be updated; and determining the update target neuron based on the plurality of determination results respectively output, and outputting the update target information. | 2020-12-03 |
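The "permitting an update" decision can be illustrated with a generic Metropolis acceptance step for an Ising spin; the patent's exact criterion is not specified in the abstract, so `propose_flip` and its arguments are assumptions:

```python
import math
import random

def propose_flip(spin, local_field, temperature, rng):
    """Decide whether flipping a +/-1 spin (neuron value) is permitted:
    always accept energy-lowering flips, and occasionally accept uphill
    flips with Boltzmann probability (simulated annealing)."""
    delta_e = 2 * spin * local_field  # energy change caused by the flip
    return delta_e <= 0 or rng.random() < math.exp(-delta_e / temperature)

rng = random.Random(0)
print(propose_flip(+1, -2.0, temperature=1.0, rng=rng))    # lowers energy: permitted
print(propose_flip(+1, +5.0, temperature=0.001, rng=rng))  # big uphill move at low T: rejected
```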
20200380347 | NEURAL NETWORK, METHOD OF CONTROL OF NEURAL NETWORK, AND PROCESSOR OF NEURAL NETWORK - A neural network not requiring massive changes in configuration when changing the number of stages (number of levels) of the neural network. This neural network is provided with at least one neuron core 10 performing an analog multiply-accumulate operation and a weight-value-supply control unit 30 supplying the weight value to the neuron core 10. This neural network is subjected to control processing by a control-processor unit 40 controlling the supply of the weight value from said weight-value-supply control unit 30 in synchronization with the timing of the analog multiply-accumulate operation of the neuron core 10, and processing the data output from the neuron core 10 at every analog multiply-accumulate operation performed by said neuron core as serial data and/or parallel data. | 2020-12-03 |
20200380348 | Noise and Signal Management for RPU Array - Advanced noise and signal management techniques for RPU arrays during ANN training are provided. In one aspect of the invention, a method for ANN training includes: providing an array of RPU devices with pre-normalizers and post-normalizers; computing and pre-normalizing a mean and standard deviation of all elements of an input vector x to the array that belong to the set group of each of the pre-normalizers; and computing and post-normalizing the mean μ and the standard deviation σ of all elements of an output vector y that belong to the set group of each of the post-normalizers. | 2020-12-03 |
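The per-group normalization that the pre- and post-normalizers perform (zero mean, unit standard deviation within each set group) can be sketched as:

```python
import statistics

def normalize_groups(vector, groups):
    """Normalize each set group of elements to zero mean and unit
    standard deviation, leaving elements outside any group untouched."""
    out = list(vector)
    for indices in groups:
        values = [vector[i] for i in indices]
        mu = statistics.fmean(values)
        sigma = statistics.pstdev(values) or 1.0  # guard against constant groups
        for i in indices:
            out[i] = (vector[i] - mu) / sigma
    return out

x = [1.0, 3.0, 10.0, 14.0]
print(normalize_groups(x, [[0, 1], [2, 3]]))  # each group becomes [-1.0, 1.0]
```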
20200380349 | Auto Weight Scaling for RPUs - Techniques for auto weight scaling a bounded weight range of RPU devices with the size of the array during ANN training are provided. In one aspect, a method of ANN training includes: initializing weight values w | 2020-12-03 |
20200380350 | ANALOG NEURAL NETWORK SYSTEMS - The present disclosure relates to a neural network system comprising: a data input configured to receive an input data signal and analog neural network circuitry having an input coupled with the data input. The analog neural network circuitry is operative to apply a weight to a signal received at its input to generate a weighted output signal. The neural network system further comprises compensation circuitry configured to apply a compensating term to the input data signal to compensate for error in the analog neural network circuitry. | 2020-12-03 |
20200380351 | Automated Scaling Of Resources Based On Long Short-Term Memory Recurrent Neural Networks And Attention Mechanisms - Some embodiments provide a non-transitory machine-readable medium that stores a program executable by at least one processing unit of a computing device. The program monitors utilization of a set of resources by a resource consumer operating on the computing device. Based on the monitored utilization of the set of resources, the program further generates a model that includes a plurality of long short-term memory recurrent neural network (LSTM-RNN) layers and a set of attention mechanism layers. The model is configured to predict future utilization of the set of resources. Based on the monitored utilization of the set of resources and the model, the program also determines a set of predicted values representing utilization of the set of resources by the resource consumer operating on the computing device. | 2020-12-03 |
20200380352 | LINEAR MODELING OF QUALITY ASSURANCE VARIABLES - A neural network system for generating a quality assurance alert is provided. A computing device analyzes a quality assurance profile. A computing device arranges data in neurons of, at least, a first layer of a neural network. A computing device generates a threshold level of prediction of quality assurance based, at least in part, on output data from a neural network. A computing device applies output data from a neural network to a regression profile to determine a probability that a quality assurance issue will occur. A computing device generates a message that includes a quality assurance evaluation based, at least, on the determined probability that the quality assurance issue will occur. | 2020-12-03 |
20200380353 | SYSTEM AND METHOD FOR MACHINE LEARNING ARCHITECTURE WITH REWARD METRIC ACROSS TIME SEGMENTS - Systems and methods are provided for training an automated agent. The automated agent maintains a reinforcement learning neural network and generates, according to outputs of the reinforcement learning neural network, signals for communicating resource task requests. First and second task data are received. The task data are processed to compute a first performance metric reflective of performance of the automated agent relative to other entities in a first time interval, and a second performance metric reflective of performance of the automated agent relative to other entities in a second time interval. A reward for the reinforcement learning neural network that reflects a difference between the second performance metric and the first performance metric is computed and provided to the reinforcement learning neural network to train the automated agent. | 2020-12-03 |
20200380354 | DETECTION OF OPERATION TENDENCY BASED ON ANOMALY DETECTION - A computer-implemented method for detecting an operation tendency is disclosed. The method includes preparing a general model for generating a general anomaly score. The method also includes preparing a specific model for generating a specific anomaly score, trained with a set of a plurality of operation data related to operation by a target operator. The method further includes receiving input operation data. The method also includes calculating a detection score related to the operation tendency by using the general anomaly score and the specific anomaly score generated for the input operation data. Further, the method includes outputting a result based on the detection score. | 2020-12-03 |
20200380355 | CLASSIFICATION APPARATUS AND METHOD FOR OPTIMIZING THROUGHPUT OF CLASSIFICATION MODELS - A classification apparatus is configured to perform a classification using a neural network with at least one hidden layer and an output layer, wherein the classification apparatus comprises a coarse training unit configured to train the neural network on a subset of neurons of a last hidden layer and a set of neurons of the output layer; and a fine training unit configured to train the neural network on a set of neurons of the last hidden layer and a subset of neurons of the output layer. By executing the classification apparatus, training of a classification model can be improved by reducing the computational burden of the classification, speeding up the training time of a classification model, and speeding up the inference time during application of the classification model. | 2020-12-03 |
20200380356 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - There is provided an information processing apparatus, an information processing method, and a program for enabling quantization with high accuracy. Quantization is performed assuming that a distribution of values calculated by a machine learning operation is based on a predetermined probability distribution. The operation is an operation in deep learning, and the quantization is performed on the basis of a notion that a distribution of gradients calculated by the operation based on the deep learning is based on the predetermined probability distribution. The quantization is performed when a value obtained by learning in one apparatus is supplied to another apparatus in distributed learning in which machine learning is performed by a plurality of apparatuses in a distributed manner. The present technology can be applied to an apparatus that performs machine learning such as deep learning in a distributed manner. | 2020-12-03 |
20200380357 | INCREMENTAL NETWORK QUANTIZATION - Methods and apparatus relating to techniques for incremental network quantization. In an example, an apparatus comprises logic, at least partially comprising hardware logic to partition a plurality of model weights in a deep neural network (DNN) model into a first group of weights and a second group of weights, convert each weight in the first group of weights to a power of two, and repeatedly retrain the DNN model while converting a subset of weights in the second group to a power of two or zero. Other embodiments are also disclosed and claimed. | 2020-12-03 |
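The core conversion step (snapping a weight to a power of two, or to zero when it is too small) can be sketched as follows; the zero threshold is an assumption, not a value from the patent:

```python
import math

def quantize_power_of_two(weight, zero_threshold=1e-3):
    """Snap a weight to the nearest signed power of two (nearest in log
    space), or to zero when its magnitude falls below an assumed threshold."""
    if abs(weight) < zero_threshold:
        return 0.0
    exponent = round(math.log2(abs(weight)))
    return math.copysign(2.0 ** exponent, weight)

print(quantize_power_of_two(0.3))    # snaps to 0.25
print(quantize_power_of_two(-1.7))   # snaps to -2.0
print(quantize_power_of_two(1e-5))   # below threshold: 0.0
```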
20200380358 | APPARATUS FOR DEEP REPRESENTATION LEARNING AND METHOD THEREOF - An apparatus for providing similar contents, using a neural network, includes a memory storing instructions, and a processor configured to execute the instructions to obtain a plurality of similarity values between a user query and a plurality of images, using a similarity neural network, obtain a rank of each of the obtained plurality of similarity values, and provide, as a most similar image to the user query, at least one among the plurality of images that has a respective one among the plurality of similarity values that corresponds to a highest rank among the obtained rank of each of the plurality of similarity values. The similarity neural network is trained with a divergence neural network for outputting a divergence between a first distribution of first similarity values for positive pairs, among the plurality of similarity values, and a second distribution of second similarity values for negative pairs, among the plurality of similarity values. | 2020-12-03 |
20200380359 | DEVICE AND METHOD OF DIGITAL IMAGE CONTENT RECOGNITION, TRAINING OF THE SAME - A device for and computer implemented method of image content recognition and of training a neural network for image content recognition. The method comprising collecting a first set of digital images from a database, the first set of digital images is sampled from digital images assigned to a many shot class; creating a first training set comprising the collected first set of digital images; training a first artificial neural network comprising a first feature extractor and a first classifier for classifying digital images using the first training set; collecting first parameters of the trained first feature extractor, collecting second parameters of the trained classifier, determining third parameters of a second feature extractor of a second artificial neural network depending on the first parameters, determining fourth parameters of a second classifier for classifying digital images of the second artificial neural network. | 2020-12-03 |
20200380360 | METHOD AND APPARATUS WITH NEURAL NETWORK PARAMETER QUANTIZATION - A processor-implemented method includes determining a first quantization value by performing log quantization on a parameter from one of input activation values and weight values in a layer of a neural network, comparing a threshold value with an error between a first dequantization value obtained by dequantization of the first quantization value and the parameter, determining a second quantization value by performing log quantization on the error in response to the error being greater than the threshold value as a result of the comparing; and quantizing the parameter to a value in which the first quantization value and the second quantization value are grouped. | 2020-12-03 |
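The two-step scheme (log-quantize the parameter, then log-quantize the residual error if it exceeds the threshold) can be sketched as follows; `pow2_quantize` is a hypothetical helper:

```python
import math

def pow2_quantize(value):
    """Log-quantize: snap a nonzero value to the nearest power of two (in log space)."""
    if value == 0:
        return 0.0
    return math.copysign(2.0 ** round(math.log2(abs(value))), value)

def two_level_log_quantize(param, threshold):
    """Return the first quantization value, plus a second quantization of
    the dequantization error when that error exceeds the threshold."""
    q1 = pow2_quantize(param)
    error = param - q1
    if abs(error) > threshold:
        return (q1, pow2_quantize(error))
    return (q1,)

print(two_level_log_quantize(0.30, threshold=0.01))  # error ~0.05 is also quantized
print(two_level_log_quantize(0.25, threshold=0.01))  # exact power of two: one value
```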
20200380361 | DIRECTED AND INTERCONNECTED GRID DATAFLOW ARCHITECTURE - A remote artificial intelligence (AI) acceleration system is provided. The system includes a plurality of application servers, wherein each of the plurality of application servers is configured to execute AI applications over an AI software framework; at least one artificial intelligence accelerator (AIA) appliance server configured to execute AI processing tasks in response to requests from the AI applications; and at least one switch configured to allow connectivity between the plurality of application servers and the at least one AIA appliance server, wherein a plurality of network-attached artificial intelligence accelerator (NA-AIA) engines is connected to the AIA switch, and each of the plurality of NA-AIA engines is configured to execute at least one AI processing task. | 2020-12-03 |
20200380362 | METHODS FOR TRAINING MACHINE LEARNING MODEL FOR COMPUTATION LITHOGRAPHY - Methods of training machine learning models related to a patterning process, including a method for training a machine learning model configured to predict a mask pattern. The method including obtaining (i) a process model of a patterning process configured to predict a pattern on a substrate, wherein the process model comprises one or more trained machine learning models, and (ii) a target pattern, and training the machine learning model configured to predict a mask pattern based on the process model and a cost function that determines a difference between the predicted pattern and the target pattern. | 2020-12-03 |
20200380363 | METHOD AND DEVICE FOR CONTROLLING DATA INPUT AND OUTPUT OF FULLY CONNECTED NETWORK - The disclosure relates to an artificial intelligence (AI) system that simulates functions such as cognition and judgment of the human brain by utilizing machine learning algorithms such as deep learning and its applications. In particular, the disclosure provides a method of controlling data input and output of a fully connected network according to an artificial intelligence system and its applications, the method including receiving, from a learning circuit, an edge sequence representing a connection relationship between nodes included in a current layer of the fully connected network, generating a compressed edge sequence that compresses consecutive invalid bits among bit strings constituting the edge sequence into one bit and a validity determination sequence determining valid and invalid bits among the bit strings constituting the compressed edge sequence, writing the compressed edge sequence and the validity determination sequence to the memory, and sequentially reading the compressed edge sequences from the memory based on the validity determination sequence such that the valid bits are sequentially output to the learning circuit. | 2020-12-03 |
20200380364 | Adversarial Probabilistic Regularization - A method of training a supervised neural network to solve an optimization problem that involves minimizing an error function ƒ(θ) where θ is a vector of independent and identically distributed (i.i.d.) samples of a target distribution £ | 2020-12-03 |
20200380365 | LEARNING APPARATUS, METHOD, AND PROGRAM - There is provided a learning apparatus, a method, and a program that can prevent overfitting and improve generalization performance while suppressing deterioration of convergence performance in learning. A learning apparatus includes a learning unit that performs learning of a neural network composed of a plurality of layers and including a plurality of skip connections in which an output from a first layer to a second layer which is a layer next to the first layer is branched to skip the second layer and is connected to an input of a third layer located downstream of the second layer, a connection invalidating unit that invalidates at least one of the skip connections in a case where the learning is performed, and a learning control unit that changes the skip connection to be invalidated by the connection invalidating unit and causes the learning unit to perform the learning. | 2020-12-03 |
20200380366 | ENHANCED GENERATIVE ADVERSARIAL NETWORK AND TARGET SAMPLE RECOGNITION METHOD - The present disclosure relates to an enhanced generative adversarial network and a target sample recognition method. The enhanced generative adversarial network in the present disclosure includes at least one enhanced generator and at least one enhanced discriminator, where the enhanced generator obtains generated data by processing initial data, and provides the generated data to the enhanced discriminator; the enhanced discriminator processes the generated data and feeds back a classification result to the enhanced generator; the enhanced discriminator includes: a convolution layer, a basic capsule layer, a convolution capsule layer, and a classification capsule layer, and the convolution layer, the basic capsule layer, the convolution capsule layer, and the classification capsule layer are sequentially connected to each other. | 2020-12-03 |
20200380367 | DEEP LEARNING MODEL INSIGHTS USING PROVENANCE DATA - A method, computer system, and a computer program product for generating deep learning model insights using provenance data is provided. Embodiments of the present invention may include collecting provenance data. Embodiments of the present invention may include generating model insights based on the collected provenance data. Embodiments of the present invention may include generating a training model based on the generated model insights. Embodiments of the present invention may include reducing the training model size. Embodiments of the present invention may include creating a final trained model. | 2020-12-03 |
20200380368 | DATA MODELLING SYSTEM, METHOD AND APPARATUS - In a method of modelling data, using a neural network, the neural network is trained using data comprising a plurality of input variables and a plurality of output variables, wherein the method comprises constraining the neural network so that a monotonic relationship exists between one or more selected input variables and one or more related output variables. | 2020-12-03 |
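The abstract does not say which constraint is used; one common way to guarantee a monotonic input-output relationship is to clamp the relevant weights to be non-negative and use monotone activations, as in this sketch:

```python
import math

def monotone_mlp(x, hidden_weights, output_weights):
    """One-hidden-layer network that is non-decreasing in x: weights on
    the monotone path are clamped to >= 0 and tanh is monotone."""
    hidden = [math.tanh(max(w, 0.0) * x) for w in hidden_weights]
    return sum(max(w, 0.0) * h for w, h in zip(output_weights, hidden))

w1, w2 = [0.5, 1.2, -0.3], [0.8, 0.4, 0.9]
outputs = [monotone_mlp(x, w1, w2) for x in (0.0, 0.5, 1.0, 2.0)]
print(outputs == sorted(outputs))  # output never decreases as x grows
```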
20200380369 | TRAINING A NEURAL NETWORK USING SELECTIVE WEIGHT UPDATES - Training one or more neural networks using selective updates to weight information of the one or more neural networks. In at least one embodiment, one or more neural networks are trained by at least updating one or more portions of weight information of the one or more neural networks based, at least in part, on metadata that indicate how recently the one or more portions of weight information have been updated. | 2020-12-03 |
20200380370 | FLOATING-POINT UNIT STOCHASTIC ROUNDING FOR ACCELERATED DEEP LEARNING - Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Each compute element has a respective floating-point unit enabled to perform stochastic rounding, thus in some circumstances enabling reducing systematic bias in long dependency chains of floating-point computations. The long dependency chains of floating-point computations are performed, e.g., to train a neural network or to perform inference with respect to a trained neural network. | 2020-12-03 |
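Stochastic rounding rounds down or up with probability proportional to proximity, so the rounding error is zero in expectation; that is the property used above to reduce systematic bias in long dependency chains. A scalar sketch:

```python
import math
import random

def stochastic_round(x, rng=random):
    """Round x to the integer below or above, choosing 'up' with
    probability equal to the fractional part of x."""
    lower = math.floor(x)
    fraction = x - lower
    return lower + (1 if rng.random() < fraction else 0)

random.seed(0)
samples = [stochastic_round(2.25) for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to 2.25 on average, unlike round()
```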
20200380371 | FLEXIBLE, LIGHTWEIGHT QUANTIZED DEEP NEURAL NETWORKS - To improve the throughput and energy efficiency of Deep Neural Networks (DNNs) on customized hardware, lightweight neural networks constrain the weights of DNNs to be a limited combination of powers of 2. In such networks, the multiply-accumulate operation can be replaced with a single shift operation, or two shifts and an add operation. To provide even more design flexibility, the k for each convolutional filter can be optimally chosen instead of being fixed for every filter. The present invention formulates the selection of k to be differentiable and describes model training for determining k-based weights on a per-filter basis. The present invention can achieve higher speeds as compared to lightweight NNs with only minimal accuracy degradation, while also achieving higher computational energy efficiency for ASIC implementation. | 2020-12-03 |
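With power-of-two weights, the multiply-accumulate collapses into shifts, as the abstract notes. A sketch for integer activations; the two-shift form covers weights that are a sum of two powers of two:

```python
def shift_multiply(activation, k1, k2=None):
    """Multiply an integer activation by 2**k1 (one shift), or by
    2**k1 + 2**k2 (two shifts and an add). Negative k shifts right."""
    def shift(a, k):
        return a << k if k >= 0 else a >> -k
    result = shift(activation, k1)
    if k2 is not None:
        result += shift(activation, k2)
    return result

print(shift_multiply(12, 3))      # 12 * 8  = 96
print(shift_multiply(12, 3, 1))   # 12 * (8 + 2) = 120
print(shift_multiply(16, -2))     # 16 / 4  = 4
```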
20200380372 | MULTI-TASK NEURAL NETWORKS WITH TASK-SPECIFIC PATHS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using multi-task neural networks. One of the methods includes receiving a first network input and data identifying a first machine learning task to be performed on the first network input; selecting a path through the plurality of layers in a super neural network that is specific to the first machine learning task, the path specifying, for each of the layers, a proper subset of the modular neural networks in the layer that are designated as active when performing the first machine learning task; and causing the super neural network to process the first network input using (i) for each layer, the modular neural networks in the layer that are designated as active by the selected path and (ii) the set of one or more output layers corresponding to the identified first machine learning task. | 2020-12-03 |
20200380373 | METHOD AND DEVICE FOR GENERATING TRAINING DATA AND COMPUTER PROGRAM STORED IN COMPUTER-READABLE RECORDING MEDIUM - An exemplary embodiment of the present disclosure for implementing the foregoing object discloses a method of generating defect data of a target domain by using defect data of a source domain. The method of generating defect data of a target domain by using defect data of a source domain includes: inputting defect data of a source domain, to which a first mask is applied, and defect data of the source domain, to which the first mask is not applied, to a reconstruction algorithm; first training the reconstruction algorithm so as to generate defect data of the source domain in which the first mask is reconstructed; inputting normal data of the source domain, to which a second mask is applied, and normal data of the source domain, to which the second mask is not applied, to the reconstruction algorithm; second training the reconstruction algorithm so as to generate normal data of the source domain in which the second mask is reconstructed; inputting normal data of a target domain, to which the second mask is applied, and normal data of the target domain, to which the second mask is not applied, to the reconstruction algorithm; and third training the reconstruction algorithm so as to generate normal data of the target domain in which the second mask is reconstructed. | 2020-12-03 |
20200380374 | MUTABLE PARAMETERS FOR MACHINE LEARNING MODELS DURING RUNTIME - The subject technology receives code corresponding to a neural network (NN) model and a set of weights for the NN model. The subject technology determines a set of layers that are mutable in the NN model. The subject technology determines information for mapping a second set of weights to the set of weights for the NN model. The subject technology generates metadata corresponding to the set of layers that are mutable, and the information for mapping the second set of weights to the set of weights for the NN model, wherein the generated metadata enables updating the set of layers that are mutable during execution of the NN model. | 2020-12-03 |
20200380375 | DECOMPOSITION OF MACHINE LEARNING OPERATIONS - The subject technology receives a representation of a neural network (NN) model to be executed on an electronic device, the representation of the NN model including nodes corresponding to intermediate layers of the NN model. The subject technology determines, for the respective operation corresponding to each node in each respective intermediate layer of the NN model, a respective set of operations that are mathematically equivalent to the respective operation such that an aggregation of outputs of the respective set of operations is equivalent to an output of the respective operation. The subject technology generates a graph based on each respective set of operations, wherein the graph includes a set of branches, each branch includes a plurality of operations. The subject technology determines a respective order for executing each branch of the graph. | 2020-12-03 |
20200380376 | Artificial Intelligence Based System And Method For Predicting And Preventing Illicit Behavior - An artificial intelligence based system and method for predicting and preventing illicit behavior is disclosed. The system and method may include obtaining search strings used by multiple users, as well as the clickstream data of such users. The search terms included in the search strings may be preprocessed and analyzed for inclusion of suspicious words, e.g., words related to illicit behavior, provided in a corpus of suspicious words. Information associated with the search strings containing suspicious words may be analyzed to identify users associated with the same search strings. The clickstream of the identified users may be analyzed to determine whether the users are likely to engage in illicit behavior. Preventive measures may be taken to prevent such users from engaging in such illicit behavior. | 2020-12-03 |
20200380377 | AUTOMATED RESOLUTION OF OVER AND UNDER-SPECIFICATION IN A KNOWLEDGE GRAPH - Systems and methods for automated resolution of over-specification and under-specification in a knowledge graph are disclosed. In embodiments, a method includes: determining, by a computing device, that a size of an object cluster of a knowledge graph meets a threshold value indicating under-specification of a knowledge base of the knowledge graph; determining, by the computing device, sub-classes for objects of the knowledge graph; re-initializing, by the computing device, the knowledge graph based on the sub-classes to generate a refined knowledge graph, wherein the size of the object cluster is reduced in the refined knowledge graph; and generating, by the computing device, an output based on information determined from the refined knowledge graph. | 2020-12-03 |
20200380378 | Using Metamodeling for Fast and Accurate Hyperparameter Optimization of Machine Learning and Deep Learning Models - Herein are techniques that train regressor(s) to predict how effective a machine learning model (MLM) would be if trained with new hyperparameters and/or a new dataset. In an embodiment, for each training dataset, a computer derives, from the dataset, values for dataset metafeatures. The computer performs, for each hyperparameters configuration (HC) of a MLM, including landmark HCs: configuring the MLM based on the HC, training the MLM based on the dataset, and obtaining an empirical quality score that indicates how effective the MLM was when trained with the HC. A performance tuple is generated that contains: the HC, the values for the dataset metafeatures, the empirical quality score and, for each landmark configuration, the empirical quality score of the landmark configuration and/or the landmark configuration itself. Based on the performance tuples, a regressor is trained to predict an estimated quality score based on a given dataset and a given HC. | 2020-12-03 |
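A toy rendering of the performance-tuple idea: each tuple concatenates dataset metafeatures, a hyperparameter configuration (HC), and landmark scores as features, with the empirical quality score as the target. The metafeature values, HCs, and scores below are synthetic, and the "regressor" is a 1-nearest-neighbour stand-in for whatever model the application actually trains.

```python
# Synthetic sketch of performance tuples plus a trivial stand-in regressor.
import math

def make_tuple(metafeatures, hc, landmark_scores, score):
    """Flatten one training run into a (features, quality score) pair."""
    return {"x": metafeatures + hc + landmark_scores, "y": score}

def predict(tuples, metafeatures, hc, landmark_scores):
    """Return the score of the closest known performance tuple."""
    q = metafeatures + hc + landmark_scores
    best = min(tuples, key=lambda t: math.dist(t["x"], q))
    return best["y"]

tuples = [
    make_tuple([0.2, 10.0], [0.01], [0.70], 0.74),
    make_tuple([0.8, 50.0], [0.10], [0.55], 0.58),
]
```

The point of the sketch is the data layout, not the model: any regressor can consume these tuples to estimate quality for an unseen (dataset, HC) pair without actually training the MLM.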
20200380379 | Data Quality Tool - An apparatus includes a database and a processor. The database stores a set of columns and rules assigned to each column. The rules are used to assess the quality of the data stored in the columns. The processor determines, based in part on the set of rules, the set of columns, and metadata and statistical properties of the columns, a machine learning policy adapted to generate a set of candidate rules for a given column. The processor further determines those columns of the set of columns that are similar to a subject column based on the names of the columns and the names of the tables storing the columns. The processor applies the machine learning policy to the subject column of data, rules of the similar columns, and metadata and statistical properties of the subject column to determine a set of candidate rules for the subject column. | 2020-12-03 |
20200380380 | USING UNSUPERVISED MACHINE LEARNING TO PRODUCE INTERPRETABLE ROUTING RULES - Embodiments of the disclosure relate to systems and methods for leveraging unsupervised machine learning to produce interpretable routing rules. In various embodiments, a training dataset comprising a plurality of data records is created. The plurality of data records includes message data comprising a plurality of messages and action data comprising a plurality of actions that correspond to the plurality of messages. A first machine learning model is trained using the training dataset. The first machine learning model as trained provides cluster data that indicates, for each data record of the plurality of data records of the training dataset, membership in a cluster of a plurality of clusters. An enhanced training dataset is created that comprises the message data from the training dataset, the action data from the training dataset, and the cluster data. A set of second machine learning models is trained using the enhanced training dataset, each respective second machine learning model of the set of second machine learning models providing a decision tree of a plurality of decision trees and corresponding to a distinct cluster of the plurality of clusters. Rules can be extracted from each decision tree of the plurality of decision trees and used as a basis for creating and transmitting alerts based on incoming messages. | 2020-12-03 |
20200380381 | PUBLIC POLICY RULE ENHANCEMENT OF MACHINE LEARNING/ARTIFICIAL INTELLIGENCE SOLUTIONS - A method includes creating one or more first policy shims to be applied to a ML/AI module, applying the one or more first policy shims to an input or an output of the ML/AI module and executing the ML/AI module on a data set in response to the applying step. The one or more first policy shims includes an input policy shim and an output policy shim, and the applying step includes applying the input policy shim to the data set prior to the executing step and applying the output policy shim to an output of the executing step. | 2020-12-03 |
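The shim pattern above is straightforward to sketch: one function filters the input before the module runs, another constrains the module's output afterward. The redaction policy, clamping rule, and the stand-in "model" below are all placeholders, not anything from the application.

```python
# Minimal sketch of input/output policy shims wrapped around an ML/AI module.

def input_shim(record):
    """Input policy: drop a disallowed field before the module sees it."""
    return {k: v for k, v in record.items() if k != "ssn"}

def output_shim(result):
    """Output policy: clamp the module's score into an allowed range."""
    return {**result, "score": min(max(result["score"], 0.0), 1.0)}

def ml_module(record):
    # Stand-in model; a real ML/AI module goes here.
    return {"score": 1.3 if "ssn" in record else 0.6}

def run_with_shims(record):
    return output_shim(ml_module(input_shim(record)))

result = run_with_shims({"ssn": "123", "age": 40})
```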
20200380382 | METHOD AND APPARATUS FOR CONTROLLING LEARNING OF MODEL FOR ESTIMATING INTENTION OF INPUT UTTERANCE - A method and apparatus for controlling learning of a model for estimating an intention of an input utterance is disclosed. A method of controlling learning of a model for estimating an intention of an input utterance among a plurality of intentions includes providing a first index corresponding to the number of registered utterances for each intention, providing a second index corresponding to a learning level for each intention, providing a learning target setting interface such that at least one intention that is to be a learning target is selected from among the intentions based on the first index and the second index, and training the model based on the registered utterances for each intention and setting of the learning target for each intention. | 2020-12-03 |
20200380383 | SAFETY MONITOR FOR IMAGE MISCLASSIFICATION - Systems, apparatuses, and methods for implementing a safety monitor framework for a safety-critical inference application are disclosed. A system includes a safety-critical inference application, a safety monitor, and an inference accelerator engine. The safety monitor receives an input image, test data, and a neural network specification from the safety-critical inference application. The safety monitor generates a modified image by adding additional objects outside of the input image. The safety monitor provides the modified image and neural network specification to the inference accelerator engine which processes the modified image and provides outputs to the safety monitor. The safety monitor determines the likelihood of erroneous processing of the original input image by comparing the outputs for the additional objects with a known good result. The safety monitor complements the overall fault coverage of the inference accelerator engine and covers faults only observable at the network level. | 2020-12-03 |
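The safety-monitor check reduces to: append a known test object outside the real input, run inference on the modified input, and compare the result for the test object against its known good answer. The sketch below uses a flat list as the "image" and a stub inference engine; both are illustrative assumptions.

```python
# Rough sketch of the safety-monitor comparison; the inference engine
# is a stub standing in for the accelerator.

KNOWN_OBJECT = [7, 7, 7]          # test pattern appended outside the image
EXPECTED_LABEL = "test-pattern"   # known good result for that pattern

def inference_engine(image):
    # Stub accelerator: labels the appended region correctly unless faulty.
    extra = "test-pattern" if image[-3:] == KNOWN_OBJECT else "noise"
    return {"main": "cat", "extra": extra}

def safe_infer(image):
    """Run inference on image + test object; flag a fault if the test
    object's result deviates from the known good answer."""
    modified = image + KNOWN_OBJECT
    out = inference_engine(modified)
    fault_suspected = out["extra"] != EXPECTED_LABEL
    return out["main"], fault_suspected

label, fault_flag = safe_infer([1, 2, 3])
```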
20200380384 | DEVICE FOR HYPER-DIMENSIONAL COMPUTING TASKS - A device for hyper-dimensional computing inference tasks may be provided. The device comprises an item memory for storing hyper-dimensional item vectors, a query transformation unit connected to the item memory, the query transformation unit being adapted for forming a hyper-dimensional query vector from a query input and hyper-dimensional base vectors stored in the item memory, and an associative memory adapted for storing a plurality of hyper-dimensional profile vectors and for determining a distance between the hyper-dimensional query vector and the plurality of hyper-dimensional profile vectors, wherein the item memory and the associative memory are adapted for in-memory computing using memristive devices. | 2020-12-03 |
20200380385 | INTEGRATION OF KNOWLEDGE GRAPH EMBEDDING INTO TOPIC MODELING WITH HIERARCHICAL DIRICHLET PROCESS - Leveraging domain knowledge is an effective strategy for enhancing the quality of inferred low-dimensional representations of documents by topic models. Presented herein are embodiments of a Bayesian nonparametric model that employ knowledge graph (KG) embedding in the context of topic modeling for extracting more coherent topics; embodiments of the model may be referred to as topic modeling with knowledge graph embedding (TMKGE). TMKGE embodiments are hierarchical Dirichlet process (HDP)-based models that flexibly borrow information from a KG to improve the interpretability of topics. Also, embodiments of a new, efficient online variational inference method based on a stick-breaking construction of HDP were developed for TMKGE models, making TMKGE suitable for large document corpora and KGs. Experiments on datasets illustrate the superior performance of TMKGE in terms of topic coherence and document classification accuracy, compared to state-of-the-art topic modeling methods. | 2020-12-03 |
20200380386 | USE MACHINE LEARNING TO VERIFY AND MODIFY A SPECIFICATION OF AN INTEGRATION INTERFACE FOR A SOFTWARE MODULE - In some examples, a server may determine a specification associated with a software module that is to be integrated with a software system. The specification identifies how the software module interacts with the software system. The server may execute a machine learning module to perform an analysis of the specification. The machine learning module may suggest at least one modification to at least a first portion of the specification and may automatically modify at least a second portion of the specification. The server may convert the specification to one or more application programming interface (API) calls and provide a system interface that includes the one or more API calls to enable the software module to interact with the software system. The API calls may include calls to a data integration API, a file transfer API, a messaging API, a database API, or any combination thereof. | 2020-12-03 |
20200380387 | ANALYSIS SYSTEM WITH MACHINE LEARNING BASED INTERPRETATION - One embodiment of the present disclosure is a system for predicting performance of building equipment. The system comprises one or more sensors in communication with the building equipment, and the sensors are operable to detect characteristics from the building equipment. The system further comprises a computing device in communication with the sensors and in the same geographic location as the sensors. The computing device comprises one or more memory devices configured to store instructions that, when executed on one or more processors, cause the one or more processors to receive data from the sensors, the data based on the detected characteristics. The one or more processors also generate, based on a machine learning model and the data, a predicted performance of the building equipment when the machine learning model comprises prior data substantially similar to the data. | 2020-12-03 |
20200380388 | PREDICTIVE MAINTENANCE SYSTEM FOR EQUIPMENT WITH SPARSE SENSOR MEASUREMENTS - Example implementations described herein are directed to constructing prediction models and conducting predictive maintenance for systems that provide sparse sensor data. Even if only sparse measurements of sensor data are available, example implementations utilize the inference of statistics with functional deep networks to model predictions for the systems, which provides better accuracy in failure prediction. | 2020-12-03 |
20200380389 | SENTIMENT AND INTENT ANALYSIS FOR CUSTOMIZING SUGGESTIONS USING USER-SPECIFIC INFORMATION - Systems and processes for operating an intelligent automated assistant to provide customized suggestions based on user-specific information are provided. An example method includes obtaining impressions and performing, based on the impressions, at least one of: analyzing sentiment of at least a portion of the impressions; and predicting user intent based on at least a portion of the impressions. The method further includes determining a plurality of concepts based on the obtained impressions; and weighting the plurality of concepts based on context associated with obtaining the impressions and based on at least one of a sentiment analysis result or a predicted user intent. The method further includes generating, based on the one or more weighted concepts, a representation of a collection of user-specific information; and providing one or more suggestions to the user based on the representation of the collection of user-specific information. | 2020-12-03 |
20200380390 | SYSTEM AND METHOD FOR ACCELERATED COMPUTATION OF SUBSURFACE REPRESENTATIONS - A computational stratigraphy model may be run for M mini-steps to simulate changes in a subsurface representation across M mini-steps (from 0-th subsurface representation to M-th subsurface representation), with a mini-step corresponding to a mini-time duration. The subsurface representation after individual steps may be characterized by a set of computational stratigraphy model variables. Some or all of the computational stratigraphy model variables from running of the computational stratigraphy model may be provided as input to a machine learning model. The machine learning model may predict changes to the subsurface representation over a step corresponding to a time duration longer than the mini-time duration and output a predicted subsurface representation. The subsurface representation may be updated based on the predicted subsurface representation outputted by the machine learning model. Running of the computational stratigraphy model and usage of the machine learning model may be iterated until the end of the simulation. | 2020-12-03 |
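The alternation described above (M fine-grained simulator mini-steps, then one learned big step covering a longer time duration, iterated to the end of the simulation) has a simple control-flow skeleton. The "models" below are trivial linear updates invented purely to make the loop concrete; a real implementation would call the computational stratigraphy model and a trained ML surrogate.

```python
# Schematic of alternating a fine-grained simulator with a learned
# big-step predictor; both "models" are stand-ins for illustration.

def mini_step(state):
    return state + 1.0    # stand-in for one simulator mini-time step

def ml_big_step(state):
    return state + 10.0   # stand-in surrogate predicting a longer step

def simulate(state, iterations, m):
    """Per iteration: run m mini-steps, then one ML-predicted big step,
    updating the subsurface state from the surrogate's output."""
    for _ in range(iterations):
        for _ in range(m):
            state = mini_step(state)
        state = ml_big_step(state)
    return state
```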
20200380391 | METHODS AND SYSTEMS FOR PREDICTING ELECTROMECHANICAL DEVICE FAILURE - Methods and systems for predicting electromechanical device failure are disclosed. In an example method, an analytic model, configured to implement predictive diagnostics for an electromechanical device, may be provided. Sensor data may be received from the electromechanical device, which may comprise a plurality of time series for a sensor-measurable parameter associated with operation of the electromechanical device. One or more machine learning processes may be used to update the analytic model. The one or more machine learning processes may comprise determining one or more data anomalies in the plurality of time series. The updated analytic model may be deployed to implement updated predictive diagnostics for the electromechanical device. | 2020-12-03 |
20200380392 | DATA ANALYSIS APPARATUS, DATA ANALYSIS METHOD, AND DATA ANALYSIS PROGRAM - A data analysis apparatus executes: a selection process selecting a first feature variable group that is a trivial feature variable group contributing to prediction and a second feature variable group other than the first feature variable group from a set of feature variables; an operation process operating a first regularization coefficient related to a first weight parameter group corresponding to the first feature variable group so that the loss function becomes larger, and operating a second regularization coefficient related to a second weight parameter group corresponding to the second feature variable group so that the loss function becomes smaller, among a set of weight parameters configuring a prediction model, in a loss function related to a difference between a prediction result output in a case of inputting the set of feature variables to the prediction model and ground truth data corresponding to the feature variables. | 2020-12-03 |
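The asymmetric regularization above amounts to a loss of the form error + λ₁·‖w₁‖² + λ₂·‖w₂‖², with a large coefficient on the weight group for the known-trivial features and a small one on the rest, pushing the model away from relying on trivial features. The coefficients, weights, and error value below are made up for the example.

```python
# Toy version of the asymmetric group-wise regularization.

def loss(pred_error, w_trivial, w_other, lam_trivial=10.0, lam_other=0.01):
    """Prediction error plus L2 penalties with asymmetric strength:
    heavy on the trivial feature group, light on the other group."""
    penalty = (lam_trivial * sum(w * w for w in w_trivial)
               + lam_other * sum(w * w for w in w_other))
    return pred_error + penalty
```

With equal weights, the trivial group contributes a 1000x larger penalty, so gradient-based training drives those weights toward zero first.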
20200380393 | PREDICTION SYSTEM, PREDICTION METHOD, AND PROGRAM - According to the present invention, a selection unit receives selection of at least one explanatory variable from among a plurality of candidate explanatory variables regarding the operation of a factory. A value input unit receives an input of a value regarding the selected explanatory variable. An energy demand identification unit identifies the value of a target variable regarding the operation of the factory on the basis of the inputted value. An output unit outputs the identified value of the target variable. | 2020-12-03 |
20200380394 | CONTEXTUAL HASHTAG GENERATOR - A contextual hashtag generation method, system, and computer program product include receiving content from an online source, identifying a set of contextual indicators for the content, determining an entity-desired outcome for the content, and generating a hashtag for the content using the set of contextual indicators while maximizing the entity-desired outcome. | 2020-12-03 |
20200380395 | MACHINE LEARNING AND VALIDATION OF ACCOUNT NAMES, ADDRESSES, AND/OR IDENTIFIERS - Systems and methods are disclosed for determining if an account identifier is computer-generated. One method includes receiving the account identifier, dividing the account identifier into a plurality of fragments, and determining one or more features of at least one of the fragments. The method further includes determining the commonness of at least one of the fragments, and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments. | 2020-12-03 |
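A hedged sketch of the fragment-and-score idea: split the identifier into fragments, compute features of the fragments, and combine that with fragment commonness to decide whether the identifier looks computer-generated. The fragmentation rule (split on letter/digit boundaries), the commonness table, and the decision threshold below are assumptions for illustration, not the patented method.

```python
# Illustrative fragment features + commonness check for account identifiers.
import re

COMMON = {"john", "smith", "mail"}   # stand-in corpus of common fragments

def fragments(identifier):
    """Split into alphabetic and numeric fragments, lowercased."""
    return [f.lower() for f in re.findall(r"[A-Za-z]+|\d+", identifier)]

def looks_generated(identifier):
    """Flag identifiers whose letter fragments are mostly uncommon."""
    frags = [f for f in fragments(identifier) if f.isalpha()]
    if not frags:
        return True   # all-digit identifiers treated as suspicious here
    uncommon = [f for f in frags if f not in COMMON]
    return len(uncommon) / len(frags) > 0.5
```

A production system would use a trained model over richer fragment features (length, character classes, n-gram statistics) rather than a fixed threshold.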
20200380396 | SYSTEMS AND METHODS FOR MODELING NOISE SEQUENCES AND CALIBRATING QUANTUM PROCESSORS - Calibration techniques for devices of analog processors to remove time-dependent biases are described. Devices in an analog processor exhibit a noise spectrum that spans a wide range of frequencies, characterized by a 1/f spectrum. Offset parameters are determined assuming only a given power spectral density. The algorithm determines a model for a measurable quantity of a device in an analog processor associated with a noise process and an offset parameter, determines the form of the spectral density of the noise process, approximates the noise spectrum by a discrete distribution via the digital processor, constructs a probability distribution of the noise process based on the discrete distribution, and evaluates the probability distribution to determine optimized parameter settings to enhance computational efficiency. | 2020-12-03 |
20200380397 | METHOD FOR ANALYZING A SIMULATION OF THE EXECUTION OF A QUANTUM CIRCUIT - A method for analyzing a simulation of the execution of a quantum circuit comprises: | 2020-12-03 |