38th week of 2020 patent application highlights part 53 |
Patent application number | Title | Published |
20200293830 | ARTICLE DAMAGE DETECTION - The present specification provides methods, apparatuses, and devices for detecting damages to an article. In one aspect, the method includes: obtaining at least two images that are time sequentially related and show the article at different angles; providing the at least two images as input to a detection model in time order, wherein the detection model comprises a first sub-model and a second sub-model that have been jointly trained on training samples associated with labels indicating respective article damage degrees; processing the at least two images using the first sub-model to determine a feature processing result based on respective features identified from each image; processing the feature processing result using the second sub-model to perform time series analysis on the feature processing result to determine a damage detection result; and obtaining, as output from the detection model, the damage detection result. | 2020-09-17 |
20200293831 | COMPONENT DISCRIMINATION APPARATUS AND METHOD FOR DISCRIMINATING COMPONENT - A component discrimination apparatus includes: an image acquisition unit configured to acquire image-pickup data obtained by taking an image of a component using a camera; a dividing unit configured to divide the image-pickup data into first image-pickup data and second image-pickup data so that they respectively include at least one characteristic part of the component; a specification identifying unit configured to identify a specification of the characteristic part in the first image-pickup data using the first image-pickup data, thereby identifying a type of the component; and a specification correctness determination unit configured to perform, using the second image-pickup data, correctness determination of whether or not a specification of the characteristic part in the second image-pickup data is the same as a specification of the component of the type identified by the specification identifying unit. | 2020-09-17 |
20200293832 | LOW POWER CONSUMPTION DEEP NEURAL NETWORK FOR SIMULTANEOUS OBJECT DETECTION AND SEMANTIC SEGMENTATION IN IMAGES ON A MOBILE COMPUTING DEVICE - A mobile computing device receives an image from a camera physically located within a vehicle. The mobile computing device inputs the image into a convolutional model that generates a set of object detections and a set of segmented environment blocks in the image. The convolutional model includes subsets of encoding and decoding layers, as well as parameters associated with the layers. The convolutional model relates the image and parameters to the sets of object detections and segmented environment blocks. A server that stores object detections and segmented environment blocks is updated with the sets of object detections and segmented environment blocks detected in the image. | 2020-09-17 |
20200293833 | NEURAL NETWORK MODEL TRAINING METHOD AND DEVICE, AND TIME-LAPSE PHOTOGRAPHY VIDEO GENERATING METHOD AND DEVICE - The present disclosure describes methods, devices, and storage medium for generating a time-lapse photography video with a neural network model. The method includes obtaining a training sample. The training sample includes a training video and an image set. The method includes obtaining through training according to the training sample, a neural network model to satisfy a training ending condition, the neural network model comprising a basic network and an optimization network, by using the image set as an input to the basic network, the basic network being a first generative adversarial network for performing content modeling, generating a basic time-lapse photography video as an output of the basic network, using the basic time-lapse photography video as an input to the optimization network, the optimization network being a second generative adversarial network for performing motion state modeling, and generating an optimized time-lapse photography video as an output of the optimization network. | 2020-09-17 |
20200293834 | Robustness Score for an Opaque Model - A method, system and computer-readable storage medium for performing a cognitive information processing operation. The cognitive information processing operation includes: receiving data from a plurality of data sources; processing the data from the plurality of data sources to provide cognitively processed insights via an augmented intelligence system, the augmented intelligence system executing on a hardware processor of an information processing system, the augmented intelligence system and the information processing system providing a cognitive computing function; performing a robustness assessment operation, the robustness assessment operation assessing robustness of the cognitive computing function, the robustness assessment operation generating a robustness score representing robustness of the cognitive computing function; and, providing the cognitively processed insights to a destination, the destination comprising a cognitive application, the cognitive application enabling a user to interact with the cognitive insights. | 2020-09-17 |
20200293835 | METHOD AND APPARATUS FOR TUNING ADJUSTABLE PARAMETERS IN COMPUTING ENVIRONMENT - Disclosed is a computer implemented method carried on an IT framework and a relative apparatus including: an orchestrator module; an optimizer module; a configurator module; a load generator module; and a telemetry module. The method includes: identifying tunable parameters representing a candidate configuration for the System Under Test (SUT), and applying the candidate configuration to the SUT using the configurator module; performance testing the SUT to determine a performance indicator; supplying performance metrics to the optimizer module's machine learning model to generate an optimized candidate configuration. The model provides as output, in correspondence of a candidate set of parameters, an expected value of the performance indicator and a prediction uncertainty thereof, used by the optimizer module to build an Acquisition Function used to derive a candidate configuration and by the load generator module to build the test workload. The test workload is computed through the machine learning model. | 2020-09-17 |
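The optimizer step described in this abstract, a model that outputs an expected performance value plus a prediction uncertainty, which an acquisition function turns into the next candidate configuration, can be sketched as a Bayesian-optimization selection loop. The Upper Confidence Bound acquisition function below is one common choice; the application does not name a specific one, and all configuration names and numbers are illustrative assumptions.

```python
# Sketch of acquisition-function-driven candidate selection, assuming a
# UCB-style acquisition function (the patent does not specify which is used).

def ucb(mean, stddev, kappa=2.0):
    # Higher is better: trade off expected performance against uncertainty.
    return mean + kappa * stddev

candidates = {
    "cfg_a": (100.0, 5.0),   # (expected throughput, prediction uncertainty)
    "cfg_b": (95.0, 12.0),
    "cfg_c": (90.0, 1.0),
}

# Pick the candidate configuration that maximizes the acquisition function.
next_config = max(candidates, key=lambda c: ucb(*candidates[c]))
print(next_config, ucb(*candidates[next_config]))  # cfg_b 119.0 (95 + 2*12)
```

Note that the highly uncertain `cfg_b` wins over the higher-mean `cfg_a`: the acquisition function deliberately favors exploring configurations whose performance the model is unsure about.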
20200293836 | Fast And Accurate Rule Selection For Interpretable Decision Sets - An IDS generator determines multiple classes for electronic data items. The IDS generator determines, for each class, a class-specific candidate ruleset. The IDS generator performs a differential analysis of each class-specific candidate ruleset. The differential analysis is based on differences between result values of a scoring objective function. In some cases, the differential analysis determines at least one of the differences based on additional data structures, such as an augmented frequent-pattern tree. A probability function based on the differences is compared to a threshold probability. At least one testing ruleset is modified based on the comparison. The IDS generator determines, for each class, a class-specific optimized ruleset based on the differential analysis of each class-specific candidate ruleset. The IDS generator creates an optimized interpretable decision set based on combined class-specific optimized rulesets for the multiple classes. | 2020-09-17 |
20200293837 | HETEROGENEOUS DATA FUSION - Various deficiencies in the prior art are addressed by systems, methods, architectures, mechanisms and/or apparatus configured to fuse data received from a plurality of sensor sources on a network. The fusing of data includes forming an empirical distribution for each of the sensor sources, reformatting the data from each of the sensor sources into pre-rotational alpha-trimmed depth regions, applying an affine transformation rotation to each of the reformatted data to form post-rotational alpha-trimmed depth regions, and reformatting each affine transformation into a new data fusion operator. | 2020-09-17 |
20200293838 | SCHEDULING COMPUTATION GRAPHS USING NEURAL NETWORKS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a schedule for a computation graph. One of the methods includes obtaining data representing an input computation graph; processing the data representing the input computation graph using a graph neural network to generate one or more instance-specific proposal distributions; and generating a schedule for the input computation graph by performing an optimization algorithm in accordance with the one or more instance-specific proposal distributions. | 2020-09-17 |
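The final step of this abstract, running an optimization algorithm in accordance with instance-specific proposal distributions, can be illustrated with a minimal sampling loop: draw candidate schedules from per-operation distributions and keep the cheapest one. The operations, device choices, probabilities, and cost table below are all hypothetical placeholders, not from the application.

```python
import random

# Illustrative sketch: sample schedules from proposal distributions, keep best.
random.seed(7)

proposals = {                       # instance-specific proposal distributions
    "matmul":  {"gpu0": 0.7, "gpu1": 0.3},
    "softmax": {"gpu0": 0.4, "gpu1": 0.6},
}
cost_table = {("matmul", "gpu0"): 3, ("matmul", "gpu1"): 5,
              ("softmax", "gpu0"): 2, ("softmax", "gpu1"): 1}

def sample_schedule():
    # Draw a device assignment for each op from its proposal distribution.
    return {op: random.choices(list(d), weights=list(d.values()))[0]
            for op, d in proposals.items()}

def cost(schedule):
    return sum(cost_table[(op, dev)] for op, dev in schedule.items())

best = min((sample_schedule() for _ in range(200)), key=cost)
print(best, cost(best))  # {'matmul': 'gpu0', 'softmax': 'gpu1'} 4
```

The proposal distributions bias sampling toward good assignments, so far fewer draws are needed than with uniform random search over all schedules.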
20200293839 | Burden Score for an Opaque Model - A method, system and computer-readable storage medium for performing a cognitive information processing operation. The cognitive information processing operation includes: receiving data from a plurality of data sources; processing the data from the plurality of data sources to provide cognitively processed insights via an augmented intelligence system, the augmented intelligence system executing on a hardware processor of an information processing system, the augmented intelligence system and the information processing system providing a cognitive computing function; performing an impartiality assessment operation via an impartiality assessment engine, the impartiality assessment operation detecting a presence of bias in an outcome of the cognitive computing function, the impartiality assessment operation generating a burden score representing the presence of bias in the outcome; and, providing the cognitively processed insights to a destination, the destination comprising a cognitive application, the cognitive application enabling a user to interact with the cognitive insights. | 2020-09-17 |
20200293840 | IMAGE FEATURE ACQUISITION - The present application provides an image feature acquisition method and a corresponding apparatus. According to an example of the method, a classification model may be trained by using preset classes of training images, and similar image pairs may be determined based on the training images; classification results from the classification model are tested by using verification images to determine nonsimilar image pairs; and the classification model is optimized based on the similar image pairs and the nonsimilar image pairs. In this way, the optimized classification model may be used to acquire image features. | 2020-09-17 |
20200293841 | MULTI CARD SOCKET FOR MOBILE COMMUNICATION TERMINAL AND MULTI CARD CONNECTOR COMPRISING THE SAME - A card socket for mounting a SIM card and an SD card for a mobile communication terminal includes a housing and a shell. The housing includes a first terminal portion having a first terminal disposed therein, a second terminal portion having a second terminal and a third terminal disposed therein, and a switch portion. The shell includes an upper wall and a sidewall. The shell includes a sensing portion extended from the upper wall to detect insertion of a card tray into the card socket. The sensing portion has a shape which is extended downward from the upper wall in the height direction and then is extended upward in the height direction. The switch portion includes a stopper which is configured to come into contact with an upper surface of the sensing portion and to prevent the sensing portion from being lifted up in the height direction. | 2020-09-17 |
20200293842 | Image Processing Apparatus Converting Target Partial Image Data to Partial Print Data Using First Profile or Second Profile - An image processing apparatus performs a first generation process generating first partial print data by a first color conversion process using a first profile corresponding to a first direction, and a second generation process generating second partial print data using a second color conversion process using a second profile. When a color difference is smaller than a reference, the apparatus sets a printing direction to the first direction, and outputs the first partial print data to a print execution unit for printing the first partial print data while the main scan moves in the first direction. When the color difference is larger than or equal to the reference, the apparatus sets the printing direction to the second direction, and outputs the second partial print data to the print execution unit for printing the second partial print data while the main scan moves in the second direction. | 2020-09-17 |
20200293843 | IMAGE FORMING APPARATUS CAPABLE OF EXECUTING LINE WIDTH ADJUSTMENT PROCESS, METHOD OF CONTROLLING SAME, AND STORAGE MEDIUM - An image forming apparatus that prevents, when reducing variation in thickness of thin lines, the legibility of the thin lines from being adversely affected thereby. When print data is acquired, in a rendering process for printing based on the acquired print data, the rendering process including a line width adjustment process is executed. In a case where the line width adjustment process is executed on the print data, not only the line width adjustment process but also a process for thickening thin lines is executed. | 2020-09-17 |
20200293844 | IMAGE FORMING APPARATUS, METHOD OF CONTROLLING THE SAME, AND STORAGE MEDIUM - The present invention is directed to an image forming apparatus comprising: a system control module that controls the image forming apparatus; a first memory device used by the system control module; an image processing module that processes image data to be inputted to the image forming apparatus; a second memory device in which image data processed by the image processing module is stored via an image memory bus; and a memory controller that transfers and writes the image data processed by the image processing module into the first memory device without going through the image memory bus, and issues an end interrupt to the system control module each time image data of a predetermined size has been written. | 2020-09-17 |
20200293845 | PRINTING DEVICE READING INFORMATION FROM AND WRITING INFORMATION TO STORAGE ELEMENT PROVIDED ON TAPE - In a printing device, a supply portion is configured to convey a tape in its longitudinal direction. The tape includes: a plurality of labels arranged continuously in the longitudinal direction; and a plurality of storage elements provided on respective ones of the plurality of labels. A first storage element is provided on a first label and configured to store first authentication data. A second storage element is provided on a second label and configured to store second authentication data. A printing portion is configured to print on the plurality of labels. A controller is configured to perform: reading the first authentication data from the first storage element and the second authentication data from the second storage element by a reading portion; and determining whether the first authentication data is correlated to the second authentication data to meet an authentication condition. | 2020-09-17 |
20200293846 | CODE INCLUDING ADDITIONAL INFORMATION AND METHOD OF GENERATING AND READING THE SAME - Disclosed is a method of generating a code including additional information, including: generating, by an administrator module interworking with a product selling server, a first code including information about a product to be sold by a seller; generating, by the administrator module, a second code including information about a store where the product to be sold by the seller is located and distinguished from the first code; and generating, by the administrator module, a completion code including the product information and the store information of the corresponding product by combining a plurality of codes including the first and second codes, wherein the completion code is configured to have information different from the first and second codes by combining information about a same point of the first and second codes. | 2020-09-17 |
20200293847 | AN APPARATUS - An apparatus comprising: an inductive coupler for coupling inductively with a radio frequency, RF, H-field to provide an alternating RF voltage; a near field, RF, communicator connected to the inductive coupler for performing near field RF communication; an auxiliary circuit connected to the inductive coupler by a rectifier for obtaining DC electrical energy from the alternating RF voltage wherein the auxiliary circuit is arranged to communicate data with the near field RF communicator; wherein the rectifier comprises: a first rectifier input and a second rectifier input for receiving the alternating RF voltage, a first rectifier output and a second rectifier output for providing the DC electrical energy to the auxiliary circuit; a rectifying element connected between the first rectifier input and the second rectifier input wherein the first rectifier output is coupled to an output of the rectifying element and to the first rectifier input by a first inductor. | 2020-09-17 |
20200293848 | Contactlessly Readable Tag, Method For Manufacture Of Contactlessly Readable Tag, Identification Device, And Method For Reading Identifying Information - An objective of the present invention is to provide a contactlessly readable tag, method for manufacture of contactlessly readable tag, identification device, and method for reading identifying information, capable of effecting an increased capacity in recorded information and improved precision in reading said recorded information. Provided is a contactlessly readable tag, comprising a metal part and an electromagnetic wave absorption body. The manner in which the metal part and the electromagnetic wave absorption body are installed is associated with identifying information. When the tag is irradiated with electromagnetic waves, it is possible to identify the identifying information on the basis of the amplitude of the electromagnetic waves reflected by the tag, and the shift in either the frequency or the phase of said reflected electromagnetic waves. | 2020-09-17 |
20200293849 | PACKING MATERIAL, METHOD FOR PRODUCING PACKING MATERIAL, READING DEVICE, STORED-ARTICLE MANAGEMENT SYSTEM, DISCONNECTION DETECTION DEVICE, UNSEALING DETECTION LABEL, AND UNSEALING DETECTION SYSTEM - A package in an aspect of the present invention includes: a package body having a receiving cavity for receiving a cavity item; a sheet for sealing the receiving cavity; a conducting wire formed on the sheet so as to pass above the sealed opening portion of the receiving cavity; and a wireless communication device formed on the sheet so as to be connected to the conducting wire. The wireless communication device transmits a signal including information which differs between before and after the conducting wire together with the sheet is cut as a result of opening the receiving cavity. The information transmitted from the wireless communication device is read by a reader. The package and the reader are used for a cavity item management system. | 2020-09-17 |
20200293850 | METAL FASTENER WITH EMBEDDED RFID TAG AND METHOD OF PRODUCTION - The present disclosure is generally directed to an RFID tag for use with a metal fastener where the fastener operates as the antenna of the RFID tag. The RFID tag includes a microchip for storing data. The chip is electrically coupled to the metal fastener in order to receive and transmit the RF signal, the metal fastener thereby operating as the antenna for the RFID tag. | 2020-09-17 |
20200293851 | SYSTEM AND METHOD FOR SUPERVISING A PERSON - This invention relates to a system and a method for supervising a person in an area, the system comprising a mobile base ( | 2020-09-17 |
20200293852 | ERROR BASED LOCATIONING OF A MOBILE TARGET ON A ROAD NETWORK - Methods, systems, apparatus, and tangible non-transitory carrier media encoded with one or more computer programs that can determine the path or route most likely navigated by a mobile target are described. In accordance with particular embodiments, the most likely path or route is determined based on path-based scoring of position estimates obtained from different types of complementary locationing signal sources. Instead of fusing the position data derived from the different types of signal sources, these particular embodiments determine the most likely path navigated by the mobile target based on an independent aggregation of the position estimates derived from complementary signals of different source types. | 2020-09-17 |
20200293853 | RFID TAG AND RFID TAGGED ARTICLE - An RFID tag includes a first conductor and second conductors that are connected to each other to provide a main portion or all of a coil-shaped conductor or a loop-shaped conductor. Moreover, an RFIC is connected to the second conductors or is electromagnetically coupled to the second conductors. The first conductor includes terminals provided such that an end projects outward from a winding range of the coil-shaped conductor or a loop-shaped conductor while the first conductor is connected to the second conductors. | 2020-09-17 |
20200293854 | MEMORY CHIP CAPABLE OF PERFORMING ARTIFICIAL INTELLIGENCE OPERATION AND OPERATION METHOD THEREOF - A memory chip capable of performing artificial intelligence operation and an operation method thereof are provided. The memory chip includes a memory array and an artificial intelligence engine. The memory array is configured to store input feature data and a plurality of weight data. The input feature data includes a plurality of first subsets, and each of the weight data includes a plurality of second subsets. The artificial intelligence engine includes a plurality of feature detectors. The artificial intelligence engine is configured to access the memory array to obtain the input feature data and the weight data. Each of the feature detectors selects at least one of the second subsets from the corresponding weight data as a selected subset based on a weight index, and the feature detectors perform a neural network operation based on the selected subsets and the corresponding first subsets. | 2020-09-17 |
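The subset-selection step in this abstract, where each feature detector picks one second subset from its weight data by a weight index and operates on it together with the corresponding first subset, can be sketched as follows. The function names, the dot product standing in for the neural-network operation, and the two-subset layout are illustrative assumptions, not the patent's actual design.

```python
import numpy as np

# Hypothetical sketch of index-based weight-subset selection per detector.
def detect_features(input_subsets, weight_data, weight_indices):
    """input_subsets: list of 1-D arrays (the "first subsets").
    weight_data: per-detector lists of 1-D arrays (the "second subsets").
    weight_indices: which second subset each detector selects."""
    outputs = []
    for detector, idx in enumerate(weight_indices):
        selected = weight_data[detector][idx]   # subset chosen by weight index
        # Dot product stands in for the neural network operation on the
        # selected subset and the corresponding first subset.
        outputs.append(float(np.dot(selected, input_subsets[detector])))
    return outputs

x = [np.array([1.0, 2.0]), np.array([3.0, 1.0])]
w = [[np.array([1.0, 0.0]), np.array([0.0, 1.0])],
     [np.array([2.0, 2.0]), np.array([1.0, -1.0])]]
print(detect_features(x, w, [1, 0]))  # [2.0, 8.0]
```

Selecting only an indexed subset of each detector's weights is what lets the engine avoid fetching and multiplying the full weight tensor on every operation.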
20200293855 | TRAINING OF ARTIFICIAL NEURAL NETWORKS - Methods and apparatus are provided for training an artificial neural network, having a succession of neuron layers with interposed synaptic layers each storing a respective set of weights {w} for weighting signals propagated between its adjacent neuron layers, via an iterative cycle of signal propagation and weight-update calculation operations. Such a method includes, for at least one of the synaptic layers, providing a plurality P | 2020-09-17 |
20200293856 | IMPLEMENTING RESIDUAL CONNECTION IN A CELLULAR NEURAL NETWORK ARCHITECTURE - A cellular neural network architecture may include a processor and an embedded cellular neural network (CeNN) executable in an artificial intelligence (AI) integrated circuit and configured to perform certain AI functions. The CeNN may include multiple convolution layers, such as first, second, and third layers, each layer having multiple binary weights. In some examples, a method may configure the multiple layers in the CeNN to produce a residual connection. In configuring the second and third layers, the method may use an identity matrix. | 2020-09-17 |
20200293857 | CNN PROCESSING DEVICE, CNN PROCESSING METHOD, AND PROGRAM - A CNN processing device includes: a kernel storage unit configured to store kernels used in a convolution operation; a table storage unit configured to store a Fourier base function used in the convolution operation; and a convolution operation unit configured to model an element g in coefficients G of the kernels in a convolutional neural network (CNN) using N-order (N is an integer equal to or greater than 1) Fourier series expansion and to perform a convolution operation on processing target information that is information on a processing target through a CNN method using the kernels and the Fourier base function. | 2020-09-17 |
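Modeling a kernel coefficient element g with an N-order Fourier series expansion, as this abstract describes, means representing it as g(t) ≈ a₀/2 + Σₙ(aₙ cos(nt) + bₙ sin(nt)) for n = 1..N. The sketch below evaluates such an expansion; the coefficient values are arbitrary illustrations, and how the CNN device parameterizes t is an assumption left abstract here.

```python
import math

# Evaluate an N-order Fourier series expansion of a single kernel element.
def fourier_element(t, a, b):
    # a[0] is the constant term; a[n], b[n] weight cos(n t) and sin(n t).
    g = a[0] / 2.0
    for n in range(1, len(a)):
        g += a[n] * math.cos(n * t) + b[n] * math.sin(n * t)
    return g

a = [1.0, 0.5, 0.25]   # N = 2 expansion, coefficients chosen for illustration
b = [0.0, 0.1, 0.05]
print(round(fourier_element(0.0, a, b), 6))  # at t=0 only cosines contribute: 1.25
```

Storing the base functions in a table, as the abstract's table storage unit does, lets the device reuse the same cos/sin values across all kernel elements instead of recomputing them per coefficient.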
20200293858 | METHOD AND APPARATUS FOR PROCESSING COMPUTATION OF ZERO VALUE IN PROCESSING OF LAYERS IN NEURAL NETWORK - A method and an apparatus for processing layers in a neural network fetch Input Feature Map (IFM) tiles of an IFM tensor and kernel tiles of a kernel tensor, perform a convolutional operation on the IFM tiles and the kernel tiles by exploiting IFM sparsity and kernel sparsity, and generate a plurality of OFM tiles corresponding to the IFM tiles. | 2020-09-17 |
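The core idea of exploiting IFM and kernel sparsity is to skip the multiply-accumulate entirely whenever either operand is zero. A minimal scalar sketch, with flat tiles standing in for the tensor tiles of the abstract:

```python
# Multiply-accumulate that skips any position where either operand is zero,
# illustrating the zero-value computation skipping described in the abstract.
def sparse_dot(ifm_tile, kernel_tile):
    acc = 0
    for a, b in zip(ifm_tile, kernel_tile):
        if a == 0 or b == 0:
            continue  # zero-valued work contributes nothing, so skip it
        acc += a * b
    return acc

print(sparse_dot([0, 2, 3, 0], [5, 0, 4, 7]))  # only 3*4 is computed: 12
```

In the eight-element example only one of four positions needs a multiplier, which is exactly the saving a sparsity-aware accelerator is after; real hardware would use compressed index lists rather than scanning for zeros.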
20200293859 | SECURE CONVOLUTIONAL NEURAL NETWORKS (CNN) ACCELERATOR - A CNN based-signal processing includes receiving of an encrypted output from a first layer of a multi-layer CNN data. The received encrypted output is subsequently decrypted to form a decrypted input to a second layer of the multi-layer CNN data. A convolution of the decrypted input with a corresponding decrypted weight may generate a second layer output, which may be encrypted and used as an encrypted input to a third layer of the multi-layer CNN data. | 2020-09-17 |
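The flow in this abstract, decrypt the previous layer's output, compute on plaintext, re-encrypt for the next layer, can be illustrated with a toy XOR stream cipher over quantized activations. The cipher, the element-wise multiply standing in for the convolution, and all values are assumptions for illustration only; the application does not disclose a specific cipher.

```python
import numpy as np

# Toy decrypt -> compute -> re-encrypt hand-off between CNN layers.
rng = np.random.default_rng(1)
key = rng.integers(0, 256, size=4, dtype=np.uint8)

def crypt(data, key):
    return data ^ key   # XOR is its own inverse: same call encrypts/decrypts

layer1_plain = np.array([10, 20, 30, 40], dtype=np.uint8)
layer1_encrypted = crypt(layer1_plain, key)          # encrypted output of layer 1

decrypted = crypt(layer1_encrypted, key)             # decrypted input to layer 2
weights = np.array([1, 2, 1, 2], dtype=np.uint8)     # stand-in decrypted weights
layer2_out = decrypted * weights                     # stand-in for the convolution
layer2_encrypted = crypt(layer2_out, key)            # encrypted input to layer 3

print(np.array_equal(decrypted, layer1_plain), layer2_out.tolist())
```

The point of the structure is that plaintext activations and weights exist only inside the accelerator during the compute step; everything crossing a layer boundary is ciphertext.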
20200293860 | CLASSIFYING INFORMATION USING SPIKING NEURAL NETWORK - A semiconductor device is provided. The semiconductor device may comprise a circuit configured to generate information. The semiconductor device may comprise a monitoring circuit coupled to the circuit. The monitoring circuit may be configured to receive a monitoring signal based upon the information from the circuit. The monitoring circuit may comprise a spiking neural network (SNN) configured to determine, based upon the monitoring signal, a first monitoring classification of a plurality of monitoring classifications associated with the circuit. | 2020-09-17 |
20200293861 | OPERATION APPARATUS - According to an embodiment, an operation apparatus includes a first neural network, a second neural network, an evaluation circuit, and a coefficient-updating circuit. The first neural network performs an operation in a first mode. The second neural network performs an operation in a second mode and has a same layer structure as the first neural network. The evaluation circuit evaluates an error of the operation of the first neural network in the first mode and evaluates an error of the operation of the second neural network in the second mode. The coefficient-updating circuit updates, in the first mode, coefficients set for the second neural network based on an evaluating result of the error of the operation of the first neural network, and updates, in the second mode, coefficients set for the first neural network based on an evaluating result of the error of the operation of the second neural network. | 2020-09-17 |
20200293862 | TRAINING ACTION SELECTION NEURAL NETWORKS USING OFF-POLICY ACTOR CRITIC REINFORCEMENT LEARNING - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network. One of the methods includes maintaining a replay memory that stores trajectories generated as a result of interaction of an agent with an environment; and training an action selection neural network having policy parameters on the trajectories in the replay memory, wherein training the action selection neural network comprises: sampling a trajectory from the replay memory; and adjusting current values of the policy parameters by training the action selection neural network on the trajectory using an off-policy actor critic reinforcement learning technique. | 2020-09-17 |
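The replay-and-sample loop described here, store trajectories from agent-environment interaction, then sample them for off-policy updates, can be sketched minimally. The truncated importance weight min(c, π/μ) shown is one standard off-policy actor-critic correction (ACER-style); whether this application uses exactly that form is an assumption.

```python
import random

# Minimal replay memory plus an ACER-style truncated importance weight.
random.seed(0)
replay_memory = []

def store(trajectory):
    replay_memory.append(trajectory)

def sample():
    # Uniform sampling of a stored trajectory for an off-policy update.
    return random.choice(replay_memory)

def truncated_is_weight(pi_prob, mu_prob, c=1.0):
    # Clip pi/mu at c to bound the variance of the off-policy correction.
    return min(c, pi_prob / mu_prob)

store([("s0", "a0", 1.0)])   # (state, action, reward) triples, illustrative
store([("s1", "a1", 0.5)])
traj = sample()
print(len(replay_memory), truncated_is_weight(0.9, 0.3))  # 2 1.0
```

Because the sampled trajectory was generated under an older behavior policy μ, the weight π/μ corrects the update toward the current policy π; the truncation at c keeps a single unlucky ratio from dominating the gradient.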
20200293863 | SYSTEM AND METHOD FOR EFFICIENT UTILIZATION OF MULTIPLIERS IN NEURAL-NETWORK COMPUTATIONS - A system and method for performing neural network calculations may include selecting a size in bits for representing a plurality of weight elements of the neural network based on a value of the weight elements. In each computational cycle: if the size in bits of a weight element of the plurality of weight elements is N, configuring an N*K multiply accumulator to perform one multiply-accumulate operation of a K-bit data element and the N-bit weight element; and if the size in bits of at least two N/M-bit weight elements of the plurality of weight elements is N/M, configuring the N*K multiply accumulator to perform up to N/M multiply-accumulate operations, each of a K-bit data element and an N/M-bit weight element, where N, K and M are integers bigger than one, N is a power of 2, M is even and N≥M. | 2020-09-17 |
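The size-selection logic above can be sketched as: pick the smallest power-of-two bit width that represents all weights, then pack multiple narrow multiply-accumulates into one wide multiplier when the weights fit. The helper names, the conservative sign-bit handling, and the M=2 default are simplifying assumptions.

```python
# Sketch of bit-width selection and MAC packing (illustrative, simplified).
def weight_bits(weights):
    # Smallest power-of-two width holding all weights in two's complement
    # (the +1 sign bit is a conservative simplification).
    need = max(w.bit_length() + 1 for w in weights)
    n = 2
    while n < need:
        n *= 2
    return n

def macs_per_cycle(n_full, weights, m=2):
    # If the weights fit in n_full/m bits, the n_full-wide multiplier can
    # perform m multiply-accumulates per cycle instead of one.
    return m if weight_bits(weights) <= n_full // m else 1

print(weight_bits([3, -4]))        # 4: these weights fit in 4 bits
print(macs_per_cycle(8, [3, -4]))  # 2: two 4-bit MACs per 8-bit multiplier
```

So a layer whose weights happen to be small doubles its effective MAC throughput on the same silicon, which is the utilization gain the title refers to.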
20200293864 | DATA-AWARE LAYER DECOMPOSITION FOR NEURAL NETWORK COMPRESSION - Certain aspects of the present disclosure are directed to methods and apparatus for operating an artificial neural network using data-aware layer decomposition. One exemplary method generally includes receiving a first input signal at a first layer of the artificial neural network; generating a first output signal of the first layer based, at least in part, on a weight matrix of the first layer and the first input signal; decomposing the weight matrix; generating an approximate output signal of the first layer based, at least in part, on the decomposed weight matrix and the first input signal; generating an updated decomposed weight matrix by minimizing a difference between the generated first output signal of the first layer and the approximate output signal of the first layer; and operating the first layer of the artificial neural network using the updated decomposed weight matrix. | 2020-09-17 |
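The decompose-then-approximate stages of this abstract can be sketched with a truncated SVD: factor the weight matrix at reduced rank and compare the exact and approximate layer outputs. The data-aware update step (minimizing the output difference on real inputs) is not reproduced here; this shows only the decomposition under that simplifying assumption, and all shapes and values are illustrative.

```python
import numpy as np

# Rank-r approximation of a layer's weight matrix via truncated SVD.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # stand-in weight matrix of the first layer
x = rng.normal(size=8)        # stand-in first input signal

U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 4
W_approx = (U[:, :r] * s[:r]) @ Vt[:r]   # decomposed (low-rank) weights

exact = W @ x                 # first output signal
approx = W_approx @ x         # approximate output signal
err = np.linalg.norm(exact - approx)
print(err <= s[r] * np.linalg.norm(x) + 1e-9)  # spectral-norm bound holds: True
```

The printed bound is the standard guarantee for truncated SVD: the output error is at most the first discarded singular value times the input norm. The data-aware refinement in the application then shrinks this error further on the inputs the layer actually sees.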
20200293865 | USING IDENTITY LAYER IN A CELLULAR NEURAL NETWORK ARCHITECTURE - A cellular neural network architecture may include a processor and an embedded cellular neural network (CeNN) executable in an artificial intelligence (AI) integrated circuit and configured to perform certain AI functions. The CeNN may include multiple convolution layers, each having multiple binary weights. In some examples, a method may configure a given layer of the CeNN and one or more additional layers of the CeNN to retrieve the output of the given layer for debugging or training the CeNN. In configuring the one or more additional layers, the method may use an identity layer. | 2020-09-17 |
20200293866 | METHODS FOR IMPROVING AI ENGINE MAC UTILIZATION - Embodiments of the invention disclose an integrated circuit and a method for improving utilization of multiply and accumulate (MAC) units on the integrated circuit in an artificial intelligence (AI) engine. In one embodiment, the integrated circuit can include a scheduler for allocating the MAC units to execute a neural network model deployed on the AI engine to process input data. The scheduler includes status information for the MAC units, and can select one or more idle MAC units based on the status information for use to process the feature map slice. The integrated circuit can dynamically map idle MAC units to an input feature map, thereby improving utilization of the MAC units. A pair of linked lists, each with a reference head, can be provided in a static random access memory (SRAM) to store only feature map slices and weights for a layer that is currently being processed. When processing a next layer, the two reference heads can be swapped so that output feature map slices for the current layer can be used as input feature maps for the next layer. | 2020-09-17 |
20200293867 | EFFICIENT NEURAL NETWORK ACCELERATOR DATAFLOWS - A distributed deep neural net (DNN) utilizing a distributed, tile-based architecture includes multiple chips, each with a central processing element, a global memory buffer, and a plurality of additional processing elements. Each additional processing element includes a weight buffer, an activation buffer, and vector multiply-accumulate units to combine, in parallel, the weight values and the activation values using stationary data flows. | 2020-09-17 |
20200293868 | METHOD AND APPARATUS TO EFFICIENTLY PROCESS AND EXECUTE ARTIFICIAL INTELLIGENCE OPERATIONS - A method, apparatus, and system are discussed to efficiently process and execute Artificial Intelligence operations. An integrated circuit has a tailored architecture to process and execute Artificial Intelligence operations, including computations for a neural network having weights with a sparse value. The integrated circuit contains at least a scheduler, one or more arithmetic logic units, and one or more random access memories configured to cooperate with each other to process and execute these computations for the neural network having weights with the sparse value. | 2020-09-17 |
20200293869 | NEURAL NETWORK OPERATIONAL METHOD AND APPARATUS, AND RELATED DEVICE - The present disclosure describes methods, devices, and storage mediums for adjusting computing resources. The method includes obtaining an expected pooling time of a target pooling layer and a to-be-processed data volume of the target pooling layer; obtaining a current clock frequency corresponding to at least one computing resource unit used for pooling; determining a target clock frequency according to the expected pooling time of the target pooling layer and the to-be-processed data volume of the target pooling layer; and in response to that the convolution layer associated with the target pooling layer completes convolution and the current clock frequency is different from the target clock frequency, switching the current clock frequency of the at least one computing resource unit to the target clock frequency, and performing pooling in the target pooling layer based on the at least one computing resource unit having the target clock frequency. | 2020-09-17 |
20200293870 | PARTIALLY-FROZEN NEURAL NETWORKS FOR EFFICIENT COMPUTER VISION SYSTEMS - An apparatus to facilitate partially-frozen neural networks for efficient computer vision systems is disclosed. The apparatus includes a frozen core to store fixed weights of a machine learning model, one or more trainable cores coupled to the frozen core, the one or more trainable cores comprising multipliers for trainable weights of the machine learning model, and an alpha blending layer, wherein the alpha blending layer includes a trainable alpha blending parameter, and wherein the trainable alpha blending parameter is a function of a trainable parameter, a sigmoid function, and outputs of frozen and trainable blocks in a preceding layer of the machine learning model. | 2020-09-17 |
20200293871 | Self-Clocking Modulator As Analog Neuron - A self-clocking (or self-oscillating) modulator in signal processing, similar to a ΣΔ modulator, with particular application in the design of neural networks based on such modulators is described. A system of multiple self-clocking modulators and supporting structures may be configured to perform a calculation similar to that of an analog computer, such as a neural network, at lower power and smaller size than a digital implementation. Such a system constructed using the present approach does not require a sequential solution, but rather converges on a solution in one step; unlike the typical prior art, it thus requires no clock and operates asynchronously in a manner similar to a conventional analog computer. The self-clocking modulator can function as a neuron in a neural network, receiving a sum-of-products signal and generating an output stream like that of a ΣΔ modulator that represents this sum-of-products, potentially also including an activation function and offset. | 2020-09-17 |
20200293872 | TIME BORROWING BETWEEN LAYERS OF A THREE DIMENSIONAL CHIP STACK - Some embodiments provide a three-dimensional (3D) circuit structure that has two or more vertically stacked bonded layers with a machine-trained network on at least one bonded layer. Each bonded layer can be an IC die or an IC wafer in some embodiments, with different embodiments encompassing different combinations of wafers and dies for the different bonded layers. The machine-trained network in some embodiments includes several stages of machine-trained processing nodes with routing fabric that supplies the outputs of earlier stage nodes to drive the inputs of later stage nodes. In some embodiments, the machine-trained network is a neural network and the processing nodes are neurons of the neural network. In some embodiments, one or more parameters associated with each processing node (e.g., each neuron) is defined through machine-trained processes that define the values of these parameters in order to allow the machine-trained network (e.g., neural network) to perform particular operations (e.g., face recognition, voice recognition, etc.). For example, in some embodiments, the machine-trained parameters are weight values that are used to aggregate (e.g., to sum) several output values of several earlier stage processing nodes to produce an input value for a later stage processing node. | 2020-09-17 |
20200293873 | GENERATING VECTOR REPRESENTATIONS OF DOCUMENTS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating document vector representations. One of the methods includes obtaining a new document; selecting a plurality of new document word sets; and determining a vector representation for the new document using a trained neural network system, wherein the trained neural network system comprises: a document embedding layer and a classifier, and wherein determining the vector representation for the new document using the trained neural network system comprises iteratively providing each of the plurality of new document word sets to the trained neural network system to determine the vector representation for the new document using gradient descent. | 2020-09-17 |
20200293874 | MATCHING BASED INTENT UNDERSTANDING WITH TRANSFER LEARNING - Described herein is a mechanism to identify user intent in requests submitted to a system such as a digital assistant or question-answering system. Embodiments utilize a match methodology instead of a classification methodology. Features derived from a subgraph retrieved from a knowledge base based on the request are concatenated with pretrained word embeddings for both the request and a candidate predicate. The concatenated inputs for both the request and predicate are encoded using two independent LSTM networks and then a matching score is calculated using a match LSTM network. The result is identified based on the matching scores for a plurality of candidate predicates. The pretrained word embeddings allow for knowledge transfer since pretrained word embeddings in one intent domain can apply to another intent domain without retraining. | 2020-09-17 |
20200293875 | Generative Adversarial Network Based Audio Restoration - Mechanisms are provided for implementing a generative adversarial network (GAN) based restoration system. A first neural network of a generator of the GAN based restoration system is trained to generate an artificial audio spectrogram having a target damage characteristic based on an input audio spectrogram and a target damage vector. An original audio recording spectrogram is input to the trained generator, where the original audio recording spectrogram corresponds to an original audio recording and an input target damage vector. The trained generator processes the original audio recording spectrogram to generate an artificial audio recording spectrogram having a level of damage corresponding to the input target damage vector. A spectrogram inversion module converts the artificial audio recording spectrogram to an artificial audio recording waveform output. | 2020-09-17 |
20200293876 | COMPRESSION OF DEEP NEURAL NETWORKS - In an approach for compressing a neural network, a processor receives a neural network, wherein the neural network has been trained on a set of training data. A processor receives a compression ratio. A processor compresses the neural network based on the compression ratio using an optimization model to solve for sparse weights. A processor re-trains the compressed neural network with the sparse weights. A processor outputs the re-trained neural network. | 2020-09-17 |
20200293877 | INTERACTIVE ASSISTANT - An interactive troubleshooting assistant and method for troubleshooting a system in real time to repair (fix) one or more problems in a system is disclosed. The interactive troubleshooting assistant and method may include receiving multimodal inputs from sensors, wearable devices, a person, etc. that may be input into a feature extractor including attention layers and pre-processing units of a cloud computing system hosted by one or more servers, such as a private cloud system. A pre-processing unit converts the raw multimodal input into a structured form so that an attention layer can give weights to features provided by the pre-processing unit according to their importance. The weighted extracted features may be provided to an actions predictor. The actions predictor generates the most suitable action based on the weighted extracted features generated by the feature extractor based on the multimodal inputs. After the most suitable action is performed, the interactive troubleshooting assistant considers new information from multimodal inputs so that the interactive troubleshooting assistant can provide the next recommended action. The interactive troubleshooting assistant may repeat these operations until the repair is completed. | 2020-09-17 |
20200293878 | HANDLING CATEGORICAL FIELD VALUES IN MACHINE LEARNING APPLICATIONS - Disclosed are systems and methods for handling categorical field values in machine learning applications, and particularly neural networks. Categorical field values are generally transformed into vectors prior to being passed to a neural network. However, low-dimensionality vectors limit the ability of the network to understand correlations between contextually, semantically, or characteristically similar values. High-dimensionality vectors, in contrast, can overwhelm neural networks, causing the network to seek correlations with respect to individual dimensional values, which correlations may be illusory. The present disclosure relates to a hierarchical neural network that includes a main network as well as one or more auxiliary networks. Categorical field values are processed in an auxiliary network, to reduce a dimensionality of the value before being processed by the main network. This enables contextual, semantic, and characteristic correlations to be identified without overwhelming the network as a whole. | 2020-09-17 |
20200293879 | Scalable Extensible Neural Network System and Methods - A neural network system involving a neural network and a magnetic headwear apparatus operably coupled with the neural network, the neural network configured to: map sensor output to a Level 1 input; learn to fuse the time slices for one class, learning comprising taking and feeding a random assignment of inputs from each time slice into a threshold function for another two-dimensional array; learn to reject class bias for completing network training; use cycles for class recognition; and fuse segments for intelligent information dominance. | 2020-09-17 |
20200293880 | SYMMETRIC PHASE-CHANGE MEMORY DEVICES - Variable resistance devices and neural network processing systems include a first phase change memory device that has a first material that increases resistance when a set pulse is applied. A second phase change memory device has a second material that decreases resistance when a set pulse is applied. | 2020-09-17 |
20200293881 | REINFORCEMENT LEARNING TO TRAIN A CHARACTER USING DISPARATE TARGET ANIMATION DATA - A method for training an animation character, including mapping first animation data defining a first motion sequence to a first subset of bones of a trained character, and mapping second animation data defining a second motion sequence to a second subset of bones. A bone hierarchy includes the first subset of bones and second subset of bones. Reinforcement learning is applied iteratively for training the first subset of bones using the first animation data and for training the second subset of bones using the second animation data. Training of each subset of bones is performed concurrently at each iteration. Training includes adjusting orientations of bones. The first subset of bones is composited with the second subset of bones at each iteration by applying physics parameters of a simulation environment to the adjusted orientations of bones in the first and second subset of bones. | 2020-09-17 |
20200293882 | NEAR-INFRARED SPECTROSCOPY (NIR) BASED GLUCOSE PREDICTION USING DEEP LEARNING - A recurrent neural network that predicts blood glucose level includes a first long short-term memory (LSTM) network and a second LSTM network. The first LSTM network may include an input to receive near-infrared (NIR) radiation data and includes an output. The second LSTM network may include an input to receive the output of the first LSTM network and an output to output blood glucose level data based on the NIR radiation data input to the first LSTM network. | 2020-09-17 |
20200293883 | DISTRIBUTIONAL REINFORCEMENT LEARNING FOR CONTINUOUS CONTROL TASKS - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an action selection neural network that is used to select actions to be performed by a reinforcement learning agent interacting with an environment. In particular, the actions are selected from a continuous action space and the system trains the action selection neural network jointly with a distribution Q network that is used to update the parameters of the action selection neural network. | 2020-09-17 |
20200293884 | IMAGE PROCESSING METHOD AND DEVICE AND TERMINAL - Provided in the embodiments of the present application are an image processing method and device and a terminal. The method includes: determining whether a currently pre-called first convolutional layer is equipped with a first selection module during a process of carrying out convolutional processing on an image by means of a convolutional neural network; if the first convolutional layer is equipped with the first selection module, inputting output data of the previous convolutional layer into the first selection module and the first convolutional layer respectively; calling the first selection module, and using the first selection module to determine a target feature graph from feature graphs contained in the first convolutional layer according to the output data of the previous convolutional layer; and calling the first convolutional layer, and using the first convolutional layer to carry out convolutional processing on the output data of the previous convolutional layer according to the target feature graph, thereby obtaining output data. With the image processing method provided by the embodiments of the present application, the amount of calculation may be reduced, thereby improving the task processing efficiency. | 2020-09-17 |
20200293885 | DATA PROCESSING APPARATUS, DATA PROCESSING METHOD, MEDIUM, AND TRAINED MODEL - There is provided with a data processing apparatus. An acquisition unit acquires feature plane data of a layer included in a neural network. A control unit outputs a first control signal corresponding to the layer for controlling first compression processing and a second control signal corresponding to the layer for controlling second compression processing. A first compression unit performs the first compression processing corresponding to the first control signal on the feature plane data. A second compression unit performs the second compression processing corresponding to the second control signal on the feature plane data after the first compression processing. A type of processing of the second compression processing is different from the first compression processing. | 2020-09-17 |
20200293886 | AUTHENTICATION METHOD AND APPARATUS WITH TRANSFORMATION MODEL - An authentication method and apparatus using a transformation model are disclosed. The authentication method includes generating, at a first apparatus, a first enrolled feature based on a first feature extractor, obtaining a second enrolled feature to which the first enrolled feature is transformed, determining an input feature by extracting a feature from input data with a second feature extractor different from the first feature extractor, and performing an authentication based on the second enrolled feature and the input feature. | 2020-09-17 |
20200293887 | System and Method with Federated Learning Model for Medical Research Applications - A method and system with a federated learning model for health care applications are disclosed. The system for federated learning comprises multiple edge devices of end users, one or more federated learner update repositories, and one or more clouds. Each edge device comprises a federated learner model configured to send tensors to the federated learner update repository. The cloud comprises a federated learner model configured to send tensors to the federated learner update repository. The federated learner update repository comprises a back-end configuration, configured to send model updates to the edge devices and the cloud. | 2020-09-17 |
20200293888 | System and Method For Implementing Modular Universal Reparameterization For Deep Multi-Task Learning Across Diverse Domains - A process for training and sharing generic functional modules across multiple diverse (architecture, task) pairs for solving multiple diverse problems is described. The process is based on decomposing the general multi-task learning problem into several fine-grained and equally-sized subproblems, or pseudo-tasks. Training a set of (architecture, task) pairs then corresponds to solving a set of related pseudo-tasks, whose relationships can be exploited by shared functional modules. An efficient search algorithm is introduced for optimizing the mapping between pseudo-tasks and the modules that solve them, while simultaneously training the modules themselves. | 2020-09-17 |
20200293889 | NEURAL NETWORK DEVICE, SIGNAL GENERATION METHOD, AND PROGRAM - A neural network device includes a decimation unit configured to convert a discrete value of an input signal to a discrete value having a smaller step number than a quantization step number of the input signal on the basis of a predetermined threshold value to generate a decimation signal; a modulation unit configured to modulate a discrete value of the decimation signal generated by the decimation unit to generate a modulation signal indicating the discrete value of the decimation signal; and a weighting unit including a neuromorphic element configured to output a weighted signal obtained by weighting the modulation signal through multiplication of the modulation signal generated by the modulation unit by a weight according to a value of a variable characteristic. | 2020-09-17 |
20200293890 | ONE-SHOT LEARNING FOR NEURAL NETWORKS - Systems and methods to improve the robustness of a network that has been trained to convergence, particularly with respect to small or imperceptible changes to the input data. Various techniques, which can be utilized either individually or in various combinations, can include adding biases to the input nodes of the network, increasing the minibatch size of the training data, adding special nodes to the network that have activations that do not necessarily change with each data example of the training data, splitting the training data based upon the gradient direction, and making other intentionally adversarial changes to the input of the neural network. In more robust networks, a correct classification is less likely to be disturbed by random or even intentionally adversarial changes in the input values. | 2020-09-17 |
20200293891 | REAL-TIME TARGET DETECTION METHOD DEPLOYED ON PLATFORM WITH LIMITED COMPUTING RESOURCES - Disclosed is a real-time object detection method deployed on a platform with limited computing resources, which belongs to the field of deep learning and image processing. In the present invention, the YOLO-v3-tiny neural network is improved: Tinier-YOLO retains the front five convolutional layers and pooling layers of YOLO-v3-tiny and makes predictions at two different scales. Fire modules in SqueezeNet, 1×1 bottleneck layers, and dense connections are introduced, so that the structure achieves a smaller, faster, and more lightweight network that can run in real time on an embedded AI platform. The model size of Tinier-YOLO in the present invention is only 7.9 MB, which is only ¼ the 34.9 MB of YOLO-v3-tiny, and ⅛ that of YOLO-v2-tiny. The reduction in the model size of Tinier-YOLO does not affect real-time performance and accuracy of Tinier-YOLO. Real-time performance of Tinier-YOLO in the present invention is 21.8% higher than that of YOLO-v3-tiny and 70.8% higher than that of YOLO-v2-tiny. Compared with YOLO-v3-tiny, accuracy of Tinier-YOLO is increased by 10.1%. Compared with YOLO-v2-tiny, accuracy of Tinier-YOLO is increased by nearly 18.2%. Tinier-YOLO in the present invention can still achieve real-time detection on a platform with limited resources, and effects are better. | 2020-09-17 |
20200293892 | MODEL TEST METHODS AND APPARATUSES - A sample is obtained from a test sample set. The sample is input into a plurality of models included in a model set that are to be tested, where the plurality of models include at least one neural network model. A plurality of output results are obtained, including obtaining, from each model of the plurality of models, a respective output result. A test result is determined based on the plurality of output results, where the test result includes at least one of a first test result or a second test result, where the first test result includes a plurality of output result accuracies. In response to determining that the test result does not satisfy a predetermined condition, a new sample is generated based on the sample and a predetermined rule, and the new sample is added to the test sample set. | 2020-09-17 |
20200293893 | JOINTLY PRUNING AND QUANTIZING DEEP NEURAL NETWORKS - A system and a method generate a neural network that includes at least one layer having weights and output feature maps that have been jointly pruned and quantized. The weights of the layer are pruned using an analytic threshold function. Each weight remaining after pruning is quantized based on a weighted average of a quantization and dequantization of the weight for all quantization levels to form quantized weights for the layer. Output feature maps of the layer are generated based on the quantized weights of the layer. Each output feature map of the layer is quantized based on a weighted average of a quantization and dequantization of the output feature map for all quantization levels. Parameters of the analytic threshold function, the weighted average of all quantization levels of the weights and the weighted average of each output feature map of the layer are updated using a cost function. | 2020-09-17 |
20200293894 | MULTIPLE-INPUT MULTIPLE-OUTPUT (MIMO) DETECTOR SELECTION USING NEURAL NETWORK - A method and system for multiple-input multiple-output (MIMO) detector selection using a neural network is herein disclosed. According to one embodiment, a method includes generating a labelled dataset of features and detector labels, training a multi-layer perceptron (MLP) network using the generated labelled dataset, and selecting a detector class from a plurality of detector classes based on outputs of the trained MLP network. | 2020-09-17 |
20200293895 | INFORMATION PROCESSING METHOD AND APPARATUS - According to one embodiment, a method of learning processing for a deep neural network having an intermediate layer including a convolution layer, in information processing using a processor and a memory used for operations of the processor, includes: acquiring a second value represented by a second number of bits, obtained by reducing a first number of bits representing a first value (an input value) on a per-channel basis in the intermediate layer of the deep neural network; and storing the acquired second value of the second number of bits into the memory. The method further includes performing back propagation using the second value stored in the memory instead of the first value. | 2020-09-17 |
20200293896 | MULTIPLE-INPUT MULTIPLE-OUTPUT (MIMO) DETECTOR SELECTION USING NEURAL NETWORK - A method and system for training a neural network are herein provided. According to one embodiment, a method includes generating a first labelled dataset corresponding to a first modulation scheme and a second labelled dataset corresponding to a second modulation scheme, determining a first gradient of a cost function between a first neural network layer and a second neural network layer based on back-propagation using the first labelled dataset and the second labelled dataset, and determining a second gradient of the cost function between the second neural network layer and a first set of nodes of a third neural network layer based on back-propagation using the first labelled dataset. The first set of nodes of the third neural network layer correspond to a first plurality of detector classes associated with the first modulation scheme. | 2020-09-17 |
20200293897 | SOFT-TYING NODES OF A NEURAL NETWORK - A machine learning system includes a coach machine learning system that uses machine learning to help a student machine learning system learn. By monitoring the student learning system, the coach machine learning system can learn (through machine learning techniques) "hyperparameters" for the student learning system that control the machine learning process for the student learning system. The machine learning coach could also determine structural modifications for the student learning system architecture. The learning coach can also control data flow to the student learning system. | 2020-09-17 |
20200293898 | SYSTEM AND METHOD FOR GENERATING AND OPTIMIZING ARTIFICIAL INTELLIGENCE MODELS - A computer implemented method for generating and optimizing an artificial intelligence model, the method comprising receiving input data and labels, and performing data validation to generate a configuration file, and splitting the data to generate split data for training and evaluation; performing training and evaluation of the split data to determine an error level, and based on the error level, performing an action, wherein the action comprises at least one of modifying the configuration file and tuning the artificial intelligence model automatically; generating the artificial intelligence model based on the training, the evaluation and the tuning; and serving the model for production. | 2020-09-17 |
20200293899 | Using Hierarchical Representations for Neural Network Architecture Searching - A computer-implemented method for automatically determining a neural network architecture represents a neural network architecture as a data structure defining a hierarchical set of directed acyclic graphs in multiple levels. Each graph has an input, an output, and a plurality of nodes between the input and the output. At each level, a corresponding set of the nodes are connected pairwise by directed edges which indicate operations performed on outputs of one node to generate an input to another node. Each level is associated with a corresponding set of operations. At a lowest level, the operations associated with each edge are selected from a set of primitive operations. The method includes repeatedly generating new sample neural network architectures by modifying the data structure, and evaluating their fitness. Each modification is performed by selecting a level, selecting two nodes at that level, and modifying, removing, or adding an edge between those nodes according to operations associated with lower levels of the hierarchy. | 2020-09-17 |
20200293900 | AUTOMATED DETECTION OF CODE REGRESSIONS FROM TIME-SERIES DATA - In non-limiting examples of the present disclosure, systems, methods and devices for detecting and classifying service issues associated with a cloud-based service are presented. Operational event data for a plurality of operations associated with the cloud-based application service may be monitored. A statistical-based unsupervised machine learning model may be applied to the operational event data. A subset of the operational event data may be tagged as potentially being associated with a code regression, wherein the subset comprises a time series of operational event data. A neural network may be applied to the time series of operational event data, and the time series of operational event data may be flagged for follow-up if the neural network classifies the time series as relating to a positive code regression category. | 2020-09-17 |
20200293901 | ADVERSARIAL INPUT GENERATION USING VARIATIONAL AUTOENCODER - A computer-implemented method, computer program product, and computer processing system are provided for generating an adversarial input. The method includes reducing, by a Conditional Variational Encoder, a dimensionality of each of inputs to a target algorithm to obtain a set of latent variables. The method further includes separately training, by a processor, (i) a successful predictor with a first subset of the latent variables as a first input for which the target algorithm succeeds and (ii) an unsuccessful predictor with a second subset of the latent variables as a second input for which the target algorithm fails. Both the successful and the unsuccessful predictors predict outputs of the target algorithm. The method also includes sampling, by the processor, an input that is likely to make the target algorithm fail as the adversarial input by using a likelihood of the successful predictor and the unsuccessful predictor. | 2020-09-17 |
20200293902 | SYSTEMS AND METHODS FOR MUTUAL LEARNING FOR TOPIC DISCOVERY AND WORD EMBEDDING - Described herein are embodiments for systems and methods for mutual machine learning with global topic discovery and local word embedding. Both topic modeling and word embedding map documents onto a low-dimensional space, with the former clustering words into a global topic space and the latter mapping words into a local continuous embedding space. Embodiments of the Topic Modeling and Sparse Autoencoder (TMSA) framework unify these two complementary patterns by constructing a mutual learning mechanism between word co-occurrence based topic modeling and an autoencoder. In embodiments, word topics generated with topic modeling are passed into the autoencoder to impose topic sparsity for the autoencoder to learn topic-relevant word representations. In return, word embeddings learned by the autoencoder are sent back to topic modeling to improve the quality of topic generation. Performance evaluation on various datasets demonstrates the effectiveness of the disclosed TMSA framework in discovering topics and embedding words. | 2020-09-17 |
20200293903 | METHOD FOR OBJECT DETECTION USING KNOWLEDGE DISTILLATION - A method that may include training a student ODNN to mimic a teacher ODNN. The training may include calculating a teacher student detection loss that is based on a pre-bounding-box output of the teacher ODNN. The pre-bounding-box output of the teacher ODNN is a function of pre-bounding-box outputs of different ODNNs that belong to the teacher ODNN. The method may also include detecting one or more objects in an image, by feeding the image to the trained student ODNN; outputting by the trained student ODNN a student pre-bounding-box output; and calculating one or more bounding boxes based on the student pre-bounding-box output. | 2020-09-17 |
20200293904 | METHOD FOR OBJECT DETECTION USING KNOWLEDGE DISTILLATION - A method that may include training a student ODNN to mimic a teacher ODNN. The training may include calculating a teacher student detection loss that is based on a pre-bounding-box output of the teacher ODNN. The pre-bounding-box output of the teacher ODNN is a function of pre-bounding-box outputs of different ODNNs that belong to the teacher ODNN. The method may also include detecting one or more objects in an image, by feeding the image to the trained student ODNN; outputting by the trained student ODNN a student pre-bounding-box output; and calculating one or more bounding boxes based on the student pre-bounding-box output. | 2020-09-17 |
20200293905 | METHOD AND APPARATUS FOR GENERATING NEURAL NETWORK - Embodiments of the present disclosure relate to a method and apparatus for generating a neural network. The method includes: acquiring a target neural network, the target neural network corresponding to a preset association relationship, and being configured to use two entity vectors corresponding to two entities in a target knowledge graph as an input, to determine whether an association relationship between the two entities corresponding to the inputted two entity vectors is the preset association relationship, the target neural network comprising a relational tensor predetermined for the preset association relationship; converting the relational tensor in the target neural network into a product of a target number of relationship matrices, and generating a candidate neural network comprising the target number of converted relationship matrices; and generating a resulting neural network using the candidate neural network. | 2020-09-17 |
20200293906 | DEEP FOREST MODEL DEVELOPMENT AND TRAINING - Automated development and training of deep forest models for analyzing data by growing a random forest of decision trees using a data set, determining out-of-bag (OOB) predictions for the forest, appending the OOB predictions to the data set, growing an additional forest using the data set including the appended OOB predictions, and combining the output of the additional forest, then utilizing the model to classify data outside the training data set. | 2020-09-17 |
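The layering scheme above (grow a forest, append its OOB predictions as a new feature, grow another forest) can be sketched with a toy bagged-stump "forest" in pure Python. Everything here is a stand-in: real embodiments would use full decision trees, and the stump learner and tiny data set are invented for illustration:

```python
import random
random.seed(0)

def fit_stump(X, y):
    """Toy base learner: pick the (feature, threshold) single split with
    the fewest errors, predicting 1 above the threshold."""
    best, best_err = (0, X[0][0]), len(y) + 1
    for f in range(len(X[0])):
        for t in (row[f] for row in X):
            err = sum((1 if row[f] > t else 0) != lbl
                      for row, lbl in zip(X, y))
            if err < best_err:
                best, best_err = (f, t), err
    return best

def predict(stump, row):
    f, t = stump
    return 1 if row[f] > t else 0

def grow_forest(X, y, n_trees=25):
    """Bagged stumps; remember each tree's bootstrap indices for OOB use."""
    forest, n = [], len(X)
    for _ in range(n_trees):
        idx = [random.randrange(n) for _ in range(n)]
        stump = fit_stump([X[i] for i in idx], [y[i] for i in idx])
        forest.append((stump, set(idx)))
    return forest

def oob_predictions(forest, X):
    """Per-sample mean vote of the trees that did NOT train on the sample."""
    out = []
    for i, row in enumerate(X):
        votes = [predict(s, row) for s, seen in forest if i not in seen]
        out.append(sum(votes) / len(votes) if votes else 0.5)
    return out

X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]

layer1 = grow_forest(X, y)                  # first forest on the raw data
oob = oob_predictions(layer1, X)            # OOB predictions for each row
X2 = [row + [p] for row, p in zip(X, oob)]  # append OOB predictions
layer2 = grow_forest(X2, y)                 # additional forest on augmented data

def classify(row):
    """Combine the layers to classify data outside the training set."""
    v1 = sum(predict(s, row) for s, _ in layer1) / len(layer1)
    v2 = sum(predict(s, row + [v1]) for s, _ in layer2) / len(layer2)
    return 1 if v2 >= 0.5 else 0
```

The same pattern extends to more layers by repeatedly appending each layer's OOB predictions before growing the next forest.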
20200293907 | LEARNING DEVICE AND LEARNING METHOD - A learning device includes a data storage unit configured to store learning data for learning a decision tree; a learning unit configured to determine whether to cause learning data stored in the data storage unit to branch to one node or to the other node of lower nodes of a node based on a branch condition for the node of the decision tree; and a first buffer unit and a second buffer unit configured to buffer learning data determined to branch to the one node and the other node, respectively, by the learning unit up to capacity determined in advance. The first buffer unit and the second buffer unit are configured to, in response to buffering learning data up to the capacity determined in advance, write the learning data into continuous addresses of the data storage unit for each predetermined block. | 2020-09-17 |
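The buffering scheme above (branch each record to one of two buffers, and flush a whole block into contiguous storage addresses once a buffer reaches its preset capacity) can be illustrated with a small sketch; the class name, the list standing in for the data storage unit, and the branch condition are all invented for illustration:

```python
class BlockBuffer:
    """Buffer learning data for one branch of a node; when `capacity`
    records have accumulated, write them as one contiguous block."""
    def __init__(self, storage, capacity):
        self.storage, self.capacity, self.buf = storage, capacity, []

    def push(self, record):
        self.buf.append(record)
        if len(self.buf) == self.capacity:
            self.storage.extend(self.buf)   # contiguous block write
            self.buf.clear()

storage = []                                # stand-in for the data storage unit
left, right = BlockBuffer(storage, 2), BlockBuffer(storage, 2)

for x in [5, 1, 7, 2, 8, 3]:
    (left if x < 4 else right).push(x)      # branch condition: x < 4
```

Records land in storage block by block rather than one at a time, which is the point of the two-buffer arrangement: writes to the data storage unit stay contiguous per block.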
20200293908 | PERFORMING DATA PROCESSING BASED ON DECISION TREE - Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage media, for data processing. One of the methods includes: determining, by a first computing device based on service data possessed by the first computing device, whether a leaf value of a leaf node of a decision tree at least possibly matches information included in the service data; in response to determining that the leaf value at least possibly matches the information included in the service data, determining a first data selection value corresponding to the leaf node; and performing oblivious transfer with a second computing device that processes a decision tree model of the decision tree by using the first data selection value as an input to obtain first target data for determining a prediction result of the decision forest. | 2020-09-17 |
20200293909 | FACTOR ANALYSIS DEVICE, FACTOR ANALYSIS METHOD, AND STORAGE MEDIUM ON WHICH PROGRAM IS STORED - Provided is a factor analysis device capable of obtaining more useful knowledge relating to the degree of influence of pieces of data. A factor analysis device according to one embodiment of the present invention is provided with: a classification unit for classifying a type of data into a first group or a second group; and an influence degree calculation unit for calculating, as the degree of influence on target data, the degree of influence of the data of the type classified into the second group on the data of the first group type. | 2020-09-17 |
20200293910 | SYSTEM FOR AUTOMATIC DEDUCTION AND USE OF PREDICTION MODEL STRUCTURE FOR A SEQUENTIAL PROCESS DATASET - A sub-process sequence is identified from a temporal dataset. Based on time information, predictors are categorized as being available or not available during time periods. The predictors are used to make predictions of quantities that will occur in a future time period. The predictors are grouped into groups of a sequence of sub-processes, each including a grouping of one or more of the predictors. Information is output that allows a human being to modify the groups. The groups are finalized, responsive to any modifications. Prediction models are extracted based on dependencies between groups and sub-processes. A final prediction model is determined based on a prediction model from the prediction models that best meets criteria. A dependency graph is generated based on the final prediction model. Information is output to display the final dependency graph for use by a user to adjust or not adjust elements of the sequential process. | 2020-09-17 |
20200293911 | PERFORMING DATA PROCESSING BASED ON DECISION TREE - Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage media, for data processing. One of the methods includes: determining target location identifiers identifying leaf nodes of a decision tree in a decision forest based on parameter information of the decision tree; performing oblivious transfer with a second computing device by using the target location identifiers as input; and selecting a target cyphertext from cyphertexts of leaf values corresponding to leaf nodes of the decision tree, wherein the cyphertexts are generated by encrypting the leaf values based on a random number and are used by the second computing device to perform the oblivious transfer. | 2020-09-17 |
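The two decision-tree applications above (20200293908 and 20200293911) describe evaluating a tree across two parties using encrypted leaf values and oblivious transfer. A toy, non-cryptographic illustration of the ciphertext mechanics only: XOR with one-time random keys stands in for the encryption, and the oblivious-transfer exchange itself is merely simulated by handing over one key (a real protocol would guarantee the sender never learns which one):

```python
import secrets

# Leaf values of the second party's decision tree, each encrypted with
# its own random key (XOR as a stand-in cipher).
leaf_values = [7, 13, 42, 99]
keys = [secrets.randbits(32) for _ in leaf_values]
cyphertexts = [v ^ k for v, k in zip(leaf_values, keys)]

# Simulated outcome of 1-of-n oblivious transfer: the first party obtains
# exactly one key, chosen by its target location identifier.
target = 2                    # location identifier of the matching leaf
received_key = keys[target]   # what the OT protocol would deliver
prediction = cyphertexts[target] ^ received_key
```

The first party recovers only the leaf value at its target location; the other ciphertexts stay unreadable without their keys, which is the property the encrypted-leaf construction is after.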
20200293912 | COMPUTER-IMPLEMENTED DECISION MANAGEMENT SYSTEMS AND METHODS - Computer-implemented decision management systems and methods are provided. The method comprises obtaining information associated with factors usable for making a decision from among a plurality of inter-related decisions represented by a plurality of corresponding nodes. The computing environment provides access to resources that store information about relationships among the plurality of nodes. A relationship may be presentable as an edge connecting at least two nodes from among the plurality of nodes. The strength of the relationship between the at least two nodes is measurable and definable based on associations between the inter-related decisions. A value may be determined that provides a measure for the strength of the relationship between the at least two nodes based on the information associated with the factors and the information about the relationships among the plurality of nodes. | 2020-09-17 |
20200293913 | INFORMATION DELIVERY PLATFORM - An information delivery system allows for the organization and presentation of information to users. Illustratively, aspects of the disclosure correspond to a system and method which provides for interactive information delivery, or interactive learning. More particularly, a platform is disclosed which provides an independent interactive interface for content delivery and e-learning and for creation of teaching or learning presentations. | 2020-09-17 |
20200293914 | NATURAL LANGUAGE GENERATION BY AN EDGE COMPUTING DEVICE - Systems and methods for natural language generation by an edge computing device are disclosed. In one embodiment, a method comprises: receiving, by an edge computing device, event data from an edge event; determining, by the edge computing device, that a network connection to a cloud server is not available; extracting, by the edge computing device, features of the event data; predicting, by a local neural network of the edge computing device, an action for the edge computing device to take based on the features of the event data, wherein the action is associated with a confidence level; and determining, by the edge computing device, whether the confidence level meets a predetermined threshold value. | 2020-09-17 |
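The control flow in the abstract above (prefer the cloud, fall back to a local prediction, accept it only if its confidence meets a threshold) can be sketched as follows. The `local_model` stand-in, the action names, and the scoring rule are all hypothetical:

```python
def local_model(features):
    """Stand-in for the edge device's local neural network: returns a
    predicted action and an associated confidence level (hypothetical)."""
    score = sum(features) / len(features)
    action = "send_alert" if score > 0.5 else "log_only"
    confidence = abs(score - 0.5) * 2      # 0 = unsure, 1 = certain
    return action, confidence

def handle_edge_event(features, cloud_available, threshold=0.8):
    """Mirror the abstract's flow: defer to the cloud when reachable,
    else predict locally and check the confidence against a threshold."""
    if cloud_available:
        return "defer_to_cloud"
    action, confidence = local_model(features)
    return action if confidence >= threshold else "fallback_template"

result = handle_edge_event([0.9, 1.0, 1.0], cloud_available=False)
```

A low-confidence local prediction drops to a safe fallback (here a template response) rather than acting on an uncertain inference.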
20200293915 | DYNAMICALLY UPDATEABLE RULES ENGINE - A system includes a plurality of sensors; a dynamically updateable rules engine coupled to the plurality of sensors; a data collection management module coupled to the dynamically updateable rules engine and the plurality of sensors; and a data storage and analysis inference module coupled to the data collection management module, the dynamically updateable rules engine and the plurality of sensors. Data from the plurality of sensors that is received by the dynamically updateable rules engine is transformed by the dynamically updateable rules engine by selectively executing rules based on conditions or events. The dynamically updateable rules engine is updated by the data storage and analysis inference module. | 2020-09-17 |
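The core of the abstract above is a rules engine that transforms sensor data by selectively executing rules on conditions, and that can be updated at runtime. A minimal sketch with invented rule and field names (the patent's engine sits in a larger sensor/inference pipeline not shown here):

```python
class RulesEngine:
    """Tiny condition/action rules engine whose rule set can be replaced
    while the system is running (the 'dynamically updateable' part)."""
    def __init__(self):
        self.rules = []                      # (name, condition, action)

    def add_rule(self, name, condition, action):
        self.rules.append((name, condition, action))

    def update_rule(self, name, condition, action):
        self.rules = [(n, c, a) for n, c, a in self.rules if n != name]
        self.add_rule(name, condition, action)

    def process(self, reading):
        out = dict(reading)
        for _, cond, act in self.rules:      # selectively execute rules
            if cond(out):
                out = act(out)
        return out

engine = RulesEngine()
engine.add_rule(
    "normalize_temp",
    lambda r: r.get("unit") == "F",
    lambda r: {**r, "value": (r["value"] - 32) * 5 / 9, "unit": "C"})
reading = engine.process({"sensor": "temp1", "value": 212.0, "unit": "F"})

# Update the rule in place, e.g. after the inference module decides
# Kelvin is the better normalization target.
engine.update_rule(
    "normalize_temp",
    lambda r: r.get("unit") == "F",
    lambda r: {**r, "value": (r["value"] - 32) * 5 / 9 + 273.15, "unit": "K"})
reading2 = engine.process({"sensor": "temp1", "value": 32.0, "unit": "F"})
```

The same `process` call yields different transformations before and after the update, which is the behavior the data storage and analysis inference module would drive in the described system.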
20200293916 | DISTRIBUTED SYSTEM GENERATING RULE COMPILER ENGINE APPARATUSES, METHODS, SYSTEMS AND MEDIA - An output rule specified via a distributed system execution request data structure for a requested calculation is determined, and a current rule is initialized to the output rule. A rule lookup table data structure is queried to determine a set of matching rules, corresponding to the current rule. The best matching rule is selected. A logical dependency graph (LDG) data structure is generated by adding LDG nodes and LDG edges corresponding to the best matching rule, precedent rules of the best matching rule, and precedent rules of each precedent rule. An execution complexity gauge value and a set of distributed worker processes are determined. The LDG data structure is divided into a set of subgraphs. Each worker process is initialized with the subgraph assigned to it. Execution of the requested calculation is coordinated and a computation result of the LDG node corresponding to the output rule is obtained. | 2020-09-17 |
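The rule-dependency machinery above (an output rule, its precedent rules, and a logical dependency graph evaluated to a result) can be sketched in miniature. Rule names and values are invented, and the subgraph partitioning across distributed worker processes is omitted; this shows only the LDG resolution step:

```python
# Each rule: (names of its precedent rules, function of their values).
rules = {
    "price":    ([], lambda: 2.0),
    "quantity": ([], lambda: 5.0),
    "notional": (["price", "quantity"], lambda p, q: p * q),
    "output":   (["notional", "price"], lambda n, p: n + p),
}

def evaluate(rules, target, cache=None):
    """Depth-first resolution of precedents with memoization; equivalent
    to walking the logical dependency graph in topological order."""
    if cache is None:
        cache = {}
    if target not in cache:
        precedents, fn = rules[target]
        cache[target] = fn(*(evaluate(rules, p, cache) for p in precedents))
    return cache[target]

result = evaluate(rules, "output")   # notional = 10.0, output = 12.0
```

In the described system the graph nodes would be assigned to worker processes by an execution-complexity gauge; the recursive cache here plays the role of the coordinated computation results.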
20200293917 | ENHANCEMENT OF MACHINE LEARNING-BASED ANOMALY DETECTION USING KNOWLEDGE GRAPHS - Technologies are disclosed herein for enhancing machine learning (“ML”)-based anomaly detection systems using knowledge graphs. The disclosed technologies generate a connected graph that defines a topology of infrastructure components along with associated alarms generated by a ML component. The ML component generates the alarms by applying ML techniques to real-time data metrics generated by the infrastructure components. Scores are computed for the infrastructure components based upon the connected graph. A root cause of an anomaly affecting infrastructure components can then be identified based upon the scores, and remedial action can be taken to address the root cause of the anomaly. A user interface is also provided for visualizing aspects of the connected graph. | 2020-09-17 |
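The scoring idea above (score each infrastructure component using the connected graph plus the ML-generated alarms, then pick the top score as the root-cause candidate) can be illustrated with one simple scoring rule. The topology, the alarm set, and the "count alarms in the downstream set" score are all invented stand-ins for whatever scoring the disclosed system uses:

```python
# Toy topology (edges point downstream) and the set of components the
# ML component has raised alarms on.
edges = {"db": ["api"], "api": ["web1", "web2"], "web1": [], "web2": []}
alarms = {"db", "web1", "web2"}

def reach(graph, node):
    """All components downstream of `node`, including itself."""
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(graph.get(n, []))
    return seen

# Score each component by how many alarms fall inside its downstream set;
# the top-scoring component is the candidate root cause of the anomaly.
scores = {n: len(reach(edges, n) & alarms) for n in edges}
root_cause = max(scores, key=scores.get)
```

Here `db` wins even though `api` is healthy, because the alarms on its transitive dependents concentrate above it in the graph; that is the kind of inference the connected-graph scoring enables over looking at alarms in isolation.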
20200293918 | Augmented Intelligence System Assurance Engine - A method, system and computer-readable storage medium for cognitive information processing. The cognitive information processing includes: receiving data from a plurality of data sources; processing the data from the plurality of data sources to provide cognitively processed insights via an augmented intelligence system, the augmented intelligence system executing on a hardware processor of an information processing system, the augmented intelligence system and the information processing system providing a cognitive computing function; performing an augmented intelligence assurance operation, the augmented intelligence assurance operation ensuring augmented intelligence performance of the cognitive computing function; and, providing the cognitively processed insights to a destination, the destination comprising a cognitive application, the cognitive application enabling a user to interact with the cognitive insights. | 2020-09-17 |
20200293919 | Augmented Intelligence System Impartiality Assessment Engine - A method, system and computer-readable storage medium for performing a cognitive information processing operation. The cognitive information processing operation includes: receiving data from a plurality of data sources; processing the data from the plurality of data sources to provide cognitively processed insights via an augmented intelligence system, the augmented intelligence system executing on a hardware processor of an information processing system, the augmented intelligence system and the information processing system providing a cognitive computing function; performing an impartiality assessment operation via an impartiality assessment engine, the impartiality assessment operation detecting a presence of bias in an outcome of the cognitive computing function; and, providing the cognitively processed insights to a destination, the destination comprising a cognitive application, the cognitive application enabling a user to interact with the cognitive insights. | 2020-09-17 |
20200293920 | RAPID PREDICTIVE ANALYSIS OF VERY LARGE DATA SETS USING THE DISTRIBUTED COMPUTATIONAL GRAPH USING CONFIGURABLE ARRANGEMENT OF PROCESSING COMPONENTS - A system for predictive analysis of very large data sets using a distributed computational graph that intelligently combines processing of a current data stream with the ability to retrieve relevant stored data in such a way that conclusions or actions may be drawn in a predictive manner. The system has a pipeline construction module that allows a user to construct a streaming analytic workflow using modular building blocks, each of which represents either an environmental orchestration stage or a data processing stage of a streaming analytic workflow, and has a pipeline processing module that receives a data stream and constructs a directed computational graph by processing the data stream through the streaming analytic workflow. The directed computational graph is used to analyze the data stream. | 2020-09-17 |
20200293921 | VISUAL QUESTION ANSWERING MODEL, ELECTRONIC DEVICE AND STORAGE MEDIUM - Embodiments of the present disclosure disclose a visual question answering model, an electronic device and a storage medium. The visual question answering model includes an image encoder and a text encoder. The text encoder is configured to perform pooling on a word vector sequence of a question text inputted, so as to extract a semantic representation vector of the question text; and the image encoder is configured to extract an image feature of a given image in combination with the semantic representation vector. | 2020-09-17 |
20200293922 | APPARATUS AND METHOD FOR MULTIVARIATE PREDICTION OF CONTACT CENTER METRICS USING MACHINE LEARNING - In a predictor device, a method for predicting a metric of a contact center includes receiving contact center operational data associated with a time duration; training a set of algorithms and their available hyperparameters with the contact center operational data to generate a set of data models; generating a score associated with each data model of the set of data models, the score quantifying a performance of each algorithm and its available hyperparameters on the contact center operational data; identifying the data model having the largest score as a best learning model for the time duration; and generating a contact center metric prediction based on the best learning model for the time duration. | 2020-09-17 |
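The selection loop above (train a set of algorithms with their hyperparameters, score each resulting model, keep the best for the time duration, predict with it) can be sketched with toy forecasters. The "algorithms" here are a last-value rule and moving averages with the window as the hyperparameter, and the score is negative one-step-ahead squared error; real embodiments would use ML models and richer contact-center metrics:

```python
history = [10, 12, 11, 13, 12, 14]      # e.g. hourly call volumes

def moving_average(window):
    """Forecaster family; `window` is its hyperparameter."""
    return lambda series: sum(series[-window:]) / window

def backtest(model, series):
    """Score = negative mean squared one-step-ahead error."""
    errs = [(model(series[:i]) - series[i]) ** 2
            for i in range(3, len(series))]
    return -sum(errs) / len(errs)

candidates = [("last", lambda s: s[-1])] + \
             [("ma%d" % w, moving_average(w)) for w in (2, 3)]

scored = [(backtest(m, history), name, m) for name, m in candidates]
best_score, best_name, best_model = max(scored)
prediction = best_model(history)        # forecast for the next period
```

Each candidate is scored on the same operational data and the largest score wins, mirroring the abstract's "best learning model for the time duration".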
20200293923 | PREDICTIVE RFM SEGMENTATION - A system and a method are disclosed for adjusting communication settings based on user segmentation. An activity-based communication management system retrieves behavioral and demographic data of at least one user. The system inputs the behavioral data and the demographic data into machine learning models. For each of the machine learning models, the system receives a respective activity parameter characterizing a predicted activity occurring within a time window. The system determines, based on the received activity parameters, a category to which the behavioral data and demographic data belong. The system subsequently adjusts a plurality of communication settings based on the determined category. The activity-based communication management system may provide user segmentation using both empirical activity parameters (e.g., historical behavioral data) and predicted activity parameters. | 2020-09-17 |
20200293924 | GBDT MODEL FEATURE INTERPRETATION METHOD AND APPARATUS - Implementations of the present specification disclose methods, devices, and apparatuses for determining a feature interpretation of a predicted label value of a user generated by a GBDT model. In one aspect, the method includes separately obtaining, from each of a predetermined quantity of decision trees ranked among top decision trees, a leaf node and a score of the leaf node; determining a respective prediction path of each leaf node; obtaining, for each parent node on each prediction path, a split feature and a score of the parent node; determining, for each child node on each prediction path, a feature corresponding to the child node and a local increment of the feature on the child node; obtaining a collection of features respectively corresponding to the child nodes; and obtaining a respective measure of relevance between the feature corresponding to the at least one child node and the predicted label value. | 2020-09-17 |
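The per-node attribution above (walk each prediction path and credit the score change at every child to the parent's split feature, the "local increment") can be shown on one toy tree. The tree, feature names, and node scores are invented; a GBDT interpretation would sum these increments over the top-ranked trees:

```python
# A tiny regression tree: each internal node carries its split feature,
# threshold, and the mean prediction ("score") of samples reaching it.
tree = {
    "score": 0.5, "feature": "age", "threshold": 30,
    "left":  {"score": 0.2, "feature": "income", "threshold": 50,
              "left":  {"score": 0.1},
              "right": {"score": 0.4}},
    "right": {"score": 0.8},
}

def local_increments(tree, x):
    """Follow x's prediction path; attribute each child-minus-parent
    score change to the parent node's split feature."""
    contrib, node = {}, tree
    while "feature" in node:
        f = node["feature"]
        child = node["left"] if x[f] <= node["threshold"] else node["right"]
        contrib[f] = contrib.get(f, 0.0) + child["score"] - node["score"]
        node = child
    return node["score"], contrib

leaf_score, contrib = local_increments(tree, {"age": 25, "income": 60})
```

By construction the root score plus all local increments reconstructs the leaf score, so the increments decompose the tree's contribution to the predicted label value feature by feature.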
20200293925 | DISTRIBUTED LEARNING MODEL FOR FOG COMPUTING - The disclosed technology relates to a process for metered training of fog nodes within the fog layer. The metered training allows the fog nodes to be continually trained within the fog layer without the need for the cloud. Furthermore, the metered training allows the fog node to operate normally as the training is performed only when spare resources are available at the fog node. The disclosed technology also relates to a process of sharing better trained machine learning models of a fog node with other similar fog nodes thereby speeding up the training process for other fog nodes within the fog layer. | 2020-09-17 |
20200293926 | Infinite Edge-Computing Oracle - The edge-computing and edge-exchanging individual and individual groups of mobile*_n-to-mobile*_n communication devices for rendering the Infinite edge-computing and edge-exchanging Oracle possessing abiotic thinking and reasoning for rendering definitive and specific continuum*_n of vital goods, products, services and resources in most industries. | 2020-09-17 |
20200293927 | Augmented Intelligence System Robustness Assessment Engine - A method, system and computer-readable storage medium for performing a cognitive information processing operation. The cognitive information processing operation includes: receiving data from a plurality of data sources; processing the data from the plurality of data sources to provide cognitively processed insights via an augmented intelligence system, the augmented intelligence system executing on a hardware processor of an information processing system, the augmented intelligence system and the information processing system providing a cognitive computing function; performing a robustness assessment operation via a robustness assessment engine, the robustness assessment operation assessing robustness of the cognitive computing function; and, providing the cognitively processed insights to a destination, the destination comprising a cognitive application, the cognitive application enabling a user to interact with the cognitive insights. | 2020-09-17 |
20200293928 | Cognitive Agent Composition Platform - A system, method, and computer-readable medium are disclosed for cognitive information processing. The cognitive information processing includes receiving data from a plurality of data sources; processing the data from the plurality of data sources via an augmented intelligence system, the augmented intelligence system executing on a hardware processor of an information processing system, the augmented intelligence system and the information processing system providing a cognitive computing function, the cognitive computing function comprising a cognitive agent, the cognitive agent being composed via a cognitive agent composition platform, the cognitive agent performing a task, the cognitive agent performing the task with non-specific guidance from a user, the cognitive agent learning from each interaction with the data and the user; and, using the cognitive agent to generate a cognitive insight, the cognitive agent comprising a deployable module, the deployable module comprising logic, data and models for implementing an augmented intelligence operation. | 2020-09-17 |
20200293929 | INFERENCE SYSTEM, INFERENCE METHOD, AND RECORDING MEDIUM - An inference method according to the present invention in an inference system inferring a probability that an ending state holds based on a starting state and a rule set, the method includes: when a rule set derived by excluding one rule from rules constituting a first rule set is set as a second rule set, a probability that the ending state holds based on the starting state and the first rule set is set as a first inference result, and a probability that the ending state holds based on the starting state and the second rule set is set as a second inference result, calculating an importance being an indicator indicating magnitude of a difference between the first inference result and the second inference result; and outputting the rule and the importance of the rule, being associated with each other for each of the excluded rule. | 2020-09-17 |