Entries |
Document | Title | Date |
20080222064 | Processes and Systems for Automated Collective Intelligence - The present invention relates to the field of collective intelligence. More specifically, it relates to the collaborative acquisition of knowledge, the relationships among that knowledge, and the application of the acquired knowledge and relationships to solving problems. The present invention presents an interface to a community of users that will create nodes and relationships in an artificial neural network and then weight each node and relationship through votes from one or more users. | 09-11-2008 |
20080235169 | Protective, Compact Cover for Topographic Maps and Other Large-Format Documents - This folding, compact document cover is an apparatus that practically and conveniently protects a specially folded U.S. Geological Survey (USGS) topographic map or virtually any other large-format document with its French-folded binding and easy to handle protective element that is capable of allowing the reader to flip between quadrants without the hassle of continued folding and refolding or rolling and unrolling. | 09-25-2008 |
20080243733 | RATING MEDIA ITEM RECOMMENDATIONS USING RECOMMENDATION PATHS AND/OR MEDIA ITEM USAGE - A media item recommendation rating system and method. A recommendation rating for media items is established and dynamically updated in response to media items being recommended to other users. A recommendation server or other device receives a report of a media item recommendation and updates a recommendation rating in response. The recommendation rating may also be updated based on how often a recommended media item is used or played. Thus, a media item's recommendation rating is affected by events relating to its recommendation, as opposed to simple play-based ratings that are updated on any play action regardless of whether it is related to a recommendation or not. Simple play-based ratings do not distinguish between ordinary usages or plays and those resulting from recommendations. Recommendation of a media item to another user may be a better indicator of a given media item's likeability or popularity, since a recommendation is an endorsement by another user. | 10-02-2008 |
20080243734 | Method for computer-assisted processing of measured values detected in a sensor network - A method is described for computer-assisted processing of measured values detected in a sensor network, the sensor network comprising a plurality of sensor nodes that each feature one or more sensors for detecting the measured values, with the measured values of a number of adjacent sensor nodes being known in a given sensor node. The inventive method maps a multi-area neural network onto a corresponding sensor network, which makes it possible, with the aid of information from adjacent sensors, to guarantee detection of a global situation at the location of a sensor node even when that node's measurements are incorrect or missing. A sensor network operated with such a method is therefore more robust against the failure of a few sensors, since a missing measured value can be estimated in a suitable way and the unavailable measurement replaced by the estimate. The individual sensors of the sensor nodes can thus be of simpler construction while the sensor network retains the same level of robustness, since sensor failures have less effect on the functional integrity of the sensor network. | 10-02-2008 |
20080262989 | MULTIPLEX DATA COLLECTION AND ANALYSIS IN BIOANALYTE DETECTION - Method and device to collect multiplex data simultaneously in analyte detection and analyze the data by experimentally trained software (machine-learning) is disclosed. Various ways (magnetic particles and microcoils) are disclosed to collect multiple reporter (tag) signals. Multiplex detection can increase the biomolecule analysis efficiency by using small sample size and saving assay reagents and time. Machine learning and data analysis schemes are also disclosed. Multiple affinity binding partners, each labeled by a unique reporter, are contacted with a sample and a single spectrum is taken to detect multiple reporter signals. The spectrum is deconvoluted by experimentally trained software to identify multiple analytes. | 10-23-2008 |
20080270333 | SYSTEM AND METHOD FOR DETERMINING SEMANTICALLY RELATED TERMS USING AN ACTIVE LEARNING FRAMEWORK - Systems and methods for determining semantically related terms using an active learning framework such as Transductive Experimental Design are disclosed. Generally, to enhance a keyword suggestion tool, an active learning module trains a model to predict whether a term is relevant to a user. The model is then used to present the user with terms that have been determined to be relevant based on the model so that an online advertisement service provider may more efficiently provide a user with terms that are semantically related to a seed set. | 10-30-2008 |
20080275828 | Method and system for independently observing and modifying the activity of an actor processor - A system, method, and computer program product for observing and modifying activity in an actor processor are presented. An observer module is provided for observing a physical property of an actor processor. The observer module comprises a property-observing sensor for detecting and sampling a physical property of the actor processor and for generating an observation signal based on the physical property. The observer module further comprises an observer processor coupled with the property-observing sensor for receiving the observation signal, the observer processor operative to generate an observer output signal based on the observation signal. The observer module permits the observer processor to monitor the actor processor in a manner that isolates an instruction set of the observer processor from direct manipulation by means of an instruction set of the actor processor. Observer processors may be used in a recursive manner to provide a completely-coupled observation module. | 11-06-2008 |
20080306892 | MULTIPHASE FLOW METER FOR ELECTRICAL SUBMERSIBLE PUMPS USING ARTIFICIAL NEURAL NETWORKS - A multiphase flow meter used in conjunction with an electrical submersible pump system in a well bore includes sensors to determine and transmit well bore pressure measurements, including tubing and down hole pressure measurements. The multiphase flow meter also includes at least one artificial neural network device to be used for outputting flow characteristics of the well bore. The artificial neural network device is trained to output tubing and downhole flow characteristics responsive to multiphase-flow pressure gradient calculations and pump and reservoir models, combined with standard down-hole pressure, tubing surface pressure readings, and the frequency applied to the electrical submersible pump motor. | 12-11-2008 |
20090119236 | NEURAL NETWORKS WITH LEARNING AND EXPRESSION CAPABILITY - A neural network comprising a plurality of neurons in which any one of the plurality of neurons is able to associate with itself or another neuron in the plurality of neurons via active connections to a further neuron in the plurality of neurons. | 05-07-2009 |
20090157576 | MEASURING AND LIMITING DISTRACTIONS ON COMPUTER DISPLAYS - Techniques are described herein for determining a distractibility measure for an item to be displayed on a display. The distractibility measure for an item is determined based on the individual distractibility measures for one or more of: the static distraction of the item, the onset response of the item, the optic-flow motion of the item, and the change in velocity of objects in the item. Each individual distractibility measure can be further multiplied by a weighting factor which affects the composition of the distractibility measure for the item. The distractibility measure for the item can be further based on the size of the item, how far away the item is from a primary content on the display, and the distractibility measure of the primary content on the display. The distractibility measure for the item can be compared to a maximum level of distractibility for automatically determining whether the item should be displayed on the display. Finally, the techniques described herein can be combined with other techniques which detect specific types of visual content in an item. | 06-18-2009 |
20090157577 | METHOD AND APPARATUS FOR OPTIMIZING MODELS FOR EXTRACTING DOSE AND FOCUS FROM CRITICAL DIMENSION - A method includes defining a reference model of a system having a plurality of terms for modeling data associated with the system. A reference fit error metric is generated for the reference model. A set of evaluation models each having one term different than the reference model is generated. An evaluation fit error metric for each of the evaluation models is generated. The reference model is replaced with a selected evaluation model responsive to the selected evaluation model having an evaluation fit error metric less than the reference fit error metric. The model evaluation is repeated until no evaluation model has an evaluation fit error metric less than the reference fit error metric. The reference model is trained using the data associated with the system, and the trained reference model is employed to determine at least one characteristic of the system. | 06-18-2009 |
20090164395 | MODELING, DETECTING, AND PREDICTING USER BEHAVIOR WITH HIDDEN MARKOV MODELS - Mechanisms model, detect, and predict user behavior as a user navigates the Web. In one embodiment, mechanisms model user behavior using predictive models, such as discrete Markov processes, where the user's behavior transitions between a finite number of states. The user's behavior state may not be directly observable (e.g., a user does not proactively indicate what behavior state he is in). Thus, the behavior state of a user is usually only indirectly observable. Mechanisms use predictive models, such as hidden Markov models, to predict the transitions in the user's behavior states. | 06-25-2009 |
20090164396 | NETWORK ANALYZER - This invention relates to using artificial intelligence for analyzing real-life data collected from an operation system, modeling the collected data to identify characteristics of events, and analyzing the models to derive an optimal solution for maximizing the performance of the operation system. | 06-25-2009 |
20090177601 | STATUS-AWARE PERSONAL INFORMATION MANAGEMENT - Described is a technology by which personal information that comes into a computer system is intelligently managed according to current state data including user presence and/or user attention data. Incoming information is processed against the state data to determine whether corresponding data is to be output, and if so, what output modality or modalities to use. For example, if a user is present and busy, a notification may be blocked or deferred to avoid disturbing the user. Cost analysis may be used to determine the cost of outputting the data. In addition to user state data, the importance of the information, other state data, the cost of converting data to another format for output (e.g., text-to-speech), and/or user preference data, may factor into the decision. The output data may be modified (e.g., audio made louder) based on a current output environment as determined via the state data. | 07-09-2009 |
20090254502 | FEEDBACK SYSTEMS AND METHODS FOR RECOGNIZING PATTERNS - Pattern classification systems and methods are disclosed. The pattern classification systems and methods employ one or more classification networks that can parse multiple patterns simultaneously while providing a continuous feedback about its progress. Pre-synaptic inhibition is employed to inhibit feedback connections to permit more flexible processing. Various additional improvements result in highly robust pattern recognition systems and methods that are suitable for use in research, development, and production. | 10-08-2009 |
20090259606 | DIVERSIFIED, SELF-ORGANIZING MAP SYSTEM AND METHOD - A diversified, self-organizing map (SOM) system and method creates a number of special-purpose SOMs by filtering and training from a SOM Database which contains user preference data entries that include a wide range of fields or attributes of user preferences. Each special-purpose SOM is trained with a filtered subset of user preference data for fields and attributes related to its special purpose. Two or more special-purpose SOMs are harnessed inter-cooperatively together to provide recommendations of preferred items in response to queries. Multiple SOMs can be maintained at different websites and harnessed together through a global SOM interface. The system can function more efficiently than a single large SOM using a monolithic database with single-type data entries of large dimensionality. | 10-15-2009 |
20100017351 | NEURAL NETWORK BASED HERMITE INTERPOLATOR FOR SCATTEROMETRY PARAMETER ESTIMATION - Generation of a meta-model for scatterometry analysis of a sample diffracting structure having unknown parameters. A training set comprising both a spectral signal evaluation and a derivative of the signal with respect to at least one parameter across a parameter space is rigorously computed. A neural network is trained with the training set to provide reference spectral information for a comparison to sample spectral information recorded from the sample diffracting structure. A neural network may be trained with derivative information using an algebraic method wherein a network bias vector is centered over both a primary sampling matrix and an auxiliary sampling matrix. The result of the algebraic method may be used for initializing neural network coefficients for training by optimization of the neural network weights, minimizing a difference between the actual signal and the modeled signal based on an objective function containing both function evaluations and derivatives. | 01-21-2010 |
20100049680 | METHOD FOR PROJECTING WAFER PRODUCT OVERLAY ERROR AND WAFER PRODUCT CRITICAL DIMENSION - A method for projecting wafer product overlay error is disclosed. The method comprises the steps of: (a) sampling equipment overlay error data, equipment condition data, and actual wafer product overlay error data; (b) establishing a neural network, in which the equipment overlay error data and the equipment condition data are the inputs of the neural network, the generated output of the neural network is the projected wafer product overlay error data, and the actual wafer product overlay error data is the target output of the neural network; and (c) setting a mean square error target and training the neural network continuously until the mean square error of the neural network no longer exceeds the mean square error target. A method for projecting wafer product critical dimension is also presented. | 02-25-2010 |
20100185573 | Method and Apparatus for Diagnosing an Allergy of the Upper Respiratory Tract Using a Neural Network - The invention relates to a method and means for performing a diagnosis of a medical condition and, in particular, an allergy associated with the upper respiratory tract, using an artificial neural network. | 07-22-2010 |
20110029470 | SYSTEMS, METHODS, AND APPARATUS FOR RECONSTRUCTION OF 3-D OBJECT MORPHOLOGY, POSITION, ORIENTATION AND TEXTURE USING AN ARRAY OF TACTILE SENSORS - Systems, methods, and apparatus are provided using signals from a set of tactile sensors mounted on a surface to determine the three-dimensional morphology (e.g., size, shape, orientation, and/or position) and texture of objects of arbitrary shape. Analytical, numerical, and/or neural network approaches can be used to interpret the sensory data. | 02-03-2011 |
20110040713 | MEDICAL SYSTEM, APPARATUS AND METHOD - There is provided a method of generating a pulmonary index value of a patient, which includes receiving two or more measured patient parameters, wherein at least one of the measured parameters originates from a pulmonary sensor; and computing the pulmonary index value based on the two or more measured patient parameters. | 02-17-2011 |
20120166374 | ARCHITECTURE, SYSTEM AND METHOD FOR ARTIFICIAL NEURAL NETWORK IMPLEMENTATION - Systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the artificial neural network by providing scalability to neurons and layers. In a particular case, the systems and methods may include a back-propagation subsystem that is configured to scalably adjust weights in the artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements. | 06-28-2012 |
20120233103 | SYSTEM FOR APPLICATION PERSONALIZATION FOR A MOBILE DEVICE - A system for controlling applications of a wireless mobile device includes a server for receiving data related to an adaptive user profile and for controlling operations of applications within the wireless mobile device. An adaptive neural/fuzzy logic control application implemented within the network server generates the adaptive user profile responsive to the received data. The adaptive user profile controls operations of the applications within the wireless mobile device and changes in real time responsive to the received data. | 09-13-2012 |
20120330869 | Mental Model Elicitation Device (MMED) Methods and Apparatus - A mental-model elicitation process and apparatus, called the Mental-Model Elicitation Device (MMED) is described. The MMED is used to give rise to more effective end-user mental-modeling activities that require executive function and working memory functionality. The method and apparatus is visual analysis based, allowing visual and other sensory representations to be given to thoughts, attitudes, and interpretations of a user about a given visualization of a mental-model, or aggregations of such visualizations and their respective blending. Other configurations of the apparatus and steps of the process may be created without departing from the spirit of the invention as disclosed. | 12-27-2012 |
20130085972 | METHOD FOR ACQUIRING PROCESS PARAMETERS FOR A FILM WITH A TARGET TRANSMITTANCE - In a method for acquiring process parameters for a film, a computer divides parameter sets into a training data group and a test data group. Then, the computer inputs the training data group to a neural network (NN) so as to obtain the relationship between the parameter sets of the training data group and transmittances, and uses the test data group to estimate the accuracy of the NN. Further, the computer modifies the NN until an error value of estimated parameters, which are acquired by the NN according to the obtained relationship, is smaller than a predetermined value, and uses the NN to acquire practical parameters corresponding to a target transmittance when the error value is smaller than the predetermined value. | 04-04-2013 |
20130103626 | METHOD AND APPARATUS FOR NEURAL LEARNING OF NATURAL MULTI-SPIKE TRAINS IN SPIKING NEURAL NETWORKS - Certain aspects of the present disclosure support a technique for neural learning of natural multi-spike trains in spiking neural networks. A synaptic weight can be adapted depending on a resource associated with the synapse, which can be depleted by weight change and can recover over time. In one aspect of the present disclosure, the weight adaptation may depend on a time since the last significant weight change. | 04-25-2013 |
20130159229 | MULTI-MODAL NEURAL NETWORK FOR UNIVERSAL, ONLINE LEARNING - In one embodiment, the present invention provides a neural network comprising multiple modalities. Each modality comprises multiple neurons. The neural network further comprises an interconnection lattice for cross-associating signaling between the neurons in different modalities. The interconnection lattice includes a plurality of perception neuron populations along a number of bottom-up signaling pathways, and a plurality of action neuron populations along a number of top-down signaling pathways. Each perception neuron along a bottom-up signaling pathway has a corresponding action neuron along a reciprocal top-down signaling pathway. An input neuron population configured to receive sensory input drives perception neurons along a number of bottom-up signaling pathways. A first set of perception neurons along bottom-up signaling pathways drive a first set of action neurons along top-down signaling pathways. Action neurons along a number of top-down signaling pathways drive an output neuron population configured to generate motor output. | 06-20-2013 |
20130166484 | SYSTEMS, METHODS, AND APPARATUS FOR 3-D SURFACE MAPPING, COMPLIANCE MAPPING, AND SPATIAL REGISTRATION WITH AN ARRAY OF CANTILEVERED TACTILE HAIR OR WHISKER SENSORS - Systems, methods, and apparatus are provided using signals from a set of tactile sensors mounted on a surface to determine a surface topography. An example method includes receiving a set of moment and force input data from one or more identified topographies. The example method includes using a neural network to receive input from a training data set based on the set of moment and force input data from the one or more identified topographies. Network weights to be used by the neural network to produce the training data set are modified via an evolutionary algorithm that tests vectors of candidate network weights. The example method includes receiving a moment and force input from a test object surface and reconstructing the surface topography based on the neural network outputs. | 06-27-2013 |
20130204814 | METHODS AND APPARATUS FOR SPIKING NEURAL COMPUTATION - Certain aspects of the present disclosure provide methods and apparatus for spiking neural computation of general linear systems. One example aspect is a neuron model that codes information in the relative timing between spikes. However, synaptic weights are unnecessary. In other words, a connection may either exist (significant synapse) or not (insignificant or non-existent synapse). Certain aspects of the present disclosure use binary-valued inputs and outputs and do not require post-synaptic filtering. However, certain aspects may involve modeling of connection delays (e.g., dendritic delays). A single neuron model may be used to compute any general linear transformation x=AX+BU to any arbitrary precision. This neuron model may also be capable of learning, such as learning input delays (e.g., corresponding to scaling values) to achieve a target output delay (or output value). Learning may also be used to determine a logical relation of causal inputs. | 08-08-2013 |
20130204815 | METHOD FOR THE COMPUTER-AIDED LEARNING OF A RECURRENT NEURAL NETWORK FOR MODELING A DYNAMIC SYSTEM - A method for the computer-aided learning of a recurrent neural network for modeling a dynamic system which is characterized at respective times by an observable vector with one or more observables as entries is provided. The neural network includes both a causal network with a flow of information that is directed forwards in time and a retro-causal network with a flow of information which is directed backwards in time. The states of the dynamic system are characterized by first state vectors in the causal network and by second state vectors in the retro-causal network, wherein the state vectors each contain observables for the dynamic system and also hidden states of the dynamic system. Both networks are linked to one another by a combination of the observables from the relevant first and second state vectors and are learned on the basis of training data including known observable vectors. | 08-08-2013 |
20130212051 | NONDESTRUCTIVE METHOD TO PREDICT ISOSTATIC STRENGTH IN CERAMIC SUBSTRATES - A method of examining a cellular structure includes the steps of providing an inspecting device, a neural network and a target cellular structure that includes a plurality of target cells extending therethrough and further includes a target face exposing an arrangement of the target cells; inspecting the arrangement of cells on the face of the target cellular structure using the inspecting device; representing the arrangement of cells with numerically defined target cell parameters; inputting the target cell parameters into the neural network; and generating an output from the neural network based on the target cell parameters, the output being indicative of a strength of the target cellular structure. | 08-15-2013 |
20130304681 | METHOD AND APPARATUS FOR STRATEGIC SYNAPTIC FAILURE AND LEARNING IN SPIKING NEURAL NETWORKS - Certain aspects of the present disclosure support a technique for strategic synaptic failure and learning in spiking neural networks. A synaptic weight for a synaptic connection between a pre-synaptic neuron and a post-synaptic neuron can be first determined (e.g., according to a learning rule). Then, one or more failures of the synaptic connection can be determined based on a set of characteristics of the synaptic connection. The one or more failures can be omitted from computation of a neuronal behavior of the post-synaptic neuron. | 11-14-2013 |
20130311412 | ENCODING AND DECODING MACHINE WITH RECURRENT NEURAL NETWORKS - Techniques for reconstructing a signal encoded with a time encoding machine (TEM) using a recurrent neural network including receiving a TEM-encoded signal, processing the TEM-encoded signal, and reconstructing the TEM-encoded signal with a recurrent neural network. | 11-21-2013 |
20130318017 | Devices for Learning and/or Decoding Messages, Implementing a Neural Network, Methods of Learning and Decoding and Corresponding Computer Programs - A learning and decoding technique is provided for a neural network. The technique involves using a set of neurons, referred to as beacons, wherein said beacons are binary neurons capable of assuming only two states, an on state and an off state. The beacons are distributed in blocks of a predetermined number of beacons, each block being allocated for processing a sub-message. Each beacon is associated with a specific occurrence of the sub-message. Learning includes splitting a message into B sub-messages to be learned, where B is greater than or equal to two; activating, for a sub-message, a single beacon in each block to be in the on state, all of the other beacons of the block being in the off state; and activating binary connections, which assume only connected and disconnected states, between the on beacons of each of the blocks for a message to be learned. | 11-28-2013 |
20130325767 | DYNAMICAL EVENT NEURON AND SYNAPSE MODELS FOR LEARNING SPIKING NEURAL NETWORKS - Certain aspects of the present disclosure provide methods and apparatus for a continuous-time neural network event-based simulation. This model is flexible, has rich behavioral options, can be solved directly, and is low complexity. One example method generally includes determining a first state of a neuron model at or shortly after a first event, wherein the neuron model has a closed-form solution in continuous time; and determining a second state of the neuron model at or shortly after a second event, based on the first state. Dynamics of the first and second states are coupled to the neuron model only at the first and second events, respectively, and are decoupled between the first and second events. | 12-05-2013 |
20130325768 | STOCHASTIC SPIKING NETWORK LEARNING APPARATUS AND METHODS - Generalized learning rules may be implemented. A framework may be used to enable an adaptive spiking neuron signal processing system to flexibly combine different learning rules (supervised, unsupervised, reinforcement learning) with different methods (online or batch learning). The generalized learning framework may employ a time-averaged performance function as the learning measure, thereby enabling a modular architecture where learning tasks are separated from control tasks, so that changes in one of the modules do not necessitate changes within the other. Separation of the learning tasks from the control task implementations may allow dynamic reconfiguration of the learning block in response to a task change or learning method change in real time. The generalized spiking neuron learning apparatus may be capable of implementing several learning rules concurrently based on the desired control application and without requiring users to explicitly identify the required learning rule composition for that task. | 12-05-2013 |
20140012788 | CONDITIONAL PLASTICITY SPIKING NEURON NETWORK APPARATUS AND METHODS - Apparatus and methods for conditional plasticity in a neural network. In one approach, a conditional plasticity mechanism is configured to select alternate plasticity rules when performing connection updates. The selection mechanism is adapted based on a comparison of actual connection efficiency and target efficiency. For instance, when actual efficiency is below the target value, the STDP rule may be modulated to increase long term potentiation. Similarly, when actual efficiency is above the target value, the STDP rule may be modulated to increase long term connection depression. The conditional plasticity mechanism dynamically adjusts connection efficacy, and prevents uncontrolled increase of connection weights, thereby improving network operation when processing information of a varying nature. | 01-09-2014 |
20140032457 | NEURAL PROCESSING ENGINE AND ARCHITECTURE USING THE SAME - A neural processing engine may perform processing within a neural processing system and/or artificial neural network. The neural processing engine may be configured to effectively and efficiently perform the type of processing required in implementing a neural processing system and/or an artificial neural network. This configuration may facilitate such processing with neural processing engines having an enhanced computational density and/or processor density with respect to conventional processing units. | 01-30-2014 |
20140032458 | APPARATUS AND METHODS FOR EFFICIENT UPDATES IN SPIKING NEURON NETWORK - Efficient updates of connections in artificial neuron networks may be implemented. A framework may be used to describe the connections using a linear synaptic dynamic process, characterized by stable equilibrium. The state of neurons and synapses within the network may be updated, based on inputs and outputs to/from neurons. In some implementations, the updates may be implemented at regular time intervals. In one or more implementations, the updates may be implemented on-demand, based on the network activity (e.g., neuron output and/or input) so as to further reduce computational load associated with the synaptic updates. The connection updates may be decomposed into multiple event-dependent connection change components that may be used to describe connection plasticity change due to neuron input. Using event-dependent connection change components, connection updates may be executed on per neuron basis, as opposed to per-connection basis. | 01-30-2014 |
20140032459 | APPARATUS AND METHODS FOR GENERALIZED STATE-DEPENDENT LEARNING IN SPIKING NEURON NETWORKS - Generalized state-dependent learning framework in artificial neuron networks may be implemented. A framework may be used to describe plasticity updates of neuron connections based on connection state term and neuron state term. The state connections within the network may be updated based on inputs and outputs to/from neurons. The input connections of a neuron may be updated using connection traces comprising a time-history of inputs provided via the connections. Weights of the connections may be updated and connection state may be time varying. The updated weights may be determined using a rate of change of the trace and a term comprising a product of a per-neuron contribution and a per-connection contribution configured to account for the state time-dependency. Using event-dependent connection change components, connection updates may be executed on per neuron basis, as opposed to per-connection basis. | 01-30-2014 |
20140032460 | SPIKE DOMAIN NEURON CIRCUIT WITH PROGRAMMABLE KINETIC DYNAMIC, HOMEOSTATIC PLASTICITY AND AXONAL DELAYS - A spike domain asynchronous neuron circuit includes a first spike to exponential circuit for emulating kinetic dynamics at a neuron input and converting voltage spikes into exponentials, a first adjustable gain circuit for emulating homeostatic plasticity coupled to the first voltage-type spike exponential output and having a first current output, a neuron core circuit coupled to the first current output for emulating a neuron core and having a spike encoded voltage output, a filter and comparator circuit coupled to the spike encoded voltage output and having a gain control output coupled to the first adjustable gain circuit for controlling a gain of the first adjustable gain circuit, and an adjustable delay circuit for emulating an axonal delay coupled to the spike encoded voltage output and having an axonal delay output. | 01-30-2014 |
20140046882 | PACKET DATA NEURAL NETWORK SYSTEM AND METHOD - This application discloses a neural network that also functions as a connection oriented packet data network using an MPLS-type label switching technology. The neural network uses its intelligence to build and manage label switched paths (LSPs) to transport user packets and solve complex mathematical problems. However, the methods taught here can be applied to other data networks including ad-hoc, mobile, and traditional packet networks, cell or frame-switched networks, time-slot networks and the like. | 02-13-2014 |
20140129495 | METHODS AND APPARATUS FOR TRANSDUCING A SIGNAL INTO A NEURONAL SPIKING REPRESENTATION - Certain aspects of the present disclosure provide methods and apparatus for transducing a signal into a neuronal spiking representation using at least two distinct populations of spiking neuron models. One example method generally includes receiving a signal; filtering the signal into a plurality of channels using a plurality of filters having different frequency passbands; sending the filtered signal in each of the channels to a first type of spiking neuron model; and sending the filtered signal in each of the channels to a second type of spiking neuron model, wherein the second type differs from the first type of spiking neuron model in at least one parameter. | 05-08-2014 |
20140129496 | METHODS AND APPARATUS FOR IDENTIFYING SPECTRAL PEAKS IN A NEURONAL SPIKING REPRESENTATION OF A SIGNAL - Certain aspects of the present disclosure provide methods and apparatus for identifying spectral peaks in a neuronal spiking representation of a signal, such as an auditory signal. One example method generally includes receiving a signal; filtering the signal into a plurality of channels using a plurality of filters having different frequency passbands; sending the filtered signal in each of the channels to a first type of spiking neuron model; sending the filtered signal in each of the channels to a second type of spiking neuron model; and identifying one or more spectral peaks in the signal based on a first output of the first type of spiking neuron model and a second output of the second type of spiking neuron model for each of the channels. | 05-08-2014 |
20140156575 | Method and Apparatus of Processing Data Using Deep Belief Networks Employing Low-Rank Matrix Factorization - Deep belief networks are usually associated with a large number of parameters and high computational complexity. The large number of parameters results in a long and computationally consuming training phase. According to at least one example embodiment, low-rank matrix factorization is used to approximate at least a first set of parameters, associated with an output layer, with a second and a third set of parameters. The total number of parameters in the second and third sets is smaller than the number of parameters in the first set. An architecture of a resulting artificial neural network, when employing low-rank matrix factorization, may be characterized with a low-rank layer, not employing activation function(s), and defined by a relatively small number of nodes and the second set of parameters. By using low-rank matrix factorization, training is faster, leading to rapid deployment of the respective system. | 06-05-2014 |
20140164299 | HYBRID PRE-TRAINING OF DEEP BELIEF NETWORKS - Pretraining for a DBN (Deep Belief Network) initializes the weights of the DBN using a hybrid pre-training methodology. Hybrid pre-training employs a generative component that allows the hybrid PT method to achieve better performance in WER (Word Error Rate) compared to the discriminative PT method. Hybrid pre-training learns weights that are more closely linked to the final objective function, allowing for a much larger batch size than generative PT, which improves speed; a larger batch size also allows the gradient computation to be parallelized, speeding up training further. | 06-12-2014 |
20140180985 | MAPPING NEURAL DYNAMICS OF A NEURAL MODEL ON TO A COARSELY GRAINED LOOK-UP TABLE - Embodiments of the invention relate to mapping neural dynamics of a neural model on to a lookup table. One embodiment comprises defining a phase plane for a neural model. The phase plane represents neural dynamics of the neural model. The phase plane is coarsely sampled to obtain state transition information for multiple neuronal states. The state transition information is mapped on to a lookup table. | 06-26-2014 |
20140201115 | DETERMINING SOFTWARE OBJECT RELATIONS USING NEURAL NETWORKS - A system receives runtime information from a plurality of software objects. The software objects include an executable, a modularization unit, and a data dictionary. The system executes a training phase in a software neural network using the runtime information. The software neural network generates a pattern among the executables, modularization units, and data dictionaries using the software neural network such that a particular executable is pattern-matched with one or more modularization units and one or more data dictionaries. | 07-17-2014 |
20140244557 | APPARATUS AND METHODS FOR RATE-MODULATED PLASTICITY IN A SPIKING NEURON NETWORK - Apparatus and methods for activity based plasticity in a spiking neuron network adapted to process sensory input. In one approach, the plasticity mechanism of a connection may comprise a causal potentiation portion and an anti-causal portion. The anti-causal portion, corresponding to the input into a neuron occurring after the neuron response, may be configured based on the prior activity of the neuron. When the neuron is in low activity state, the connection, when active, may be potentiated by a base amount. When the neuron activity increases due to another input, the efficacy of the connection, if active, may be reduced proportionally to the neuron activity. Such functionality may enable the network to maintain strong, albeit inactive, connections available for use for extended intervals. | 08-28-2014 |
20140279771 | Novel Quadratic Regularization For Neural Network With Skip-Layer Connections - According to one aspect of the invention, target data comprising observations is received. A neural network comprising input neurons, output neurons, hidden neurons, skip-layer connections, and non-skip-layer connections is used to analyze the target data based on an overall objective function that comprises a linear regression part, the neural network's unregularized objective function, and a regularization term. An overall optimized first vector value of a first vector and an overall optimized second vector value of a second vector are determined based on the target data and the overall objective function. The first vector comprises skip-layer weights for the skip-layer connections and output neuron biases, whereas the second vector comprises non-skip-layer weights for the non-skip-layer connections. | 09-18-2014 |
20140279772 | NEURONAL NETWORKS FOR CONTROLLING DOWNHOLE PROCESSES - An apparatus for processing signals downhole includes a carrier configured to be conveyed through a borehole penetrating an earth formation and a container disposed at the carrier and configured to carry biological material. A cultured biological neural network is disposed at the container, the neural network being capable of processing a network input signal and providing a processed network output signal. One or more electrodes are in electrical communication with the neural network, the one or more electrodes being configured to communicate the network input signal into the neural network and to communicate the network output signal out of the neural network. | 09-18-2014 |
20140304202 | CONNECTION INVITATION ORDERING - Disclosed in some examples are methods, systems, and machine-readable mediums which provide a relevance engine for determining a relevance of an individual (either a non-member or another member) to another individual (either a non-member or another member). This relevance engine may use signals in the form of data that the social networking service may learn about the individuals to determine how relevant the individuals are to each other. Example applications may include ordering of connection invitations in a social networking service. | 10-09-2014 |
20140310218 | High-Order Semi-RBMs and Deep Gated Neural Networks for Feature Interaction Identification and Non-Linear Semantic Indexing - Systems and methods are disclosed for determining complex interactions among system inputs by using semi-Restricted Boltzmann Machines (RBMs) with factorized gated interactions of different orders to model complex interactions among system inputs; applying semi-RBMs to train a deep neural network with high-order within-layer interactions for learning a distance metric and a feature mapping; and tuning the deep neural network by minimizing margin violations between positive query-document pairs and corresponding negative pairs. | 10-16-2014 |
20140324746 | TRAINING SEQUENCE - A computing system can include a memory controller and a first storage device. The first storage device is to receive a serially encoded request and forward the serially encoded request to a second storage device before deserializing the serially encoded request. The first storage device is also to return a training sequence from the target storage device to the memory controller. The first storage device is additionally to return a response from the target storage device to the memory controller. | 10-30-2014 |
20140365414 | SPIKING NETWORK APPARATUS AND METHOD WITH BIMODAL SPIKE-TIMING DEPENDENT PLASTICITY - Apparatus and methods for learning in response to temporally-proximate features. In one implementation, an image processing apparatus utilizes bi-modal spike timing dependent plasticity in a spiking neuron network. Based on a response by the neuron to a frame of input, the bi-modal plasticity mechanism is used to depress synaptic connections delivering the present input frame and to potentiate synaptic connections delivering previous and/or subsequent frames of input. The depression of near-contemporaneous input prevents the creation of a positive feedback loop and provides a mechanism for network response normalization. | 12-11-2014 |
20150052092 | METHODS AND SYSTEMS OF BRAIN-LIKE COMPUTING VIRTUALIZATION - The invention discloses the technology of brain-like computing virtualization. Brain-like computing means the computing technology to mimic human brain and generate human intelligence automatically with computer software. Here the unconscious engine and conscious engine are used to define human left and right brain, while the virtualization technology is used for software to run on future hardware, such as quantum computer and molecular computer. The applied domain areas include quantum gate and adiabatic quantum simulation, brain-like autonomic computing, traditional multi-core-cluster performance service, software development/service delivery systems, and mission-critical business continuity/disaster recovery. | 02-19-2015 |
20150127594 | TRANSFER LEARNING FOR DEEP NEURAL NETWORK BASED HOTWORD DETECTION - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a deep neural network. One of the methods includes training a deep neural network with a first training set by adjusting values for each of a plurality of weights included in the neural network, and training the deep neural network to determine a probability that data received by the deep neural network has features similar to key features of one or more keywords or key phrases, the training comprising providing the deep neural network with a second training set and adjusting the values for a first subset of the plurality of weights, wherein the second training set includes data representing the key features of the one or more keywords or key phrases. | 05-07-2015 |
20150309961 | ARITHMETIC PROCESSING APPARATUS - An arithmetic processing apparatus performs arithmetic by a neural network in which multiple processing layers are hierarchically connected. The arithmetic processing apparatus corresponding to one of the multiple processing layers includes a convolution arithmetic portion and a pooling processing portion. The convolution arithmetic portion receives an input data from another of the plurality of processing layers, performs convolution arithmetic to the input data, and in each arithmetic cycle, outputs a part of all convolution arithmetic result data required for single pooling processing. The pooling processing portion performs the single pooling processing to the all convolution arithmetic result data before executing activation processing. | 10-29-2015 |
20150331832 | ARITHMETIC PROCESSING APPARATUS - An arithmetic processing apparatus executing arithmetic by a neural network in which multiple processing layers are hierarchically connected is provided. The arithmetic processing apparatus includes multiple arithmetic blocks corresponding to one of the multiple processing layers. Each of the arithmetic blocks includes a convolution arithmetic portion, an activation portion, a pooling portion, and a normalization portion. The convolution arithmetic portion executes convolution arithmetic processing. The normalization portion executes normalization to a processing result data generated by the pooling portion. The normalization portion includes a first output portion, a second output portion, and a normalization execution portion. The first output portion outputs a first data. The second output portion outputs an addition data as a second data. The normalization execution portion executes normalization to the first data based on the second data. | 11-19-2015 |
20150339570 | METHODS AND SYSTEMS FOR NEURAL AND COGNITIVE PROCESSING - Provided herein is a system for creating, modifying, deploying and running intelligent systems by combining and customizing the function and operation of reusable component modules arranged into neural processing graphs which direct the flow of signals among the modules, analogous in part to biological brain structure and operation as compositions of variations on functional components and subassemblies. | 11-26-2015 |
20150347895 | DERIVING RELATIONSHIPS FROM OVERLAPPING LOCATION DATA - Method and systems for deriving relationships from overlapping time and location data are disclosed. A first user device receives time and location data for a first user, the time and location data for the first user representing locations of the first user over time, reduces the time and location data for the first user around a first plurality of artificial neurons, wherein each of the first plurality of artificial neurons represents a location of the first user during a first time, transmits the reduced time and location data for the first user to a server, wherein the server determines whether or not the first user and a second user are related based on determining that the first user and the second user have an artificial neuron in common among the first plurality of artificial neurons and a second plurality of artificial neurons. | 12-03-2015 |
20150356453 | CONNECTION INVITATION ORDERING - Disclosed in some examples are methods, systems, and machine-readable mediums which provide a relevance engine for determining a relevance of an individual (either a non-member or another member) to another individual (either a non-member or another member). This relevance engine may use signals in the form of data that the social networking service may learn about the individuals to determine how relevant the individuals are to each other. Example applications may include ordering of connection invitations in a social networking service. | 12-10-2015 |
20150363691 | MANAGING SOFTWARE BUNDLING USING AN ARTIFICIAL NEURAL NETWORK - An artificial neural network is used to manage software bundling. During a training phase, the artificial neural network is trained using previously bundled software components having known values for identification attributes and known software bundle associations. Once trained, the artificial neural network can be used to identify the proper software bundles for newly discovered software components. In this process, a newly discovered software component having known values for the identification attributes is identified. An input vector is derived from the known values. The input vector is loaded into input neurons of the artificial neural network. A yielded output vector is then obtained from an output neuron of the artificial neural network. Based on the composition of the output vector, the software bundle associated with this newly discovered software component is determined. | 12-17-2015 |
20150371133 | PROVIDING DEVICE, PROVIDING METHOD, AND RECORDING MEDIUM - A providing device according to an embodiment includes a registration unit that registers a learning device, in which nodes that output results of calculations on input data are connected and which extracts a feature corresponding to a predetermined type from the input data, an accepting unit that accepts designation of a type of a feature, a providing unit that selects, from among the learning devices registered by the registration unit, a learning device that extracts a feature corresponding to the type of the feature accepted by the accepting unit, and provides a new learning device generated based on the selected learning device, and a calculation unit that calculates a price to be paid to a seller that provided the learning device selected by the providing unit. | 12-24-2015 |
20160055410 | NEURAL NETWORKING SYSTEM AND METHODS - A method/apparatus/system for generating a request for improvement of a data object in a neural network is described herein. The neural network contains a plurality of data objects each made of an aggregation of content. The data objects of the neural network are interconnected via a plurality of connecting vectors based on one or several skill levels embodied in the content of the data objects. These connecting vectors can be generated and/or modified based on data collected from the iterative traversal of the connecting vectors by one or several users of the neural network. | 02-25-2016 |
20160071006 | Computer-assisted Analysis of a Data Record from Observations - Computer-assisted analysis of a data record from observations is provided. The data record contains, for each observation, a data vector that includes values of input variables and a value of a target variable. A neuron network structure is learned from differently initialized neuron networks based on the data record. The neuron networks respectively include an input layer, one or more hidden layers, and an output layer. The input layer includes at least a portion of the input variables, and the output layer includes the target variable. The neuron network structure outputs the mean value of the target variables of the output layers of the neuron networks. Sensitivity values are determined by the neuron network structure and stored. Each sensitivity value is assigned an observation and an input variable. The sensitivity value includes the derivative of the target variable of the assigned observation with respect to the assigned input variable. | 03-10-2016 |
20160092607 | METHOD FOR ALLOY DESIGN COMBINING RESPONSE SURFACE METHOD AND ARTIFICIAL NEURAL NETWORK - There is provided a method for alloy design combining a response surface method and an artificial neural network that can significantly reduce the number, time, and cost of experiments by designing a minimal set of experiments using the response surface method, obtaining results through actual experiments, and modeling the obtained results using an artificial neural network. | 03-31-2016 |
20160155049 | METHOD AND APPARATUS FOR EXTENDING NEURAL NETWORK | 06-02-2016 |
20190147322 | METHOD AND APPARATUS FOR QUANTIZING ARTIFICIAL NEURAL NETWORK | 05-16-2019 |
20190149425 | METHOD AND SYSTEM FOR VIRTUAL NETWORK EMULATION AND SELF-ORGANIZING NETWORK CONTROL USING DEEP GENERATIVE MODELS | 05-16-2019 |
20220138531 | GENERATING OUTPUT SEQUENCES FROM INPUT SEQUENCES USING NEURAL NETWORKS - Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating output sequences from input sequences. One of the methods includes obtaining an input sequence having a first number of inputs arranged according to an input order; processing each input in the input sequence using an encoder recurrent neural network to generate a respective encoder hidden state for each input in the input sequence; and generating an output sequence having a second number of outputs arranged according to an output order, each output in the output sequence being selected from the inputs in the input sequence, comprising, for each position in the output order: generating a softmax output for the position using the encoder hidden states that is a pointer into the input sequence; and selecting an input from the input sequence as the output at the position using the softmax output. | 05-05-2022 |
20220138553 | TEXTURE UNIT CIRCUIT IN NEURAL NETWORK PROCESSOR - Embodiments of the present disclosure relate to a texture unit circuit in a neural processor circuit. The neural processor circuit includes a tensor access operation circuit with the texture unit circuit, a data processor circuit, and at least one neural engine circuit. The texture unit circuit fetches a source tensor from a system memory by referencing an index tensor in the system memory representing indexing information into the source tensor. The data processor circuit stores an output version of the source tensor obtained from the tensor access operation circuit and sends the output version of the source tensor as multiple of units of input data to the at least one neural engine circuit. The at least one neural engine circuit performs at least convolution operations on the units of input data and at least one kernel to generate output data. | 05-05-2022 |
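For entry 20100049680, the projection method boils down to a training loop that stops once the network's mean square error no longer exceeds a preset target. The following is a minimal sketch of that loop under assumed conditions: synthetic stand-ins for the sampled equipment and wafer data, a small one-hidden-layer network, and an arbitrary learning rate and target. It is an illustration only, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for sampled equipment overlay-error / condition data (inputs)
# and actual wafer product overlay error (target output).
X = rng.standard_normal((200, 4))
y = (X @ np.array([0.5, -0.3, 0.2, 0.1]) + 0.01 * rng.standard_normal(200)).reshape(-1, 1)

# Small one-hidden-layer network.
n_hidden = 8
W1 = 0.1 * rng.standard_normal((4, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, 1)); b2 = np.zeros(1)

mse_target, lr = 1e-3, 0.1
for step in range(200_000):
    h = np.tanh(X @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = pred - y
    mse = float(np.mean(err ** 2))
    if mse <= mse_target:                 # stop once the MSE target is met
        break
    # Backpropagate the mean-square-error loss and take a gradient step.
    g_pred = 2 * err / len(X)
    gW2, gb2 = h.T @ g_pred, g_pred.sum(0)
    g_h = g_pred @ W2.T * (1 - h ** 2)
    gW1, gb1 = X.T @ g_h, g_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"stopped after {step} steps with MSE {mse:.4g}")
```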
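Entry 20140156575 replaces a large output-layer parameter set with two smaller ones via low-rank matrix factorization. A minimal numpy sketch of the idea follows; the layer sizes, the chosen rank, and the use of a truncated SVD to obtain the two factors are assumptions for illustration, not the patent's method.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, output_dim, rank = 512, 2000, 64

# Original output-layer weight matrix (the "first set of parameters").
W = rng.standard_normal((hidden_dim, output_dim))

# Truncated SVD gives a rank-r approximation W ~= A @ B, replacing
# hidden_dim * output_dim parameters with rank * (hidden_dim + output_dim).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :rank] * s[:rank]     # hidden_dim x rank: the low-rank layer, no activation function
B = Vt[:rank, :]               # rank x output_dim

print(f"parameters: {W.size} -> {A.size + B.size} "
      f"({(A.size + B.size) / W.size:.1%} of original)")

# Forward pass through the factored layer for a batch of hidden activations.
h = rng.standard_normal((32, hidden_dim))
logits = (h @ A) @ B           # approximates h @ W up to the rank-r truncation error
```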
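Entry 20150127594 describes a two-stage procedure: train the whole network on a first training set, then adjust only a first subset of the weights on a second, keyword-specific set. The sketch below illustrates the second stage under assumed conditions (a frozen hidden layer standing in for the pre-trained weights, synthetic keyword labels, and cross-entropy updates applied only to the output-layer subset); it is not the claimed training procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for weights produced by the first training stage.
W1, b1 = rng.standard_normal((40, 16)), np.zeros(16)        # subset that stays fixed
W2, b2 = 0.1 * rng.standard_normal((16, 1)), np.zeros(1)    # subset to be adjusted

# Second training set: per-frame features and keyword / non-keyword labels (synthetic).
X2 = rng.standard_normal((300, 40))
y2 = rng.integers(0, 2, size=(300, 1)).astype(float)

H = np.tanh(X2 @ W1 + b1)               # hidden activations from the frozen weights
for _ in range(500):                     # adjust only the output-layer subset (W2, b2)
    p = 1 / (1 + np.exp(-(H @ W2 + b2)))
    g = (p - y2) / len(H)                # gradient of the cross-entropy loss w.r.t. the logits
    W2 -= 0.5 * (H.T @ g)
    b2 -= 0.5 * g.sum(0)

keyword_score = 1 / (1 + np.exp(-(H @ W2 + b2)))   # probability-like score per frame
```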
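Entry 20150347895 relates two users when their reduced time-and-location data share an artificial neuron. As a toy stand-in for that reduction, the sketch below maps location samples onto coarse grid cells (playing the role of the neurons) and checks for a common cell; the granularity, the sample data, and the grid-based reduction are assumptions, not the patent's reduction method.

```python
def to_neurons(points, cell=0.01):
    """Map (lat, lon) samples to discrete identifiers standing in for artificial neurons."""
    return {(int(lat // cell), int(lon // cell)) for lat, lon in points}

# Synthetic location traces for two users.
user_a = [(37.7749, -122.4194), (37.7761, -122.4180)]
user_b = [(37.7745, -122.4190), (40.7128, -74.0060)]

# The users are considered related if they have at least one neuron in common.
related = bool(to_neurons(user_a) & to_neurons(user_b))
print("related:", related)
```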
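Entry 20220138531 selects each output directly from the input sequence by computing a softmax over the encoder hidden states and using it as a pointer. The sketch below shows a single decoding position with random stand-in states and an assumed additive scoring form; it illustrates only the pointer step, not the claimed system.

```python
import numpy as np

rng = np.random.default_rng(4)

n_inputs, d = 6, 32
enc = rng.standard_normal((n_inputs, d))      # encoder hidden states, one per input
dec = rng.standard_normal(d)                  # decoder state for the current output position

# Additive attention parameters (illustrative).
W_enc = rng.standard_normal((d, d))
W_dec = rng.standard_normal((d, d))
v = rng.standard_normal(d)

scores = np.tanh(enc @ W_enc + dec @ W_dec) @ v      # one score per input position
softmax = np.exp(scores - scores.max())
softmax /= softmax.sum()                             # pointer distribution over the inputs

chosen = int(np.argmax(softmax))                     # the input emitted at this output position
print("pointer distribution:", np.round(softmax, 3), "-> selects input", chosen)
```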