Class / Patent application number | Description | Number of patent applications / Date published |
706027000 | Architecture | 87 |
20080256009 | System for temporal prediction - Described is a system for temporal prediction. The system includes an extraction module, a mapping module, and a prediction module. The extraction module is configured to receive x(1), …, x(n) historical samples of a time series and utilize a genetic algorithm to extract deterministic features in the time series. The mapping module is configured to receive the deterministic features and utilize a learning algorithm to map the deterministic features to a predicted x̂(n+1) sample of the time series. Finally, the prediction module is configured to utilize a cascaded computing structure having k levels of prediction to generate a predicted x̂(n+k) sample. The predicted x̂(n+k) sample is the final temporal prediction for k future samples. | 10-16-2008 |
20090192958 | PARALLEL PROCESSING DEVICE AND PARALLEL PROCESSING METHOD - A parallel processing device that computes a hierarchical neural network, the parallel processing device includes: a plurality of units that are identified by characteristic unit numbers that are predetermined identification numbers, respectively; a distribution control section that, in response to input as an input value of an output value outputted from one of the plurality of units through a unit output bus, outputs control data including the input value inputted and a selection unit number that is an identification number to select one unit among the plurality of units to the plurality of units through the unit input bus; and a common storage section that stores in advance coupling weights in a plurality of layers of the hierarchical neural network, the coupling weights being shared by plural ones of the plurality of units. Each of the units includes: a data input section that receives control data as an input from the distribution control section through the unit input bus; a unit number match judgment section that judges as to whether a selection unit number included in the control data inputted in the data input section matches the characteristic unit number; a unit processing section that, based on an input value included in the control data inputted in the data input section, computes by a computing method predetermined for each of the units; and a data output section that, when the unit number match judgment section provides a judgment result indicating matching, outputs a computation result computed by the unit processing section as the output value to the distribution control section through the unit output bus, wherein, based on the coupling weights stored in the common weight storage section, the unit processing section executes computation in a forward direction that is a direction from an input layer to an output layer in the hierarchical neural network, and executes computation in a backward direction that is a direction from the output layer to the input layer, thereby updating the coupling weights. | 07-30-2009 |
20090240642 | NEURAL MODELING AND BRAIN-BASED DEVICES USING SPECIAL PURPOSE PROCESSOR - A special purpose processor (SPP) can use a Field Programmable Gate Array (FPGA) or similar programmable device to model a large number of neural elements. The FPGAs can have multiple cores doing presynaptic, postsynaptic, and plasticity calculations in parallel. Each core can implement multiple neural elements of the neural model. | 09-24-2009 |
20100076916 | Autonomous Learning Dynamic Artificial Neural Computing Device and Brain Inspired System - A hierarchical information processing system is disclosed having a plurality of artificial neurons, comprised of binary logic gates, and interconnected through a second plurality of dynamic artificial synapses, intended to simulate or extend the function of a biological nervous system. The system is capable of approximation, autonomous learning, and strengthening of formerly learned input patterns. The system learns by simulated Synaptic Time Dependent Plasticity, commonly abbreviated to STDP. Each artificial neuron consists of a soma circuit and a plurality of synapse circuits, whereby the soma membrane potential, the soma threshold value, the synapse strength, and the Post Synaptic Potential at each synapse are expressed as values in binary registers, which are dynamically determined from certain aspects of input pulse timing, previous strength value, and output pulse feedback. | 03-25-2010 |
20100223219 | CALCULATION PROCESSING APPARATUS AND METHOD - A calculation processing apparatus for executing network calculations defined by hierarchically connecting a plurality of logical processing nodes that apply calculation processing to input data, sequentially designates a processing node which is to execute calculation processing based on sequence information that specifies an execution order of calculations of predetermined processing units to be executed by the plurality of processing nodes, so as to implement the network calculations, and executes the calculation processing of the designated processing node in the processing unit to obtain a calculation result. The calculation apparatus allocates partial areas of a memory to the plurality of processing nodes as ring buffers, and writes the calculation result in the memory while circulating a write destination of data to have a memory area corresponding to the amount of the calculation result of the processing unit as a unit. | 09-02-2010 |
20100241601 | Apparatus comprising artificial neuronal assembly - An artificial synapse array and virtual neural space are disclosed. | 09-23-2010 |
20100306145 | METHOD AND DEVICE FOR REALIZING AN ASSOCIATIVE MEMORY BASED ON INHIBITORY NEURAL NETWORKS - A method for forming an associative computer memory comprises the step of forming an inhibitory memory matrix A′ = −(Ap − A), where A is a memory matrix constructed according to the Willshaw model from a given set of address patterns and content patterns, and Ap is a matrix of random structure. | 12-02-2010 |
20110016071 | Method for efficiently simulating the information processing in cells and tissues of the nervous system with a temporal series compressed encoding neural network - A neural network simulation represents components of neurons by finite state machines, called sectors, implemented using look-up tables. Each sector has an internal state represented by a compressed history of data input to the sector and is factorized into distinct historical time intervals of the data input. The compressed history of data input to the sector may be computed by compressing the data input to the sector during a time interval, storing the compressed history of data input to the sector in memory, and computing from the stored compressed history of data input to the sector the data output from the sector. | 01-20-2011 |
20110145181 | 100GBPS SECURITY AND SEARCH ARCHITECTURE USING PROGRAMMABLE INTELLIGENT SEARCH MEMORY (PRISM) THAT COMPRISES ONE OR MORE BIT INTERVAL COUNTERS - Memory architecture provides capabilities for high-performance content search. The architecture creates an innovative memory that can be programmed with content search rules, which are used by the memory to evaluate presented content for matching with the programmed rules. When the content being searched matches any of the rules programmed in the Programmable Intelligent Search Memory (PRISM), action(s) associated with the matched rule(s) are taken. Content search rules comprise regular expressions, which are converted to finite state automata (FSA) and then programmed in PRISM for evaluating content with the search rules. The PRISM architecture comprises a plurality of programmable PRISM Memory Clusters (PMCs), which comprise a plurality of programmable PRISM Search Engines (PSEs). Groups of PMCs can be programmed with the same rules and used in parallel to apply these rules to multiple data streams simultaneously to achieve increased performance. PMC groups provide 10 Gbps performance, with 10 PMC groups enabling 100 Gbps content search and security performance. | 06-16-2011 |
20110213743 | APPARATUS FOR REALIZING THREE-DIMENSIONAL NEURAL NETWORK - An apparatus for realizing a three-dimensional (3D) neural network includes a culture substrate. | 09-01-2011 |
20110302119 | SELF-ORGANIZING CIRCUITS - A self-organizing electronic system and method that organizes and repairs itself. A number of circuit modules can be embedded in a fabric. Each circuit module can calculate some function of its inputs and produce an output, which is encoded on volatile memory held together by a plasticity rule. The plasticity rule allows circuit modules to converge to any possible functional state defined by the structure of the information being processed. Flow through the system gates the energy dissipation of the individual circuit modules. Circuit modules receiving high flow become locked in their functional states, while circuit modules receiving little or no flow mutate in search of better configurations. These principles can be utilized to configure the state of any functional element within the system, and can be abstracted to higher levels of organization. Far from expending energy on state configurations, a volatile system only expends energy stabilizing successful configurations. Continuous stabilization coupled with redundancy results in a circuit capable of healing itself from the bottom up. | 12-08-2011 |
20120016829 | Memristive Adaptive Resonance Networks - A method for implementing an artificial neural network includes connecting a plurality of receiving neurons to a plurality of transmitting neurons through memristive synapses. | 01-19-2012 |
20120036099 | METHODS AND SYSTEMS FOR REWARD-MODULATED SPIKE-TIMING-DEPENDENT-PLASTICITY - Certain embodiments of the present disclosure support techniques for simplified hardware implementation of the reward-modulated spike-timing-dependent plasticity (STDP) learning rule in networks of spiking neurons. | 02-09-2012 |
20120084240 | PHASE CHANGE MEMORY SYNAPTRONIC CIRCUIT FOR SPIKING COMPUTATION, ASSOCIATION AND RECALL - Embodiments of the invention are directed to producing spike-timing dependent plasticity using electronic neurons for computation, and pattern matching tasks such as association and recall. In response to an electronic neuron spiking, a spiking signal is sent from the electronic neuron to each axon driver and each dendrite driver connected to the spiking electronic neuron. Each axon driver receiving the spiking signal sends an axonal signal from the axon driver to a variable state resistor. Each dendrite driver receiving the spiking signal sends a dendritic signal from the dendrite driver to the variable state resistor, wherein the variable state resistor couples the axon driver and the dendrite driver. The combination of the axonal and dendritic signals is capable of increasing or decreasing conductance of the variable state resistor. | 04-05-2012 |
20120084241 | PRODUCING SPIKE-TIMING DEPENDENT PLASTICITY IN A NEUROMORPHIC NETWORK UTILIZING PHASE CHANGE SYNAPTIC DEVICES - Embodiments of the invention relate to a neuromorphic network for producing spike-timing dependent plasticity. The neuromorphic network includes a plurality of electronic neurons and an interconnect circuit coupled for interconnecting the plurality of electronic neurons. The interconnect circuit includes plural synaptic devices for interconnecting the electronic neurons via axon paths, dendrite paths and membrane paths. Each synaptic device includes a variable state resistor and a transistor device with a gate terminal, a source terminal and a drain terminal, wherein the drain terminal is connected in series with a first terminal of the variable state resistor. The source terminal of the transistor device is connected to an axon path, the gate terminal of the transistor device is connected to a membrane path and a second terminal of the variable state resistor is connected to a dendrite path, such that each synaptic device is coupled between a first axon path and a first dendrite path, and between a first membrane path and said first dendrite path. | 04-05-2012 |
20120117012 | Spike-timing computer modeling of working memory - Working memory (WM) is part of the brain's memory system that provides temporary storage and manipulation of information necessary for cognition. Although WM has limited capacity at any given time, it has vast memory content in the sense that it acts on the brain's nearly infinite repertoire of lifetime memories. As described, large memory content and WM functionality emerge spontaneously if the spike-timing nature of neuronal processing is taken into account. The memories are represented by extensively overlapping groups of neurons that exhibit stereotypical time-locked spatiotemporal spike-timing patterns, called polychronous patterns. Using computer-implemented simulations, associative synaptic plasticity in the form of short-term STDP selects such polychronous neuronal groups (PNGs) into WM by temporarily strengthening the synapses of the selected PNGs. This strengthening increases the spontaneous reactivation frequency of the selected PNGs, resulting in irregular, yet systematically changing elevated firing activity patterns consistent with those recorded in vivo during WM tasks. The computer-implemented model implements the relationship between such slowly changing firing rates and precisely timed spikes, and also reveals a novel relationship between WM and the perception of time on the order of seconds. | 05-10-2012 |
20120284217 | AREA EFFICIENT NEUROMORPHIC SYSTEM - A neuromorphic system includes a plurality of synapse blocks electrically connected to a plurality of neuron circuit blocks. The plurality of synapse blocks includes a plurality of neuromorphic circuits. Each neuromorphic circuit includes a field effect transistor in a diode configuration electrically connected to variable resistance material, where the variable resistance material provides a programmable resistance value. Each neuromorphic circuit also includes a first junction electrically connected to the variable resistance material and an output of one or more of the neuron circuit blocks, and a second junction electrically connected to the field effect transistor and an input of one or more of the neuron circuit blocks. | 11-08-2012 |
20120317063 | SYNAPSE FOR FUNCTION CELL OF SPIKE TIMING DEPENDENT PLASTICITY (STDP), FUNCTION CELL OF STDP, AND NEUROMORPHIC CIRCUIT USING FUNCTION CELL OF STDP - A synapse for a spike timing dependent (STDP) function cell includes a memory device having a variable resistance, such as a memristor, and a transistor connected to the memory device. A channel of the memory device is connected in series with a channel of the transistor. | 12-13-2012 |
20120323832 | NEURAL MODELING AND BRAIN-BASED DEVICES USING SPECIAL PURPOSE PROCESSOR - A special purpose processor (SPP) can use a Field Programmable Gate Array (FPGA) or similar programmable device to model a large number of neural elements. The FPGAs can have multiple cores doing presynaptic, postsynaptic, and plasticity calculations in parallel. Each core can implement multiple neural elements of the neural model. | 12-20-2012 |
20120323833 | Organizing Neural Networks - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for organizing trained and untrained neural networks. In one aspect, a neural network device includes a collection of node assemblies interconnected by between-assembly links, each node assembly itself comprising a network of nodes interconnected by a plurality of within-assembly links, wherein each of the between-assembly links and the within-assembly links have an associated weight, each weight embodying a strength of connection between the nodes joined by the associated link, the nodes within each assembly being more likely to be connected to other nodes within that assembly than to be connected to nodes within others of the node assemblies. | 12-20-2012 |
20130013544 | MIDDLEWARE DEVICE FOR THREE-TIER UBIQUITOUS CITY SYSTEM - Disclosed is ubiquitous city (u-city) exclusive middleware for providing services to a u-city. A middleware device performs a role corresponding to the brain of a human being by aggregating u-city information collected through wired and wireless converged and complex communication networks, analyzing the aggregated information, finding an optimal service based on reasoned current context information and a given command, and processing the found service to be executed. The u-city exclusive middleware performs various embedded functions by operating in a three-tier method through a u-city infrastructure and a u-city portal, and the operating method and executed functions of the middleware follow the method of the operating system of a typical computer system. | 01-10-2013 |
20130031040 | HIERARCHICAL ROUTING FOR TWO-WAY INFORMATION FLOW AND STRUCTURAL PLASTICITY IN NEURAL NETWORKS - Hierarchical routing for two-way information flow and structural plasticity in a neural network is provided. In one embodiment the network includes multiple core modules, wherein each core module has a plurality of incoming connections with predetermined addresses. Each core module also has a plurality of outgoing connections such that each outgoing connection targets an incoming connection in a core module among the multiple core modules. The network also has a routing system that selectively routes signals among the core modules based on a reconfigurable hierarchical organization of the core modules. The network approximates a fully connected network such that each outgoing connection on any core module can target and reach any incoming connection on any core module without requiring a fully connected network. The routing system provides two-way information flow between neurons utilizing hierarchical routing. | 01-31-2013 |
20130073497 | NEUROMORPHIC EVENT-DRIVEN NEURAL COMPUTING ARCHITECTURE IN A SCALABLE NEURAL NETWORK - An event-driven neural network including a plurality of interconnected core circuits is provided. Each core circuit includes an electronic synapse array that has multiple digital synapses interconnecting a plurality of digital electronic neurons. A synapse interconnects an axon of a pre-synaptic neuron with a dendrite of a post-synaptic neuron. A neuron integrates input spikes and generates a spike event in response to the integrated input spikes exceeding a threshold. Each core circuit also has a scheduler that receives a spike event and delivers the spike event to a selected axon in the synapse array based on a schedule for deterministic event delivery. | 03-21-2013 |
20130073498 | ELEMENTARY NETWORK DESCRIPTION FOR EFFICIENT LINK BETWEEN NEURONAL MODELS AND NEUROMORPHIC SYSTEMS - A simple format is disclosed and referred to as Elementary Network Description (END). The format can fully describe a large-scale neuronal model and embodiments of software or hardware engines to simulate such a model efficiently. The architecture of such neuromorphic engines is optimal for high-performance parallel processing of spiking networks with spike-timing dependent plasticity. The format is specifically tuned for neural systems and specialized neuromorphic hardware, thereby serving as a bridge between developers of brain models and neuromorphic hardware manufactures. | 03-21-2013 |
20130073499 | APPARATUS AND METHOD FOR PARTIAL EVALUATION OF SYNAPTIC UPDATES BASED ON SYSTEM EVENTS - Apparatus and methods for partial evaluation of synaptic updates in neural networks. In one embodiment, a pre-synaptic unit is connected to several post-synaptic units via communication channels. Information related to a plurality of post-synaptic pulses generated by the post-synaptic units is stored by the network in response to a system event. Synaptic channel updates are performed by the network using the time intervals between a pre-synaptic pulse, which is generated prior to the system event, and at least a portion of the plurality of post-synaptic pulses. The system event enables removal of the information related to the portion of the post-synaptic pulses from the storage device. A shared memory block within the storage device is used to store data related to post-synaptic pulses generated by different post-synaptic nodes. This configuration enables memory use optimization for post-synaptic units with different firing rates. | 03-21-2013 |
20130073500 | High level neuromorphic network description apparatus and methods - Apparatus and methods for a high-level neuromorphic network description (HLND) framework that may be configured to enable users to define neuromorphic network architectures using a unified and unambiguous representation that is both human-readable and machine-interpretable. The framework may be used to define node types, node-to-node connection types, instantiate node instances for different node types, and generate instances of connection types between these nodes. To facilitate framework usage, the HLND format may provide the flexibility required by computational neuroscientists and, at the same time, provide a user-friendly interface for users with limited experience in modeling neurons. The HLND kernel may comprise an interface to the Elementary Network Description (END) that is optimized for efficient representation of neuronal systems in a hardware-independent manner and enables seamless translation of the HLND model description into hardware instructions for execution by various processing modules. | 03-21-2013 |
20130151451 | NEURAL WORKING MEMORY DEVICE - A spiking neuron-based working memory device is provided. The spiking neuron-based working memory device includes an input interface configured to convert input spike signals into respective burst signals having predetermined forms, and output a sequence of the burst signals, the burst signals corresponding to the input spike signals in a burst structure, and two or more memory elements (MEs) configured to sequentially store features respectively corresponding to the outputted sequence of the burst signals, each of the MEs continuously outputting spike signals respectively corresponding to the stored features. | 06-13-2013 |
20130159232 | REGULATING ACTIVATION THRESHOLD LEVELS IN A SIMULATED NEURAL CIRCUIT - A simulated neural element includes a cell body and one or more simulated branches. Simulated branches are configured to receive input signals and to activate when a combination of the signals received during a specified window of time exceeds a branch activation threshold level. The simulated cell body is configured to activate when a combination of activity in the simulated branches during another specified window of time exceeds a cell body activation threshold level. The branch and cell body activation threshold levels may be automatically and locally regulated so that the actual branch activation rates for the simulated branches approximate desired branch activation rates and the actual cell body activation rate for the simulated cell body approximates a desired cell body activation rate. Such “homeostatic” regulation of branch and cell firing thresholds, done locally (i.e. individually for each branch and cell), may enhance the performance of artificial neural circuitry. | 06-20-2013 |
20130173515 | ELECTRONIC SYNAPSES FROM STOCHASTIC BINARY MEMORY DEVICES - According to a technique, an electronic device is configured to correspond to characteristic features of a biological synapse. The electronic device includes multiple bipolar resistors arranged in parallel to form an electronic synapse, an axonal connection connected to one end of the electronic synapse and to a first electronic neuron, and a dendritic connection connected to another end of the electronic synapse and to a second electronic neuron. An increase and decrease of synaptic conduction in the electronic synapse is based on a probability of switching the plurality of bipolar resistors between a low resistance state and a high resistance state. | 07-04-2013 |
20130262358 | ELECTRONIC CIRCUIT WITH NEUROMORPHIC ARCHITECTURE - Neuromorphic circuits are multi-cell networks configured to imitate the behavior of biological neural networks. A neuromorphic circuit is provided which comprises a network of neurons each identified by a neuron address in the network, each neuron being able to receive and process at least one input signal and then later emit on an output of the neuron a signal representing an event which occurs inside the neuron, and a programmable memory composed of elementary memories each associated with a respective neuron. The elementary memory, which is a memory of post-synaptic addresses and weights, comprises an activation input linked by a conductor to the output of the associated neuron to directly receive an event signal emitted by this neuron without passing through an address encoder or decoder. The post-synaptic addresses extracted from an elementary memory activated by a neuron are applied, with associated synaptic weights, as inputs to the neural network. | 10-03-2013 |
20130297542 | SENSORY INPUT PROCESSING APPARATUS IN A SPIKING NEURAL NETWORK - Apparatus and methods for feedback in a spiking neural network. In one approach, spiking neurons receive a sensory stimulus and a context signal that correspond to the same context. When the stimulus provides sufficient excitation, the neurons generate a response. Context connections are adjusted according to inverse spike-timing-dependent plasticity. When the context signal precedes the post-synaptic spike, context synaptic connections are depressed. Conversely, whenever the context signal follows the post-synaptic spike, the connections are potentiated. The inverse STDP connection adjustment ensures precise control of feedback-induced firing, eliminates runaway positive feedback loops, and enables self-stabilizing network operation. In another aspect of the invention, the connection adjustment methodology facilitates robust context switching when processing visual information. When a context (such as an object) becomes intermittently absent, prior context connection potentiation enables firing for a period of time. If the object remains absent, the connection becomes depressed, thereby preventing further firing. | 11-07-2013 |
20140032465 | SYNAPTIC, DENDRITIC, SOMATIC, AND AXONAL PLASTICITY IN A NETWORK OF NEURAL CORES USING A PLASTIC MULTI-STAGE CROSSBAR SWITCHING - Embodiments of the invention provide a neural network comprising multiple functional neural core circuits, and a dynamically reconfigurable switch interconnect between the functional neural core circuits. The interconnect comprises multiple connectivity neural core circuits. Each functional neural core circuit comprises a first and a second core module. Each core module comprises a plurality of electronic neurons, a plurality of incoming electronic axons, and multiple electronic synapses interconnecting the incoming axons to the neurons. Each neuron has a corresponding outgoing electronic axon. In one embodiment, zero or more sets of connectivity neural core circuits interconnect outgoing axons in a functional neural core circuit to incoming axons in the same functional neural core circuit. In another embodiment, zero or more sets of connectivity neural core circuits interconnect outgoing and incoming axons in a functional neural core circuit to incoming and outgoing axons in a different functional neural core circuit, respectively. | 01-30-2014 |
20140067740 | COMPUTER-IMPLEMENTED SIMULATED INTELLIGENCE CAPABILITIES BY NEUROANATOMICALLY-BASED SYSTEM ARCHITECTURE - Computer-implemented systems for simulated intelligence information processing comprising: a digital processing device comprising an operating system configured to perform executable instructions and a memory; a computer program including instructions executable by the digital processing device to create a hierarchical software architecture comprising: a software module for providing a functional interpretation of the prosencephalon, or parts thereof; a software module for providing a functional interpretation of the mesencephalon, or parts thereof; and a software module for providing a functional interpretation of the rhombencephalon, or parts thereof; wherein the software architecture simulates vertebrate, mammalian, primate, or human neuroanatomy. In some embodiments, the systems create simulated intelligence. | 03-06-2014 |
20140067741 | HYBRID INTERCONNECT STRATEGY FOR LARGE-SCALE NEURAL NETWORK SYSTEMS - A plurality of chips arranged in a certain layout so as to face free space, and one or more optical elements are included. In the case where signal traffic for electrical communication with a given chip exceeds or is expected to exceed a certain threshold, a plurality of chips involved in communication routing of the excess signal traffic are identified, part of related signal traffic that crosses the plurality of identified chips is converted from an electric signal into an optical signal to re-route the excess signal traffic, and paths of the related signal traffic are dynamically adapted from fixed wired paths between the plurality of chips to optical communication paths formed in the free space. | 03-06-2014 |
20140122402 | NETWORK OF ARTIFICIAL NEURONS BASED ON COMPLEMENTARY MEMRISTIVE DEVICES - A neural network comprises a plurality of artificial neurons and a plurality of artificial synapses, each input neuron being connected to each output neuron by way of an artificial synapse, the network being characterized in that each synapse consists of a first memristive device connected to a first input of an output neuron and of a second memristive device, mounted in opposition to said first device and connected to a second, complemented, input of said output neuron, so that said output neuron integrates the difference between the currents originating from the first and second devices. | 05-01-2014 |
20140180988 | HARDWARE ARCHITECTURE FOR SIMULATING A NEURAL NETWORK OF NEURONS - Embodiments of the invention relate to a neural network system for simulating neurons of a neural model. One embodiment comprises a memory device that maintains neuronal states for multiple neurons, a lookup table that maintains state transition information for multiple neuronal states, and a controller unit that manages the memory device. The controller unit updates a neuronal state for each neuron based on incoming spike events targeting said neuron and state transition information corresponding to said neuronal state. | 06-26-2014 |
20140207719 | STRUCTURAL PLASTICITY IN SPIKING NEURAL NETWORKS WITH SYMMETRIC DUAL OF AN ELECTRONIC NEURON - A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens, each noruen is interconnected via the interconnect network with those neurons that the noruen's corresponding neuron sends its axon to. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to dendrite of a second neuron, a noruen corresponding to the second neuron is connected via its axon through the same synapse to dendrite of the noruen corresponding to the first neuron. | 07-24-2014 |
20140214738 | NEURISTOR-BASED RESERVOIR COMPUTING DEVICES - A neuristor-based reservoir computing device includes support circuitry formed in a complimentary metal oxide semiconductor (CMOS) layer, input nodes connected to the support circuitry and output nodes connected to the support circuitry. Thin film neuristor nodes are disposed over the CMOS layer with a first portion of the neuristor nodes connected to the input nodes and a second portion of the neuristor nodes connected to the output nodes. Interconnections between the neuristor nodes form a reservoir accepting input signals from the input nodes and outputting signals on the output nodes. A method for forming a neuristor-based reservoir computing device is also provided. | 07-31-2014 |
20140317035 | APPARATUS AND METHODS FOR EVENT-BASED COMMUNICATION IN SPIKING NEURON NETWORKS - Apparatus and methods for event-based communication in a spiking neuron network. The network may comprise units communicating by spikes via synapses. The spikes may carry payload data comprising one or more bits. The payload may be stored in a buffer of a pre-synaptic unit and be configured to be accessed by the post-synaptic unit. Spikes with different payloads may cause different actions in the recipient unit. Sensory input spikes may cause a post-synaptic response and trigger a connection efficacy update. Teaching input spikes may trigger the efficacy update without causing a post-synaptic response. | 10-23-2014 |
20140372355 | APPARATUS AND METHOD FOR PARTIAL EVALUATION OF SYNAPTIC UPDATES BASED ON SYSTEM EVENTS - Apparatus and methods for partial evaluation of synaptic updates in neural networks. In one embodiment, a pre-synaptic unit is connected to several post-synaptic units via communication channels. Information related to a plurality of post-synaptic pulses generated by the post-synaptic units is stored by the network in response to a system event. Synaptic channel updates are performed by the network using the time intervals between a pre-synaptic pulse generated prior to the system event and at least a portion of the plurality of the post-synaptic pulses. The system event enables removal of the information related to the portion of the post-synaptic pulses from the storage device. A shared memory block within the storage device is used to store data related to post-synaptic pulses generated by different post-synaptic nodes. This configuration enables memory use optimization for post-synaptic units with different firing rates. | 12-18-2014 |
20150039546 | DUAL DETERMINISTIC AND STOCHASTIC NEUROSYNAPTIC CORE CIRCUIT - One embodiment provides a system comprising a memory device for maintaining deterministic neural data relating to a digital neuron and a logic circuit for deterministic neural computation and stochastic neural computation. Deterministic neural computation comprises processing a neuronal state of the neuron based on the deterministic neural data maintained. Stochastic neural computation comprises generating stochastic neural data relating to the neuron and processing the neuronal state of the neuron based on the stochastic neural data generated. | 02-05-2015 |
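The dual-mode idea in 20150039546 can be illustrated with a toy update rule. The leaky-integrate arithmetic, the constants, and the function names below are made-up assumptions; only the split between "process stored deterministic data" and "generate stochastic data, then process it" comes from the abstract.

```python
import random

# Sketch of the dual deterministic/stochastic core in 20150039546.
# The update rule and constants are illustrative assumptions.

def deterministic_step(potential, stored_input, leak=0.1):
    # Deterministic mode: integrate data maintained in the memory device.
    return potential + stored_input - leak

def stochastic_step(potential, rng, leak=0.1):
    # Stochastic mode: generate neural data, then integrate it.
    generated = rng.random()
    return potential + generated - leak

rng = random.Random(0)               # seeded for reproducibility
p_det = deterministic_step(0.0, 0.5) # always 0.4
p_sto = stochastic_step(0.0, rng)    # depends on the generated sample
```

In hardware, the abstract's single logic circuit would select between the two paths rather than exposing two separate functions.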
20150058268 | HIERARCHICAL SCALABLE NEUROMORPHIC SYNAPTRONIC SYSTEM FOR SYNAPTIC AND STRUCTURAL PLASTICITY - In one embodiment, the present invention provides a neural network circuit comprising multiple symmetric core circuits. Each symmetric core circuit comprises a first core module and a second core module. Each core module comprises a plurality of electronic neurons, a plurality of electronic axons, and an interconnection network comprising multiple electronic synapses interconnecting the axons to the neurons. Each synapse interconnects an axon to a neuron. The first core module and the second core module are logically overlaid on one another such that neurons in the first core module are proximal to axons in the second core module, and axons in the first core module are proximal to neurons in the second core module. Each neuron in each core module receives axonal firing events via interconnected axons and generates a neuronal firing event according to a neuronal activation function. | 02-26-2015 |
20150112910 | HARDWARE ENHANCEMENTS TO RADIAL BASIS FUNCTION WITH RESTRICTED COULOMB ENERGY LEARNING AND/OR K-NEAREST NEIGHBOR BASED NEURAL NETWORK CLASSIFIERS - This disclosure describes embodiments for a hardware-based neural network integrated circuit classifier incorporating natively implemented Radial Basis Functions, a Restricted Coulomb Energy function, and/or kNN to make it more practical for handling a broader group of parallel algorithms. | 04-23-2015 |
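Restricted Coulomb Energy (RCE) learning, one of the classifier families named in 20150112910, is simple enough to sketch in software. This is the textbook RCE procedure (commit a prototype on an unrecognized sample, shrink the influence fields of wrong-class prototypes), not the patent's hardware design; `r_max` and the data are arbitrary.

```python
import math

# Minimal textbook RCE training sketch; parameters are illustrative,
# not taken from the patent's hardware implementation.

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rce_train(samples, r_max=5.0):
    protos = []  # each prototype: [center, radius, label]
    for x, label in samples:
        hits = [p for p in protos if dist(x, p[0]) < p[1]]
        if not any(p[2] == label for p in hits):
            protos.append([x, r_max, label])      # commit a new prototype
        for p in hits:
            if p[2] != label:                     # shrink wrong-class fields
                p[1] = min(p[1], dist(x, p[0]))
    return protos

def rce_classify(protos, x):
    labels = {p[2] for p in protos if dist(x, p[0]) < p[1]}
    return labels.pop() if len(labels) == 1 else None  # None = ambiguous/unknown

protos = rce_train([((0.0, 0.0), "A"), ((4.0, 0.0), "B"), ((1.0, 0.0), "A")])
```

A hardware classifier evaluates all prototype distances in parallel, which is the point of a native circuit implementation; the sequential loop here only shows the learning rule.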
20150302294 | MULTI-SCALE SPATIO-TEMPORAL NEURAL NETWORK SYSTEM - Embodiments of the invention relate to a multi-scale spatio-temporal neural network system. One embodiment comprises a neural network including multiple heterogeneous neuron populations that operate at different time scales. Each neuron population comprises at least one digital neuron. Each neuron population further comprises a time scale generation circuit that controls timing for operation of said neuron population, wherein each neuron of said neuron population integrates neuronal firing events at a time scale corresponding to said neuron population. The neural network further comprises a plurality of synapses interconnecting the neurons, wherein each synapse interconnects a neuron with another neuron. At least one neuron receives neuronal firing events from an interconnected neuron that operates at a different time scale. | 10-22-2015 |
20150347897 | STRUCTURAL PLASTICITY IN SPIKING NEURAL NETWORKS WITH SYMMETRIC DUAL OF AN ELECTRONIC NEURON - A neural system comprises multiple neurons interconnected via synapse devices. Each neuron integrates input signals arriving on its dendrite, generates a spike in response to the integrated input signals exceeding a threshold, and sends the spike to the interconnected neurons via its axon. The system further includes multiple noruens, each interconnected via the interconnect network with those neurons to which the noruen's corresponding neuron sends its axon. Each noruen integrates input spikes from connected spiking neurons and generates a spike in response to the integrated input spikes exceeding a threshold. There can be one noruen for every corresponding neuron. For a first neuron connected via its axon via a synapse to the dendrite of a second neuron, the noruen corresponding to the second neuron is connected via its axon through the same synapse to the dendrite of the noruen corresponding to the first neuron. | 12-03-2015 |
20150363688 | MODELING INTERESTINGNESS WITH DEEP NEURAL NETWORKS - An “Interestingness Modeler” uses deep neural networks to learn deep semantic models (DSM) of “interestingness.” The DSM, consisting of two branches of deep neural networks or their convolutional versions, identifies and predicts target documents that would interest users reading source documents. The learned model observes, identifies, and detects naturally occurring signals of interestingness in click transitions between source and target documents derived from web browser logs. Interestingness is modeled with deep neural networks that map source-target document pairs to feature vectors in a latent space, trained on document transitions in view of a “context” and optional “focus” of source and target documents. Network parameters are learned to minimize distances between source documents and their corresponding “interesting” targets in that space. The resulting interestingness model has applicable uses, including, but not limited to, contextual entity searches, automatic text highlighting, prefetching documents of likely interest, automated content recommendation, automated advertisement placement, etc. | 12-17-2015 |
20150379395 | NEURISTOR-BASED RESERVOIR COMPUTING DEVICES - A neuristor-based reservoir computing device includes support circuitry formed in a complementary metal oxide semiconductor (CMOS) layer, input nodes connected to the support circuitry and output nodes connected to the support circuitry. Thin film neuristor nodes are disposed over the CMOS layer with a first portion of the neuristor nodes connected to the input nodes and a second portion of the neuristor nodes connected to the output nodes. Interconnections between the neuristor nodes form a reservoir accepting input signals from the input nodes and outputting signals on the output nodes. A method for forming a neuristor-based reservoir computing device is also provided. | 12-31-2015 |
20160004957 | COMPUTER-IMPLEMENTED SIMULATED INTELLIGENCE CAPABILITIES BY NEUROANATOMICALLY-BASED SYSTEM ARCHITECTURE - Computer-implemented systems for simulated intelligence information processing comprising: a digital processing device comprising an operating system configured to perform executable instructions and a memory; a computer program including instructions executable by the digital processing device to create a hierarchical software architecture comprising: a software module for providing a functional interpretation of the prosencephalon, or parts thereof; a software module for providing a functional interpretation of the mesencephalon, or parts thereof; and a software module for providing a functional interpretation of the rhombencephalon, or parts thereof; wherein the software architecture simulates vertebrate, mammalian, primate, or human neuroanatomy. In some embodiments, the systems create simulated intelligence. | 01-07-2016 |
20160098630 | NEUROMORPHIC CIRCUIT THAT FACILITATES INFORMATION ROUTING AND PROCESSING - The disclosed embodiments relate to a system that selectively propagates information through a neuromorphic circuit comprising a set of interconnected neurons. During operation, a neuron in the set of neurons receives information-carrying current pulses from one or more upstream information-carrying neurons, wherein the information-carrying current pulses are insufficient to cause the neuron to generate output current pulses. The neuron also receives selectively generated gating current pulses from one or more gating neurons, wherein the gating current pulses cause a neural voltage of the neuron to approach a firing threshold. In this way, concurrently received information-carrying current pulses combine with the gating current pulses to cause the neural voltage to exceed the firing threshold, which causes the neuron to generate output current pulses that propagate to downstream neurons. | 04-07-2016 |
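The gating mechanism of 20160098630 reduces to one inequality: an information pulse alone is subthreshold, and only the coincidence of an information pulse and a gating pulse crosses the firing threshold. The constants below are made-up values chosen to demonstrate that coincidence condition, not figures from the patent.

```python
# Illustrative integrate-and-fire sketch of the gating idea in 20160098630.
# All constants are made-up values for the sketch.

THRESHOLD = 1.0
INFO_PULSE = 0.6   # information-carrying current: insufficient on its own
GATE_PULSE = 0.5   # selectively generated by gating neurons

def neuron_step(info, gated):
    """One time step: sum concurrent pulses; True means an output pulse
    propagates to downstream neurons."""
    v = (INFO_PULSE if info else 0.0) + (GATE_PULSE if gated else 0.0)
    return v >= THRESHOLD

# Information alone, gating alone, and their coincidence:
fires = [neuron_step(i, g) for i, g in [(True, False), (False, True), (True, True)]]
```

Only the third case fires, which is how the gating neurons selectively route information through the circuit.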
20160125289 | MAPPING GRAPHS ONTO CORE-BASED NEUROMORPHIC ARCHITECTURES - Embodiments of the invention provide a method for mapping a bipartite graph onto a neuromorphic architecture comprising of a plurality of interconnected neuromorphic core circuits. The graph includes a set of source nodes and a set of target nodes. The method comprises, for each source node, creating a corresponding splitter construct configured to duplicate input. Each splitter construct comprises a first portion of a core circuit. The method further comprises, for each target node, creating a corresponding merger construct configured to combine input. Each merger construct comprises a second portion of a core circuit. Source nodes and target nodes are connected based on a permutation of an interconnect network interconnecting the core circuits. | 05-05-2016 |
20160155048 | HARDWARE ENHANCEMENTS TO RADIAL BASIS FUNCTION WITH RESTRICTED COULOMB ENERGY LEARNING AND/OR K-NEAREST NEIGHBOR BASED NEURAL NETWORK CLASSIFIERS | 06-02-2016 |
20160196489 | ARTIFICIAL NEURON | 07-07-2016 |
20160203400 | NEUROMORPHIC MEMORY CIRCUIT | 07-14-2016 |
20160253588 | Neuromimetic Circuit and Method of Fabrication | 09-01-2016 |
20160379108 | DEEP NEURAL NETWORK PARTITIONING ON SERVERS - A method is provided for implementing a deep neural network on a server component that includes a host component including a CPU and a hardware acceleration component coupled to the host component. The deep neural network includes a plurality of layers. The method includes partitioning the deep neural network into a first segment and a second segment, the first segment including a first subset of the plurality of layers, the second segment including a second subset of the plurality of layers, configuring the host component to implement the first segment, and configuring the hardware acceleration component to implement the second segment. | 12-29-2016 |
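The partitioning step in 20160379108 is just a split of an ordered layer list into a host segment and an accelerator segment. In this sketch the "layers" are plain Python functions and the split index is an assumed tuning knob; the real system configures a CPU and a hardware acceleration component rather than running both segments in one process.

```python
# Sketch of layer partitioning between a host CPU and an accelerator,
# as in 20160379108. Layers here are stand-in Python functions.

def partition(layers, split):
    """Split layers into a host segment and an accelerator segment."""
    return layers[:split], layers[split:]

def run(segments, x):
    # Host segment executes first, then the accelerator segment.
    for segment in segments:
        for layer in segment:
            x = layer(x)
    return x

layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
host, accel = partition(layers, 2)   # first two layers on the host
y = run([host, accel], 5)            # ((5 + 1) * 2) - 3
```

The interesting engineering question, which the abstract leaves open, is where to put the split so that the host/accelerator transfer cost does not outweigh the acceleration.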
20170236051 | Intelligent Autonomous Feature Extraction System Using Two Hardware Spiking Neural Networks with Spike Timing Dependent Plasticity | 08-17-2017 |
20190147318 | Highly Efficient Convolutional Neural Networks | 05-16-2019 |
706028000 | Modular | 6 |
20110125688 | System and method for output of assessment of physical entity attribute effects on physical environments through in part social networking service input - A system includes, but is not limited to: one or more obtaining assessment information modules configured for obtaining assessment information for at least one of one or more physical entities, the assessment information based at least in part upon status information about one or more physical attributes associated with the one or more physical entities, the one or more physical attributes each being perceived by one or more humans as being capable of having one or more effects upon one or more physical environments, and the assessment information based at least in part upon input information from at least one of the one or more humans through at least in part one or more social networking services, the input information associated with at least one of the one or more physical attributes, and one or more output information modules configured for outputting output information based at least in part upon one or more elements of the assessment information. In addition to the foregoing, other related system/system aspects are described in the claims, drawings, and text forming a part of the present disclosure. | 05-26-2011 |
20120109866 | COMPACT COGNITIVE SYNAPTIC COMPUTING CIRCUITS - Embodiments of the invention relate to producing spike-timing dependent plasticity using electronic neurons interconnected in a crossbar array network. The crossbar array network comprises a plurality of crossbar arrays. Each crossbar array comprises a plurality of axons and a plurality of dendrites such that the axons and dendrites are transverse to one another, and multiple synapse devices, wherein each synapse device is at a cross-point junction of the crossbar array coupled between a dendrite and an axon. The crossbar arrays are spatially in a staggered pattern providing a staggered crossbar layout of the synapse devices. | 05-03-2012 |
20120284218 | NEURON DEVICE AND NEURAL NETWORK - A neuron device includes a bottom electrode, a top electrode, and a layer of metal oxide variable resistance material sandwiched between the bottom electrode and the top electrode, in which the neuron device is switched to a normal state upon application of a reset pulse, and is switched to an excitation state upon application of stimulus pulses. The neuron device responds comprehensively to stimulus voltage pulses of different amplitudes and widths and to different numbers of pulses in a stimulus sequence, and provides the functionalities of a weighting section and a computing section. The neuron device has a simple structure, excellent scalability, quick speed, low operation voltage, and is compatible with the conventional silicon-based CMOS fabrication process, and thus suitable for mass production. The neuron device is capable of performing many biological functions and complex logic operations. | 11-08-2012 |
20140222740 | CONSOLIDATING MULTIPLE NEUROSYNAPTIC CORES INTO ONE MEMORY - Embodiments of the invention relate to a neural network system comprising a single memory block for multiple neurosynaptic core modules. One embodiment comprises a neural network system including a memory array that maintains information for multiple neurosynaptic core modules. Each neurosynaptic core module comprises multiple neurons. The neural network system further comprises at least one logic circuit. Each logic circuit receives neuronal firing events targeting a neurosynaptic core module of the neural network system, and said logic circuit integrates the firing events received based on information maintained in said memory for said neurosynaptic core module. | 08-07-2014 |
20150363690 | UNSUPERVISED, SUPERVISED, AND REINFORCED LEARNING VIA SPIKING COMPUTATION - The present invention relates to unsupervised, supervised and reinforced learning via spiking computation. The neural network comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of edges interconnects the plurality of neural modules. Each edge interconnects a first neural module to a second neural module, and each edge comprises a weighted synaptic connection between every neuron in the first neural module and a corresponding neuron in the second neural module. | 12-17-2015 |
20160132765 | POWER-DRIVEN SYNTHESIS UNDER LATENCY CONSTRAINTS - Embodiments of the present invention relate to meeting latency constraints in a multi-core neurosynaptic network. In one embodiment of the present invention, a method of and computer program product for power-driven synthesis under latency constraints is provided. Power consumption of a neurosynaptic network is modeled as wire length. The neurosynaptic network comprises a plurality of neurosynaptic cores. Each of the plurality of neurosynaptic cores is modeled as a node in a placement graph. The graph has a plurality of edges. A weight is assigned to each of the plurality of edges based on a spike frequency. An arrangement of the neurosynaptic cores is determined. The arrangement comprises a length of each of the plurality of edges. A maximum length is compared to the length of each of the plurality of edges. The weight of at least one of the plurality of edges is increased where the length is greater than the maximum length. | 05-12-2016 |
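The cost model shared by 20160132765 and its companion filing can be sketched directly: power is approximated as spike-frequency-weighted wire length between placed cores, and the latency-constraint step bumps the weight of any edge longer than the allowed maximum so the next placement pass pulls those cores together. The grid placement, Manhattan metric, and all numbers below are toy assumptions.

```python
# Sketch of the power/latency model in 20160132765. Placement, metric,
# and weights are toy assumptions, not the patented synthesis flow.

def wire_length(placement, a, b):
    (xa, ya), (xb, yb) = placement[a], placement[b]
    return abs(xa - xb) + abs(ya - yb)   # Manhattan distance on the grid

def power(placement, edges):
    # Modeled power: spike-frequency weight times wire length, summed.
    return sum(w * wire_length(placement, a, b) for a, b, w in edges)

def tighten(edges, placement, max_len, bump=1.0):
    # One synthesis step: penalize edges that violate the latency bound.
    return [(a, b, w + bump if wire_length(placement, a, b) > max_len else w)
            for a, b, w in edges]

placement = {"c0": (0, 0), "c1": (3, 0), "c2": (0, 1)}
edges = [("c0", "c1", 2.0), ("c0", "c2", 1.0)]
p = power(placement, edges)                    # 2*3 + 1*1
edges2 = tighten(edges, placement, max_len=2)  # c0-c1 is too long, so bumped
```

A full flow would alternate `tighten` with a placer that re-minimizes the weighted wire length; only the modeling step is shown here.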
706029000 | Lattice | 11 |
20130339281 | MULTI-PROCESSOR CORTICAL SIMULATIONS WITH RECIPROCAL CONNECTIONS WITH SHARED WEIGHTS - Embodiments of the invention relate to distributed simulation frameworks that provide reciprocal communication. One embodiment comprises interconnecting neuron groups on different processors via a plurality of reciprocal communication pathways, and facilitating the exchange of reciprocal spiking communication between two different processors using at least one Ineuron module. Each processor includes at least one neuron group. Each neuron group includes at least one electronic neuron. | 12-19-2013 |
20140019393 | UNIVERSAL, ONLINE LEARNING IN MULTI-MODAL PERCEPTION-ACTION SEMILATTICES - In one embodiment, the present invention provides a method for interconnecting neurons in a neural network. At least one node among a first set of nodes is interconnected to at least one node among a second set of nodes, and nodes of the first and second set are arranged in a lattice. At least one node of the first set represents a sensory-motor modality of the neural network. At least one node of the second set is a union of at least two nodes of the first set. Each node in the lattice has an acyclic digraph comprising multiple vertices and directed edges. Each vertex represents a neuron population. Each directed edge comprises multiple synaptic connections. Vertices in different acyclic digraphs are interconnected using an acyclic bottom-up digraph. The bottom-up digraph has a corresponding acyclic top-down digraph. Vertices in the bottom-up digraph are interconnected to vertices in the top-down digraph. | 01-16-2014 |
20140025614 | METHODS AND DEVICES FOR PROGRAMMING A STATE MACHINE ENGINE - A state machine engine having a program buffer. The program buffer is configured to receive configuration data via a bus interface for configuring a state machine lattice. The state machine engine also includes a repair map buffer configured to provide repair map data to an external device via the bus interface. The state machine lattice includes multiple programmable elements. Each programmable element includes multiple memory cells configured to analyze data and to output a result of the analysis. | 01-23-2014 |
20140067742 | HYBRID INTERCONNECT STRATEGY FOR LARGE-SCALE NEURAL NETWORK SYSTEMS - A plurality of chips arranged in a certain layout so as to face free space, and one or more optical elements are included. In the case where signal traffic for electrical communication with a given chip exceeds or is expected to exceed a certain threshold, a plurality of chips involved in communication routing of the excess signal traffic are identified, part of related signal traffic that crosses the plurality of identified chips is converted from an electric signal into an optical signal to re-route the excess signal traffic, and paths of the related signal traffic are dynamically adapted from fixed wired paths between the plurality of chips to optical communication paths formed in the free space. | 03-06-2014 |
20150088797 | SYNAPSE CIRCUITS FOR CONNECTING NEURON CIRCUITS, UNIT CELLS COMPOSING NEUROMORPHIC CIRCUIT, AND NEUROMORPHIC CIRCUITS - Example embodiments relate to a synapse circuit connecting neuron circuits by using two memristors so as to enhance symmetry, a neuromorphic circuit using the same, and a unit cell composing the neuromorphic circuit. | 03-26-2015 |
20150112911 | COUPLING PARALLEL EVENT-DRIVEN COMPUTATION WITH SERIAL COMPUTATION - The present invention provides a system comprising a neurosynaptic processing device including multiple neurosynaptic core circuits for parallel processing, and a serial processing device including at least one processor core for serial processing. Each neurosynaptic core circuit comprises multiple electronic neurons interconnected with multiple electronic axons via a plurality of synapse devices. The system further comprises an interconnect circuit for coupling the neurosynaptic processing device with the serial processing device. The interconnect circuit enables the exchange of data packets between the neurosynaptic processing device and the serial processing device. | 04-23-2015 |
20160125288 | Physically Unclonable Functions Using Neuromorphic Networks - The disclosure describes the use of a neural network circuit, such as an oscillatory neural network or cellular neural network, to serve as a physically unclonable function on an integrated circuit or within an electronic system. The manufacturing process variations that impact the initial state of the neural network parameters are used to provide the unique identification for the physically unclonable function. A challenge signal to the neural network results in a response that is unique to the circuit's process variations. The neural network is designed such that there are random variations among manufactured circuits, but that the specific instance variations are sufficiently deterministic with respect to circuit aging and environmental conditions such as temperature and supply voltage. | 05-05-2016 |
20160132767 | POWER DRIVEN SYNAPTIC NETWORK SYNTHESIS - Embodiments of the present invention relate to providing power minimization in a multi-core neurosynaptic network. In one embodiment of the present invention, a method of and computer program product for power-driven synaptic network synthesis is provided. Power consumption of a neurosynaptic network is modeled as wire length. The neurosynaptic network comprises a plurality of neurosynaptic cores. An arrangement of the synaptic cores is determined by minimizing the wire length. | 05-12-2016 |
20160155047 | MULTI-COMPARTMENT NEURONS WITH NEURAL CORES | 06-02-2016 |
20160189025 | SYSTEMS AND METHODS FOR CROWD-VERIFICATION OF BIOLOGICAL NETWORKS - Systems and methods are provided for curating and disseminating a network model. A representation of a network model is provided, and data is received that is representative of user actions. The user actions are directed to at least one element of the network model. A score is assigned to each respective element based on a number of user actions received for the respective element. A verified subset of edges is identified that have assigned scores that exceed a verification threshold, and a rejected subset of edges is identified that have assigned scores that are below a rejection threshold. The verified subset of edges and the associated nodes are provided as a curated network model, which omits the rejected subset of edges. | 06-30-2016 |
20190147330 | NEURAL NETWORK CIRCUIT | 05-16-2019 |
706030000 | Recurrent | 4 |
20080201284 | Computer-Implemented Model of the Central Nervous System - A computer-implemented model of the central nervous system includes at least one of a basal ganglia portion, a cerebral cortex portion coupled to the basal ganglia portion, or a cerebellum portion coupled to the cerebral cortex. Each one of the basal ganglia portion, the cerebral cortex portion, and the cerebellum portion is comprised of respective elements representative of real neuroanatomical structures of a central nervous system and the respective elements are adapted to perform functions representative of real neuroanatomical functions of the central nervous system. At least one of the basal ganglia portion, the cerebral cortex portion, or the cerebellum portion is adapted to control a plant. | 08-21-2008 |
20130268473 | NEURAL NETWORK DESIGNING METHOD AND DIGITAL-TO-ANALOG FITTING METHOD - A neural network designing method forms an RNN (Recurrent Neural Network) circuit to include a plurality of oscillating RNN circuits configured to output natural oscillations, and an adding circuit configured to obtain a sum of outputs of the plurality of oscillating RNN circuits, and inputs discrete data to the plurality of oscillating RNN circuits in order to compute a fitting curve with respect to the discrete data output from the adding circuit. | 10-10-2013 |
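The structure in 20130268473, oscillator units feeding an adding circuit that emits the fitting curve, can be mimicked with closed-form sinusoids standing in for the oscillating RNN circuits. The frequencies and amplitudes are arbitrary illustrative choices; a real design would tune the oscillators against the discrete data being fitted.

```python
import math

# Toy version of the oscillators-plus-adder design in 20130268473.
# Closed-form sinusoids replace the oscillating RNN circuits.

def oscillator(freq, amp):
    # One oscillator unit emitting a natural oscillation.
    return lambda t: amp * math.sin(freq * t)

def adding_circuit(units):
    # The adding circuit: sum of all oscillator outputs.
    return lambda t: sum(u(t) for u in units)

units = [oscillator(1.0, 1.0), oscillator(3.0, 0.5)]
curve = adding_circuit(units)   # curve(t) = sin(t) + 0.5*sin(3t)
```

This is the same decomposition idea as a truncated Fourier series: enough oscillators with fitted amplitudes can approximate a smooth curve through the discrete samples.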
20130318020 | ANALOG PROGRAMMABLE SPARSE APPROXIMATION SYSTEM - A system and device for solving sparse algorithms using hardware solutions is described. The hardware solution can comprise one or more analog devices for providing fast, energy efficient solutions to small, medium, and large sparse approximation problems. The system can comprise sub-threshold current mode circuits on a Field Programmable Analog Array (FPAA) or on a custom analog chip. The system can comprise a plurality of floating gates for solving linear portions of a sparse signal. The system can also comprise one or more analog devices for solving non-linear portions of sparse signal. | 11-28-2013 |
20220138525 | MEMORY NETWORK METHOD BASED ON AUTOMATIC ADDRESSING AND RECURSIVE INFORMATION INTEGRATION - A memory network method based on automatic addressing and recursive information integration. The method is based on a memory neural network framework integrating automatic addressing and recursive information, and is an efficient and lightweight memory network method. A memory is read and written by means of an automatic addressing operation having low time and space complexity, and memory information is effectively utilized by a novel computing unit. The whole framework has the characteristics of high efficiency, high speed and high universality. The method is suitable for various time sequence processing tasks, and shows performance superior to that of conventional LSTMs and previous memory networks. | 05-05-2022 |
706031000 | Multilayer feedforward | 9 |
20080319933 | Architecture, system and method for artificial neural network implementation - An architecture, systems and methods for a scalable artificial neural network, wherein the architecture includes: an input layer; at least one hidden layer; an output layer; and a parallelization subsystem configured to provide a variable degree of parallelization to the input layer, at least one hidden layer, and output layer. In a particular case, the architecture includes a back-propagation subsystem that is configured to adjust weights in the scalable artificial neural network in accordance with the variable degree of parallelization. Systems and methods are also provided for selecting an appropriate degree of parallelization based on factors such as hardware resources and performance requirements. | 12-25-2008 |
20080319934 | Neural Network, a Device for Processing Information, a Method of Operating a Neural Network, a Program Element and a Computer-Readable Medium - A neural network … | 12-25-2008 |
20130212053 | FEATURE EXTRACTION DEVICE, FEATURE EXTRACTION METHOD AND PROGRAM FOR SAME - A feature extraction device according to the present invention includes a neural network including neurons each including at least one expressed gene which is an attribute value for determining whether transmission of a signal from one of the first neurons to one of the second neurons is possible, each first neuron having input data resulting from target data to be subjected to feature extraction outputs a first signal value to corresponding second neuron(s) having the same expressed gene as the one in the first neuron, the first signal value increasing as a value of the input data increases, and each second neuron calculates, as a feature quantity of the target data, a second signal value corresponding to a total sum of the first signal values input thereto. | 08-15-2013 |
20140180989 | SYSTEM AND METHOD FOR PARALLELIZING CONVOLUTIONAL NEURAL NETWORKS - A parallel convolutional neural network is provided. The CNN is implemented by a plurality of convolutional neural networks each on a respective processing node. Each CNN has a plurality of layers. A subset of the layers are interconnected between processing nodes such that activations are fed forward across nodes. The remaining subset is not so interconnected. | 06-26-2014 |
20150379393 | MULTIPLEXING PHYSICAL NEURONS TO OPTIMIZE POWER AND AREA - Embodiments of the invention relate to a multiplexed neural core circuit. One embodiment comprises a core circuit including a memory device that maintains neuronal attributes for multiple neurons. The memory device has multiple entries. Each entry maintains neuronal attributes for a corresponding neuron. The core circuit further comprises a controller for managing the memory device. In response to neuronal firing events targeting one of said neurons, the controller retrieves neuronal attributes for the target neuron from a corresponding entry of the memory device, and integrates said firing events based on the retrieved neuronal attributes to generate a firing event for the target neuron. | 12-31-2015 |
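The multiplexed core of 20150379393, one physical controller serving many neurons whose attributes live in a shared memory array, can be sketched as a fetch-integrate-fire loop. The entry field names and the integrate-and-fire rule are illustrative assumptions; the patent specifies only that the controller retrieves an entry per target neuron and integrates events against it.

```python
# Sketch of the multiplexed neural core in 20150379393. Field names and
# the update rule are illustrative assumptions.

memory = [  # one entry of neuronal attributes per neuron
    {"potential": 0.0, "threshold": 1.0, "weight": 0.4},
    {"potential": 0.0, "threshold": 0.5, "weight": 0.3},
]

def controller(events):
    """For each firing event targeting neuron n, fetch its memory entry,
    integrate the event, and emit a firing event on threshold crossing."""
    fired = []
    for n in events:
        entry = memory[n]
        entry["potential"] += entry["weight"]
        if entry["potential"] >= entry["threshold"]:
            fired.append(n)
            entry["potential"] = 0.0   # reset after firing
    return fired

out = controller([0, 1, 0, 1])   # two events each for neurons 0 and 1
```

Time-multiplexing one controller over many memory entries is what trades throughput for the power and area savings the title refers to.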
20160042270 | BRAIN EMULATOR SUPPORT SYSTEM - A technology to build emulated nervous systems is presented here, as well as the interface method for operating the emulated nervous system. The technology provides for inclusion of neuroanatomically accurate definitions organized hierarchically. This permits a highly realistic nervous system to be created and interact with its surrounding environment. | 02-11-2016 |
20160140434 | METHOD FOR PSEUDO-RECURRENT PROCESSING OF DATA USING A FEEDFORWARD NEURAL NETWORK ARCHITECTURE - Recurrent neural networks are powerful tools for handling incomplete data problems in machine learning thanks to their significant generative capabilities. However, the computational demand for algorithms to work in real time applications requires specialized hardware and software solutions. We disclose a method for adding recurrent processing capabilities into a feedforward network without sacrificing much from computational efficiency. We assume a mixture model and generate samples of the last hidden layer according to the class decisions of the output layer, modify the hidden layer activity using the samples, and propagate to lower layers. For an incomplete data problem, the iterative procedure emulates feedforward-feedback loop, filling-in the missing hidden layer activity with meaningful representations. | 05-19-2016 |
20160189026 | Running Time Prediction Algorithm for WAND Queries - A prediction method for estimating the running time of WAND queries executed on a Web search engine, which includes an off-line component that uses the Discrete Fourier Transform (DFT) to model the index as a collection of signals and obtain characteristic vectors for query terms, and an on-line feed-forward neural network with back-propagation to estimate the time required to process incoming queries. The DFT is used to obtain values for six characteristics of the posting lists associated with the query terms. These characteristics are used to train a neural network, which is then used to predict the query execution time. | 06-30-2016 |
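The off-line step of 20160189026 treats a term's posting list as a signal and summarizes it with DFT-derived features. The sketch below takes the magnitudes of the first few DFT coefficients as the characteristic vector; that choice is an assumption for illustration, since the patent derives six specific characteristics it does not enumerate in the abstract.

```python
import cmath

# Sketch of the off-line DFT featurization in 20160189026. Using the
# first k coefficient magnitudes is an illustrative assumption.

def dft_features(signal, k=3):
    """Return magnitudes of the first k DFT coefficients of the signal."""
    n = len(signal)
    feats = []
    for f in range(k):
        s = sum(x * cmath.exp(-2j * cmath.pi * f * t / n)
                for t, x in enumerate(signal))
        feats.append(abs(s))
    return feats

posting_list_scores = [1.0, 0.0, 1.0, 0.0]  # toy per-document impact signal
vec = dft_features(posting_list_scores)      # characteristic vector for the term
```

These per-term vectors would then be the training inputs of the on-line feed-forward predictor, with measured query times as targets.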
20170236053 | Configurable and Programmable Multi-Core Architecture with a Specialized Instruction Set for Embedded Application Based on Neural Networks | 08-17-2017 |