Digital neural network

Subclass of:

706 - Data processing: artificial intelligence

706015000 - NEURAL NETWORK

706026000 - Structure

Patent class list (only non-empty classes are listed)

Deeper subclasses:

Class / Patent application number | Description | Number of patent applications / Date published
706041000 | Digital neural network | 8
20080243741 | METHOD AND APPARATUS FOR DEFINING AN ARTIFICIAL BRAIN VIA A PLURALITY OF CONCEPT NODES CONNECTED TOGETHER THROUGH PREDETERMINED RELATIONSHIPS - A method is provided for defining a network of nodes, each representing a unique concept, and for making connections between individual concepts through unique relationships to other concepts. Each node is operable to store a unique identifier in the network and information regarding the concept, in addition to the unique relationships. (See the concept-node sketch after this list.) | 10-02-2008
20140337263 | ARCHITECTURE FOR IMPLEMENTING AN IMPROVED NEURAL NETWORK - Disclosed is an improved approach to implementing artificial neural networks. According to some approaches, an advanced neural network is implemented using an internet-of-things methodology, in which a large number of ordinary items equipped with RFID technology serve as the vast infrastructure of a neural network. (See the RFID sketch after this list.) | 11-13-2014
20160196488 | NEURAL NETWORK COMPUTING DEVICE, SYSTEM AND METHOD | 07-07-2016
20160203401 | ELECTRONIC CIRCUIT, IN PARTICULAR CAPABLE OF IMPLEMENTING A NEURAL NETWORK, AND NEURAL SYSTEM | 07-14-2016
706042000 | Parallel connection | 3
20090006296 | DMA ENGINE FOR REPEATING COMMUNICATION PATTERNS - A parallel computer system is constructed as a network of interconnected compute nodes that runs a global message-passing application for performing communications across the network. Each compute node includes one or more individual processors with memories, which run local instances of the global message-passing application to carry out local processing operations independent of processing operations carried out at other compute nodes. Each compute node also includes a DMA engine constructed to interact with the application via Injection FIFO Metadata describing multiple Injection FIFOs, where each Injection FIFO may contain an arbitrary number of message descriptors, so that messages are processed with a fixed overhead irrespective of the number of message descriptors in the FIFO. (See the injection-FIFO sketch after this list.) | 01-01-2009
20140330763 | APPARATUS AND METHODS FOR DEVELOPING PARALLEL NETWORKS USING A GENERAL PURPOSE PROGRAMMING LANGUAGE - Apparatus and methods for developing parallel networks. A parallel network design may comprise a general-purpose code (GPC) portion and a network description (ND) portion. General-purpose language (GPL) tools may be utilized in designing the network. The GPL tools may be configured to produce a network specification language (NSL) engine adapted to generate hardware-optimized, machine-executable code corresponding to the network description. The developer may be enabled to describe a parameter of the network, and the GPC portion may be automatically updated consistent with the network parameter value. The GPC byte code may be introspected by the NSL engine to provide the underlying source code, which may be automatically reinterpreted to produce the hardware-optimized machine code. The optimized machine code may be executed in parallel. (See the introspection sketch after this list.) | 11-06-2014
20150039547 | NEUROMORPHIC SYSTEM AND CONFIGURATION METHOD THEREOF - A method of generating neuron spiking pulses in a neuromorphic system is provided. The method includes floating one or more selected bit lines connected to target cells having a first state, from among a plurality of memory cells arranged at intersections of a plurality of word lines and a plurality of bit lines, and stepwise increasing the voltages applied to unselected word lines connected to unselected cells having a second state, from among the memory cells connected to the one or more selected bit lines other than the target cells having the first state. (See the spiking sketch after this list.) | 02-05-2015
706043000 | Digital neuron processor | 1
20100161533 | ADDRESSING SCHEME FOR NEURAL MODELING AND BRAIN-BASED DEVICES USING SPECIAL PURPOSE PROCESSOR - A special purpose processor (SPP) can use a Field Programmable Gate Array (FPGA) to model a large number of neural elements. The FPGAs or similar programmable devices can have multiple cores doing presynaptic, postsynaptic, and plasticity calculations in parallel. Each core can implement multiple neural elements of the neural model. (See the addressing sketch after this list.) | 06-24-2010
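
Concept-node sketch for 20080243741: a minimal Python rendering of the network the abstract describes, in which each node holds a unique identifier, information about its concept, and named relationships to other nodes. The class and method names and the example concepts are assumptions made for illustration, not details taken from the patent.

# Minimal sketch of a concept-node network: each node keeps a unique
# identifier, information about its concept, and named relationships
# to other nodes. Names here are illustrative, not from the patent.

class ConceptNode:
    def __init__(self, node_id, concept_info):
        self.node_id = node_id            # unique identifier in the network
        self.concept_info = concept_info  # information regarding the concept
        self.relationships = {}           # relationship name -> related node id

    def relate(self, relationship, other):
        # Connect this concept to another through a named relationship.
        self.relationships[relationship] = other.node_id


class ConceptNetwork:
    def __init__(self):
        self.nodes = {}                   # node id -> ConceptNode

    def add(self, node):
        self.nodes[node.node_id] = node
        return node


# Usage: a tiny two-concept "brain".
net = ConceptNetwork()
dog = net.add(ConceptNode("dog", "a domesticated canine"))
animal = net.add(ConceptNode("animal", "a living organism"))
dog.relate("is_a", animal)
print(dog.relationships)   # {'is_a': 'animal'}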
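
RFID sketch for 20140337263: a rough Python picture of the internet-of-things idea, in which ordinary RFID-tagged items act as nodes of a very large network and a reader aggregates weighted signals from the tags it can see. The tag identifiers, weights, and threshold below are invented for illustration only.

# Hypothetical reader-side aggregation over RFID-tagged items.
tags_in_range = {          # RFID tag id -> signal observed by the reader
    "tag-0001": 1.0,
    "tag-0002": 0.0,
    "tag-0003": 1.0,
}

weights = {                # assumed learned weight per tag
    "tag-0001": 0.6,
    "tag-0002": 0.9,
    "tag-0003": 0.3,
}

activation = sum(weights[t] * s for t, s in tags_in_range.items())
print("node fires" if activation > 0.5 else "node stays silent")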
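
Injection-FIFO sketch for 20090006296: a small Python model of the bookkeeping the abstract describes, where the DMA engine works from fixed-size metadata per FIFO, so the cost of starting a send does not depend on how many message descriptors the FIFO holds. The field and method names are assumptions for illustration.

from collections import deque
from dataclasses import dataclass

@dataclass
class MessageDescriptor:
    destination: int     # target compute node
    offset: int          # payload offset in memory
    length: int          # payload length in bytes

@dataclass
class InjectionFifo:
    descriptors: deque   # arbitrary number of pending descriptors

    def metadata(self):
        # Fixed-size summary the DMA engine works from.
        return {"pending": len(self.descriptors)}

    def inject_next(self):
        # Constant-overhead step: pop exactly one descriptor and "send" it.
        if self.descriptors:
            d = self.descriptors.popleft()
            print(f"DMA send {d.length} bytes to node {d.destination}")

fifo = InjectionFifo(deque(MessageDescriptor(n, 0, 64) for n in range(3)))
print(fifo.metadata())      # {'pending': 3}
fifo.inject_next()          # DMA send 64 bytes to node 0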
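
Introspection sketch for 20140330763: a loose Python analogy for the step in which the engine recovers the source behind the general-purpose code portion and reinterprets it. Only the introspection is shown; generate_optimized() is a hypothetical stand-in for the NSL engine, and the network parameters are made up.

import inspect

def network_description():
    # GPC portion: plain Python describing network parameters.
    return {"num_units": 128, "connectivity": "all-to-all"}

def generate_optimized(source_text, params):
    # Stand-in for the NSL engine: here it only reports what it received.
    return f"optimized build for {params} from {len(source_text)} chars of source"

# Recover the source of the GPC portion (a rough stand-in for byte-code
# introspection) and hand it to the engine; run this as a script file.
source = inspect.getsource(network_description)
print(generate_optimized(source, network_description()))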
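
Spiking sketch for 20150039547: a very simplified software model of the pulse-generation method, in which a selected bit line is left floating and the voltage on the unselected word lines is raised in steps until the charge accumulated through the second-state cells crosses a firing threshold. All voltages, conductances, and the threshold are made-up numbers.

# Toy model: raise word-line voltage in steps until the floating bit line
# accumulates enough charge to emit a spike.
unselected_cell_conductance = [0.2, 0.0, 0.2, 0.2]   # per unselected word line
threshold = 0.5

bitline_charge = 0.0
for step, wordline_voltage in enumerate([0.2, 0.4, 0.6, 0.8], start=1):
    bitline_charge += wordline_voltage * sum(unselected_cell_conductance)
    if bitline_charge >= threshold:
        print(f"spike emitted at step {step} (charge {bitline_charge:.2f})")
        break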
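
Addressing sketch for 20100161533: a minimal Python illustration of packing many neural elements onto a few parallel cores, so that an element's global address splits into a core index and a local slot. The core and element counts, and the address() helper, are arbitrary example choices, not the scheme defined in the patent.

# Split a global neural-element id into (core, local slot).
CORES = 4
ELEMENTS_PER_CORE = 256

def address(element_id):
    # The core simulates the element's presynaptic, postsynaptic, and
    # plasticity updates; the slot is its position within that core.
    core, slot = divmod(element_id, ELEMENTS_PER_CORE)
    assert core < CORES, "element id out of range for this configuration"
    return core, slot

core, slot = address(700)
print(f"element 700 -> core {core}, slot {slot}")   # core 2, slot 188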