Entries
Document | Title | Date |
20080209121 | Serial Content Addressable Memory - A technique is presented for implementing a content addressable memory (CAM) function using traditional memory, where the input data is serially loaded into a serial CAM. Various additions, which allow for predicting the result of a serial CAM access coincident with the completion of serially inputting the data are also presented. | 08-28-2008 |
20080222352 | METHOD, SYSTEM AND PROGRAM PRODUCT FOR EQUITABLE SHARING OF A CAM TABLE IN A NETWORK SWITCH IN AN ON-DEMAND ENVIRONMENT - A method, system and program product for equitable sharing of a CAM (Content Addressable Memory) table among multiple users of a switch. The method includes reserving buffers in the table to be shared, the remaining buffers being allocated to each user. The method further includes establishing whether or not an address contained in a packet from a user is listed in a buffer in the table; if the address is listed, updating a time-to-live value for the buffer and forwarding the packet; and, if the address is not listed, determining whether or not the user has exceeded its allocated buffers and whether or not the reserved buffers have been exhausted, such that, if the user has exceeded its allocated buffers and the reserved buffers have been exhausted, the address is not added to the table and the user is precluded from using any additional buffers in the network switch. | 09-11-2008 |
20080229008 | Sharing physical memory locations in memory devices - A memory structure includes a plurality of address banks where each address bank is operative to store a memory address. In certain embodiments, at least two of the address banks share physical memory locations for at least one redundant most significant bit. Additionally, at least two of the address banks in certain embodiments share physical memory locations for at least one redundant most significant bit and at least one redundant least significant bit. At least two of the address banks in certain embodiments also share physical memory locations for at least one redundant interior bit. | 09-18-2008 |
20080244169 | Apparatus for Efficient Streaming Data Access on Reconfigurable Hardware and Method for Automatic Generation Thereof - A content addressable memory (CAM) is disclosed that includes a memory having a first port configured to write a 1-bit data to the memory and a second port configured to read and write N-bit data. To update the CAM, an N-bit zero data word is written to the second port at a first address A | 10-02-2008 |
20080244170 | Intelligent allocation of programmable comparison operations for reducing the number of associative memory entries required - Intelligent allocation of programmable comparison operations may reduce the number of associative memory entries required for programming an associative memory (e.g., ternary content-addressable memory) with multiple matching definitions (e.g., access control list entries, routing information, etc.), which may be particularly useful in identifying packet processing operations to be performed on packets in a packet switching device. The higher-cost comparison operations, in terms of the number of associative memory entries required to natively support such operations, are allocated to one or more comparison evaluators (e.g., programmable logic and/or processing elements configured to evaluate one or more comparison operations) configured to evaluate an input value with one or more of the programmable comparison operations in order to generate and provide one or more values representing results of the evaluations to one or more associative memories for use in identifying the packet processing operations. | 10-02-2008 |
20080263269 | KEY SELECTION DEVICE AND PROCESS FOR CONTENT-ADDRESSABLE MEMORY - A method and a computer readable medium having executable instructions are provided. The method and instructions, when executed, generate a first look-up key from a group of look-up key units stored in a data storage, generation of the first look-up key being completed prior to the completion of a key generation processing cycle. A next look-up key unit from the group of look-up key units stored in the data storage may be skipped over when the next look-up key corresponds to a second look-up key that has a key length equal to or smaller than a predetermined key length. A third look-up key unit may be selected from the group of look-up key units, the third look-up key unit associated with a third look-up key having a key length greater than a second predetermined key length, the second predetermined key length being greater than the first predetermined key length. The first look-up key and a portion of the third look-up key may be output sequentially during the same output processing cycle. | 10-23-2008 |
20080263270 | Method and apparatus for overlaying flat and/or tree based data sets onto content addressable memory (CAM) device - A content addressable memory device | 10-23-2008 |
20080270684 | Content Addressed Storage device configured to maintain content address mapping - A content addressed storage device configured to maintain content address mapping is disclosed. A data object to be stored on the content addressed storage device and a local data object identifier by which the data object is known to the sending source are received from a sending source. A content address to be associated with the data object on the content addressed storage device is determined based at least in part on the contents of the data object. The data object is stored on the content addressed storage device in a storage location associated with the content address. A mapping that associates the local data object identifier with the content address is maintained on the content addressed storage device. | 10-30-2008 |
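The mapping scheme in the abstract above — deriving a content address from the object's contents while also remembering the sender's local identifier — can be sketched briefly. This is an illustrative Python model, not the application's implementation; the class and method names are hypothetical, and the use of SHA-256 as the content-address function is an assumption:

```python
import hashlib

class ContentAddressedStore:
    """Toy sketch of a content-addressed store that also maintains a
    local-identifier -> content-address mapping (all names hypothetical)."""

    def __init__(self):
        self._blocks = {}    # content address -> data object
        self._mapping = {}   # sender's local id -> content address

    def store(self, local_id: str, data: bytes) -> str:
        # The content address is derived from the contents of the object.
        ca = hashlib.sha256(data).hexdigest()
        self._blocks[ca] = data
        # Maintain the mapping from the sender's local identifier.
        self._mapping[local_id] = ca
        return ca

    def fetch_by_local_id(self, local_id: str) -> bytes:
        return self._blocks[self._mapping[local_id]]
```

Identical contents always hash to the same content address, so two sends of the same object resolve to one stored block while each sender keeps its own local identifier.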
20080276039 | Method and System for Emulating Content-Addressable Memory Primitives - A method and system for emulating content-addressable memory (CAM) primitives (e.g., a read operation) is disclosed. According to one embodiment, a method is provided for emulating a read operation on a plurality of CAM elements utilizing a read input including match input data and a CAM element selection index. In the described method, match reference data is distributed among a plurality of random-access memory (RAM) elements by storing match reference data corresponding to each of the plurality of CAM elements within a first RAM element of the plurality. Thereafter, a first record is identified within the first RAM element utilizing a first portion of the match input data and the CAM element selection index. A read operation result is then generated utilizing the first record. | 11-06-2008 |
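The emulation idea above — answering a CAM "read" (match) with ordinary random-access storage selected by an element index — can be modeled as follows. A minimal sketch under assumed structure; the real scheme partitions match reference data across RAM elements in a hardware-specific way not captured here:

```python
class EmulatedCAM:
    """Sketch of emulating CAM match operations with RAM tables: one table
    per emulated CAM element, each mapping entry index -> reference word."""

    def __init__(self, num_elements: int):
        self.ram = [dict() for _ in range(num_elements)]

    def write(self, element: int, index: int, reference_word: int):
        self.ram[element][index] = reference_word

    def read(self, element: int, match_input: int):
        # A real CAM compares all entries in parallel; the emulation walks
        # the RAM records selected by the CAM element selection index.
        for index, ref in self.ram[element].items():
            if ref == match_input:
                return index  # address of the matching entry
        return None           # no match in this element
```

The element argument plays the role of the CAM element selection index in the read input; the returned index is the read operation result.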
20080288720 | MULTI-WAFER 3D CAM CELL - A multi-wafer CAM cell in which the negative effects of increased travel distance have been substantially reduced is provided. The multi-wafer CAM cell is achieved in the present invention by utilizing three-dimensional integration, in which multiple active circuit layers are vertically stacked and vertically aligned interconnects are employed to connect a device in one of the stacked layers to another device in another stacked layer. By vertically stacking multiple active circuit layers with vertically aligned interconnects, each compare port of the inventive CAM cell can be implemented on a separate layer above or below the primary data storage cell. This allows the multi-wafer CAM structure to be implemented within the same area footprint as a standard Random Access Memory (RAM) cell, minimizing data access and match compare delays. | 11-20-2008 |
20080288721 | TRANSPOSING OF BITS IN INPUT DATA TO FORM A COMPARAND WITHIN A CONTENT ADDRESSABLE MEMORY - An apparatus and method of transposing one or more bits in input data relative to other bits of the input data to form a comparand for searching in a content addressable memory. The comparand may have one or more bits rearranged from their order appearing in the input data such that one or more bits from a first segment of the input data are replaced with, or substituted by, one or more bits from a second segment of the input data. | 11-20-2008 |
20080301362 | Content addressable memory address resolver - Systems, devices, and methods, including executable instructions are provided for resolving content addressable memory (CAM) match address priority. One method includes retaining a first match address as the best match address. Subsequent match addresses are compared to the retained best match address, each match address being associated with a compare cycle during which a selected columnar portion of each CAM entry is compared to a corresponding portion of a search term. The best match address is updated as a result of the comparison. | 12-04-2008 |
20080320216 | Translation Lookaside Buffer and Related Method and Program Product Utilized For Virtual Addresses - A program product, a translation lookaside buffer and a related method for operating the TLB is provided. The method comprises the steps of: a) when adding an entry for a virtual address to said TLB testing whether the attribute data of said virtual address is already stored in said CAM and if the attribute data is not stored already in said CAM, generating tag data for said virtual address such that said tag data is different from the tag data generated for the other virtual addresses currently stored in said RAM and associated to the new entry in said CAM for the attribute data, adding the generated tag data to said RAM and to the associated entry in said CAM, and setting a validity flag in said CAM for said associated entry; else if the attribute data is stored already in said CAM, adding the stored attribute data to the entry in said RAM for said virtual address; and when performing a TLB lookup operation: reading the validity flag and the tag data from the entry in said CAM, which is associated to the entry in said RAM for said virtual address, and simultaneously reading the absolute address and the tag data from the entry in said RAM for said virtual address, and generating a TLB hit only if the tag data read from said CAM is valid and matches the tag data read from said RAM. | 12-25-2008 |
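The tag scheme described above can be sketched compactly: the CAM holds one entry per distinct attribute data, carrying a generated tag and a validity flag; each RAM entry stores the absolute address plus the tag, and a lookup hits only if the CAM tag is valid and equals the RAM tag. An illustrative Python model, with all names hypothetical:

```python
class TaggedTLB:
    """Sketch of the tagged TLB: shared attribute data lives once in the
    'CAM'; per-virtual-address entries in the 'RAM' carry a matching tag."""

    def __init__(self):
        self.cam = {}        # attribute data -> [tag, validity flag]
        self.ram = {}        # virtual address -> (absolute address, tag, attr)
        self._next_tag = 0

    def add(self, vaddr, attr, abs_addr):
        if attr not in self.cam:
            # Attribute data not yet stored: generate a fresh, unique tag.
            self.cam[attr] = [self._next_tag, True]
            self._next_tag += 1
        tag = self.cam[attr][0]
        self.ram[vaddr] = (abs_addr, tag, attr)

    def lookup(self, vaddr):
        # Read RAM and CAM entries "simultaneously"; hit only on a valid,
        # matching tag pair.
        abs_addr, tag, attr = self.ram[vaddr]
        cam_tag, valid = self.cam[attr]
        return abs_addr if (valid and cam_tag == tag) else None
```

Clearing one validity flag in the CAM invalidates every RAM entry that shares that attribute data, which is the point of factoring the attribute data out.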
20090019220 | Method of Filtering High Data Rate Traffic - A method of filtering high data rate traffic | 01-15-2009 |
20090043956 | Mapping an input data value to a resultant data value - A data processing apparatus operable to map an input data value | 02-12-2009 |
20090043957 | Method and Apparatus for Updating Data in ROM Using a CAM - A method for providing field updates through the use of a memory emulation circuit with a content addressable memory (CAM) as the intelligent portion of the emulation circuit's arbiter. | 02-12-2009 |
20090063762 | Content-Addressable Memories and State Machines for Performing Three-Byte Matches and for Providing Error Protection - A method and system for detecting matching strings in a string of characters utilizing content addressable memory is disclosed. | 03-05-2009 |
20090070525 | SEMICONDUCTOR MEMORY DEVICE - A CAM (Content Addressable Memory) cell includes first and second data storage portions storing data, horizontal port write gates for storing data applied through a match line pair in the data storage portions in a data write through a horizontal port, and search/read gates for driving the match lines of the match line pair in accordance with the data stored in the data storage portions in a search operation and in a data read through the horizontal port. The match lines are used as a horizontal bit line pair, that is, as signal lines for accessing the horizontal port. As the first and second data storage portions are used, it becomes possible to store ternary data, and accordingly, a write mask function of inhibiting a data write at a destination of data transfer is realized. Further, as the CAM cell is used, an arithmetic/logic operation following a search process can be executed selectively, and high speed data writing/reading becomes possible. | 03-12-2009 |
20090077308 | RECONFIGURABLE CONTENT-ADDRESSABLE MEMORY - A system for determining memory addresses including a first content-addressable memory (CAM) configured to generate a first matchvector based on a first key; a first inverse-mask-reverse (IMR) module operatively connected to the first CAM, where the first IMR module is configured to generate a first auxiliary matchvector based on the first matchvector; and a first priority encoder (PE) operatively connected to the first IMR module, where the first PE is configured to output a first encoded memory address based on the first auxiliary matchvector, where the first CAM, the first IMR module, and the first PE are associated with a first reconfigurable content-addressable memory (RCAM). | 03-19-2009 |
20090100219 | METHOD AND APPARATUS FOR EFFICIENT CAM LOOKUP FOR INTERNET PROTOCOL ADDRESSES - A method and apparatus adapted to perform content addressable memory (CAM) lookup by performing a lookup in parallel using multiple classification rules in the CAM with the same key, wherein the CAM lookup is used to resolve IPv4 and IPv6 addresses. | 04-16-2009 |
20090113122 | CONTENT ADDRESSABLE MEMORY DEVICE HAVING MATCH LINE EQUALIZER CIRCUIT - In a content addressable memory device, before search operations in two TCAM cells connected to first and second match lines, respectively, a memory controller connects the first match line to a power source and the second match line to ground, and then connects the first and second match lines to each other so that the electric potentials of the two match lines equalize. | 04-30-2009 |
20090125674 | Method for polymorphic and systemic structuring of associative memory via a third-party manager - The present invention relates to a method for polymorphic and systemic structuring of associative memory via a third-party manager that allows a human or electronic operator to manage various families of associative memory for various applications. | 05-14-2009 |
20090150603 | LOW POWER TERNARY CONTENT-ADDRESSABLE MEMORY (TCAMS) FOR VERY LARGE FORWARDING TABLES - Ternary content-addressable memories (TCAMs) may be used to obtain a simple and very fast implementation of a router's forwarding engine. The applicability of TCAMs is, however, limited by their size and high power requirement. The present invention provides an improved method and associated algorithms to reduce the power needed to search a forwarding table using a TCAM. Additionally, the present invention teaches how to couple TCAMs and high bandwidth SRAMs so as to overcome both the power and size limitations of a pure TCAM forwarding engine. By using one of the novel TCAM-SRAM coupling schemes (M-12Wb), TCAM memory is reduced by a factor of about 5 on IPv4 data sets and by a factor of about 2.5 on IPv6 data sets; TCAM power requirement is reduced by a factor of about 10 on IPv4 data sets and by a factor of about 6 on IPv6 data sets. | 06-11-2009 |
20090150604 | Semiconductor Device - Range-specified IP addresses are stored efficiently to reduce the number of necessary entries, thereby improving the effective memory capacity of the TCAM. The representative means of the present invention are that: the storage information (entry) and the input information (comparison information or search key) are encoded in a common block code such that any bit must be the logical value '1'; match lines are hierarchically structured, and memory cells are arranged at the intersecting points of a plurality of sub-match lines and a plurality of search lines; further, the sub-match lines are connected to main-match lines through sub-match detectors, respectively, and main-match detectors are arranged on the main-match lines. | 06-11-2009 |
20090182938 | CONTENT ADDRESSABLE MEMORY AUGMENTED MEMORY - Embodiments of the present disclosure provide methods, apparatuses, and systems including a memory device including content addressable memory configured to store an address associated with one or more memory cells while an access operation is performed on the one or more memory cells. Other embodiments may be described. | 07-16-2009 |
20090198881 | MEMORY SYSTEM - A memory system including: a memory device; an ECC system installed in the memory device so as to generate a warning signal in case there are uncorrectable errors; an address generating circuit for generating internal addresses in place of bad area addresses in accordance with the warning signal, the progression of the internal addresses being selected so as to avoid address collision with the address progression of the memory device, at least at the beginning; and a CAM for storing the internal addresses as substitutive area addresses, the CAM being referred to at access time of the memory device so as to generate the substitutive area addresses in place of the bad area addresses in accordance with the warning signal. | 08-06-2009 |
20090240875 | CONTENT ADDRESSABLE MEMORY WITH HIDDEN TABLE UPDATE, DESIGN STRUCTURE AND METHOD - Disclosed are embodiments of memory circuit having two discrete memory devices with two discrete memory arrays that store essentially identical data banks. The first device is a conventional memory adapted to perform all maintenance operations that require read functions (i.e., all update and refresh operations). The second device is a DRAM-based CAM device adapted to perform parallel search and overwrite operations only. Performance of overwrite operations by the second device occurs in conjunction with performance of maintenance operations by the first device so that corresponding memory cells in the two devices store essentially identical data values. Since the data banks in the memory devices are essentially identical and since maintenance and parallel search operations are not performed by the same device, the parallel search operations can be performed without interruption. Also disclosed are embodiments of an associated design structure and method. | 09-24-2009 |
20090248973 | System and method for providing address decode and virtual function (VF) migration support in a peripheral component interconnect express (PCIE) multi-root input/output virtualization (IOV) environment - The present invention is a method for providing address decode and Virtual Function (VF) migration support in a Peripheral Component Interconnect Express (PCIE) multi-root Input/Output Virtualization (IOV) environment. The method may include receiving a Transaction Layer Packet (TLP) from the PCIE multi-root IOV environment. The method may further include comparing a destination address of the TLP with a plurality of base address values stored in a Content Addressable Memory (CAM), each base address value being associated with a Virtual Function (VF), each VF being associated with a Physical Function (PF). The method may further include when a base address value included in the plurality of base address values matches the destination address of the TLP, providing the matching base address value to the PCIE multi-root IOV environment by outputting from the CAM the matching base address value. The method may further include constructing a requestor ID for the VF associated with the matching base address value, the requestor ID being based upon the output matching base address value and a bus number for a PF which owns the CAM. | 10-01-2009 |
20090259810 | ACCESS CONTROL LIST RULE COMPRESSION USING METER RE-MAPPING - A system may include a content addressable memory (CAM) that is configured to include multiple services, receive a key, where the key includes source port information and IP information related to a packet received on one of multiple ports, and output a match index value in response to a search of the CAM using the key. The system may include a policy memory module that is configured to receive the match index value and to output meter controls and a meter address based on the match index value, a port meter map module that is configured to receive the source port information and to output a mask value and a per port meter value, and a remapping module that is configured to receive the meter address, receive the mask value and the per port meter value, and modify the meter address based on those values. | 10-15-2009 |
20090259811 | METHOD OF PERFORMING TABLE LOOKUP OPERATION WITH TABLE INDEX THAT EXCEEDS CAM KEY SIZE - In a packet switching device or system, such as a router, switch, combination router/switch, or component thereof, a method of and system for performing a table lookup operation using a lookup table index that exceeds a CAM key size is provided. Multiple CAM accesses are performed, each using a CAM key derived from a subset of lookup table index, resulting in one or more CAM entries. One or more matching table entries are derived from the one or more CAM entries resulting from the multiple CAM accesses. | 10-15-2009 |
20090271570 | Content-Addressable Memory Lookup Operations with Error Detection - Disclosed are, inter alia, methods, apparatus, computer-storage media, mechanisms, and means associated with content-addressable memory lookup operations with error detection. Lookup operations are performed on two identical sets of content-addressable memory entries to identify two lookup results. An error detection operation is performed on the highest-priority matching entry of each set of content-addressable memory entries. An overall lookup result is determined based on the lookup and error detection results. | 10-29-2009 |
20100023683 | Associative Matrix Observing Methods, Systems and Computer Program Products Using Bit Plane Representations of Selected Segments - Associative matrix compression methods, systems, computer program products and data structures compress an association matrix that contains counts that indicate associations among pairs of attributes. Selective bit plane representations of those selected segments of the association matrix that have at least one count is performed, to allow compression. More specifically, a set of segments is generated, a respective one of which defines a subset, greater than one, of the pairs of attributes. Selective identifications of those segments that have at least one count are stored. The at least one count that is associated with a respective identified segment is also stored as at least one bit plane representation. The at least one bit plane representation identifies a value of the at least one associated count for a bit position of the count that corresponds to the associated bit plane. | 01-28-2010 |
20100023684 | METHOD AND APPARATUS FOR REDUCING POWER CONSUMPTION IN A CONTENT ADDRESSABLE MEMORY - Power consumption in a Content Addressable Memory (CAM) circuit is reduced by use of a CAM circuit. According to one embodiment of the CAM circuit, the CAM circuit includes a plurality of match lines and match line restoration circuitry. The match line restoration circuitry is configured to prevent at least one of the match lines from being restored to a pre-evaluation state responsive to corresponding enable information. | 01-28-2010 |
20100037016 | Method and system for processing access control lists using an exclusive-or sum-of-products evaluator - A method includes receiving input data comprising a plurality of bits and processing an access control list into an ESOP expression comprising a plurality of product terms. The method also includes storing a plurality of bits associated with the plurality of product terms in a TCAM comprising a plurality of rows and comparing the plurality of bits associated with the input data to the plurality of bits associated with the product terms stored in each row of the plurality of rows, such that each row of the TCAM outputs a plurality of signals, such that each of the plurality of signals indicate a match or no match for each bit stored in the selected row. The method includes receiving the plurality of signals from the plurality of rows by an ESOP evaluator and outputting an address associated with a selected row from the plurality of rows of the TCAM. | 02-11-2010 |
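The evaluation step above — each TCAM row holds one product term, and the ESOP (exclusive-or sum-of-products) evaluator XORs the per-row match signals — can be modeled directly. A minimal sketch, with `'x'` marking a ternary don't-care bit; the encoding is illustrative:

```python
def ternary_match(value_bits: str, pattern: str) -> bool:
    """One TCAM row: match when every non-don't-care bit agrees."""
    return all(p == 'x' or p == b for p, b in zip(pattern, value_bits))

def esop_evaluate(value_bits: str, product_terms) -> bool:
    """ESOP evaluator: XOR together the match signal of every row."""
    result = False
    for term in product_terms:
        result ^= ternary_match(value_bits, term)
    return result
```

Unlike an ordinary priority-encoded TCAM result (logical OR of matches), the XOR combination lets overlapping terms cancel, which is what allows an ACL to be expressed in fewer rows.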
20100042780 | MULTIPLE MODE CONTENT-ADDRESSABLE MEMORY - According to embodiments of the invention a multi-mode memory device is provided. The memory device includes at least one content-addressable memory (CAM). The memory device further includes a first match-in bus for receiving input into a first CAM of the at least one CAM, wherein the status of the match-in bus determines an operating mode among a plurality of operating modes of the first CAM, and a match-out bus for enabling the first CAM to be coupled to another CAM module, the match-out bus comprising match lines of a memory portion of the first CAM, wherein if the match-in bus is disabled, the first CAM is in a first mode, and if the match-in bus is enabled, the first CAM is in a second mode. | 02-18-2010 |
20100049912 | DATA CACHE WAY PREDICTION - A microprocessor includes one or more N-way caches and a way prediction logic that selectively enables and disables the cache ways so as to reduce the power consumption. The way prediction logic receives an address and predicts in which one of the cache ways the data associated with the address is likely to be stored. The way prediction logic causes an enabling signal to be supplied only to the way predicted to contain the requested data. The remaining (N−1) of the cache ways do not receive the enabling signal. The power consumed by the cache is thus significantly reduced. | 02-25-2010 |
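The prediction scheme above can be sketched behaviorally: only the predicted way is probed (enabled) first, and the remaining N−1 ways are touched only on a mispredict. An illustrative Python model under an assumed last-used-way prediction policy; the patent does not commit to that policy, and all names are hypothetical:

```python
class WayPredictedCache:
    """Sketch of an N-way cache with way prediction; the second value
    returned by lookup() models how many ways had to be enabled."""

    def __init__(self, num_sets: int, num_ways: int):
        self.lines = [[None] * num_sets for _ in range(num_ways)]  # (tag, data)
        self.predicted = [0] * num_sets   # predicted way per set
        self.num_ways = num_ways

    def fill(self, way, set_index, tag, data):
        self.lines[way][set_index] = (tag, data)

    def lookup(self, set_index, tag):
        w = self.predicted[set_index]
        line = self.lines[w][set_index]
        if line is not None and line[0] == tag:
            return line[1], 1                      # hit in the predicted way
        for other in range(self.num_ways):         # mispredict: probe the rest
            if other == w:
                continue
            line = self.lines[other][set_index]
            if line is not None and line[0] == tag:
                self.predicted[set_index] = other  # retrain the predictor
                return line[1], self.num_ways
        return None, self.num_ways                 # miss: all ways probed
```

A correct prediction enables 1 of N ways instead of N, which is where the power saving comes from.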
20100070698 | CONTENT ADDRESSABLE STORAGE SYSTEMS AND METHODS EMPLOYING SEARCHABLE BLOCKS - In accordance with exemplary embodiments of the present invention, a content addressable data structure system may include directed acyclic graphs (DAGs) of data content that are addressed using both a user-defined search key and content of data blocks. Internal keys of retention roots of the DAGs may be derived from the user-defined search key while the remaining blocks may be content addressed. As opposed to using a content address, the user may provide the search key when retrieving and deleting DAGs retaining the data content. In addition, the internal keys may be implemented using internal content addressable storage operations, such as applying a hash function and employing a distributed hash table. | 03-18-2010 |
20100077141 | Adaptive Compression and Decompression - Adaptive compression and decompression techniques are described. In at least some embodiments, compression techniques include adaptively compressing a plurality of input words into a plurality of compression codes and outputting the compression codes upon encountering an end-of-file signal. In at least some embodiments, the compression codes are fewer in number than the number of unique bit patterns requiring unique compression codes under LZW (Lempel Ziv & Welch) compression. In at least some other embodiments, decompression techniques include adaptively decompressing a plurality of compressed code words into a plurality of decompressed words and outputting the decompressed words upon encountering an end-of-file signal. | 03-25-2010 |
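For context on the comparison the abstract draws, the LZW baseline assigns a new code to each previously unseen bit pattern and emits one code per longest already-known prefix. A minimal classic LZW encoder follows; it illustrates the baseline only, not the adaptive scheme claimed above:

```python
def lzw_compress(data: str):
    """Minimal classic LZW encoder over byte-valued characters: the table
    starts with all 256 single characters, and each miss adds one new code."""
    table = {chr(i): i for i in range(256)}
    next_code = 256
    w, out = '', []
    for ch in data:
        if w + ch in table:
            w += ch                    # extend the current known prefix
        else:
            out.append(table[w])       # emit code for the longest known prefix
            table[w + ch] = next_code  # learn the new pattern
            next_code += 1
            w = ch
    if w:
        out.append(table[w])
    return out
```

Every distinct pattern consumes a code here; the adaptive scheme's stated goal is to get by with fewer codes than this table growth implies.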
20100082895 | MULTI-LEVEL CONTENT ADDRESSABLE MEMORY - A multi-level content addressable memory (CAM) architecture compresses out much of the redundancy encountered in the search space of a single CAM, particularly for flow-based lookups in a network. Destination and source address may be associated with internal equivalence classes independently in one level of the multi-level CAM architecture, while flow-specific properties linking arbitrary classes of the destination and source addresses may be applied in a later level of the multi-level CAM. | 04-01-2010 |
20100100671 | DOUBLE DENSITY CONTENT ADDRESSABLE MEMORY (CAM) LOOKUP SCHEME - The number of content addressable memory (CAM) lookups is reduced from two to one. Each side (left and right sides) of a CAM is programmed with network addresses, such as IP addresses, based on certain bits of the network addresses. These bits of the network addresses (which represent packet routes) are examined and used to determine whether the particular network address is to be placed on the left or right sides of the CAM. The grouping of certain network addresses either on the left or right sides of the CAM can be performed by examining an individual bit of each network address, by performing an exclusive OR (XOR) operation on a plurality of bits of each network address, and/or by searching for bit patterns of the network address in a decision table. Network addresses that cannot be readily assigned to a particular side of the CAM using these grouping techniques are programmed into both sides of the CAM. During packet routing, techniques similar to the grouping techniques that populated the CAM are used to determine which of the two sides of the CAM is to be searched. | 04-22-2010 |
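The XOR grouping technique described above — examining selected bits of each network address to choose the left or right half of the CAM, then applying the same rule at lookup time so only one half is searched — can be sketched as follows. The bit positions chosen are illustrative; in practice they would be picked to balance the two halves:

```python
def cam_side(address: int, bit_positions) -> str:
    """Choose the CAM half by XORing selected bits of the network address."""
    parity = 0
    for b in bit_positions:
        parity ^= (address >> b) & 1
    return "left" if parity == 0 else "right"

def program(cam: dict, address: int, result, bits=(1, 3)):
    """Program a route into the half selected by the grouping rule."""
    cam[cam_side(address, bits)][address] = result

def lookup(cam: dict, address: int, bits=(1, 3)):
    """Single lookup: only the side selected by the same XOR rule is searched."""
    return cam[cam_side(address, bits)].get(address)
```

Because programming and lookup apply the identical rule, one lookup suffices; addresses the rule cannot classify would, per the abstract, be programmed into both halves.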
20100100672 | RELAY APPARATUS AND DATA CONTROL METHOD - When a data word is designated through a network search engine, a FIFO unit, and the like, a relay apparatus according to the invention searches for the associative memory address corresponding to the data word. Even when the associative memory address is internally converted to a contents memory address, the relay apparatus stores the contents memory address in correspondence with the search result for that address, and outputs the associative memory address together with the search result. | 04-22-2010 |
20100100673 | Hierarchical immutable content-addressable memory processor - Improved memory management is provided according to a Hierarchical Immutable Content Addressable Memory Processor (HICAMP) architecture. In HICAMP, physical memory is organized as two or more physical memory blocks, each physical memory block having a fixed storage capacity. An indication of which of the physical memory blocks is active at any point in time is provided. A memory controller provides a non-duplicating write capability, where data to be written to the physical memory is compared to contents of all active physical memory blocks at the time of writing, to ensure that no two active memory blocks have the same data after completion of the non-duplicating write. | 04-22-2010 |
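The non-duplicating write above — comparing data to be written against the contents of all active blocks so no two active blocks ever hold the same data — can be sketched with a content index standing in for the hardware comparison. An illustrative model with hypothetical names:

```python
class DedupBlockStore:
    """Sketch of a non-duplicating write over fixed-capacity blocks: a
    content index plays the role of comparing against all active blocks."""

    def __init__(self):
        self.blocks = []       # block id -> contents (all blocks active here)
        self.by_content = {}   # contents -> existing block id

    def write(self, data: bytes) -> int:
        if data in self.by_content:
            # An active block with identical data already exists: reuse it.
            return self.by_content[data]
        self.blocks.append(data)           # allocate a new block
        block_id = len(self.blocks) - 1
        self.by_content[data] = block_id
        return block_id
```

After any write, no two active blocks hold the same data, which is the invariant the architecture maintains.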
20100115196 | SHARED STORAGE FOR MULTI-THREADED ORDERED QUEUES IN AN INTERCONNECT - In one embodiment, payload of multiple threads between intellectual property (IP) cores of an integrated circuit are transferred, by buffering the payload using a number of order queues. Each of the queues is guaranteed access to a minimum number of buffer entries that make up the queue. Each queue is assigned to a respective thread. A number of buffer entries that make up any queue is increased, above the minimum, by borrowing from a shared pool of unused buffer entries on a first-come, first-served basis. In another embodiment, an interconnect implements a content addressable memory (CAM) structure that is shared storage for a number of logical, multi-thread ordered queues that buffer requests and/or responses that are being routed between data processing elements coupled to the interconnect. Other embodiments are also described and claimed. | 05-06-2010 |
20100122024 | METHODS AND SYSTEMS FOR DIRECTLY CONNECTING DEVICES TO MICROCONTROLLERS - Disclosed are methods and devices, among which is a device including a self-selecting bus decoder. In some embodiments, the device may be coupled to a microcontroller, and the self-selecting bus decoder may determine a response of the peripheral device to requests from the microcontroller. In another embodiment, the device may include a bus translator and a self-selecting bus decoder. The bus translator may be configured to translate between signals from a selected one of a plurality of different types of buses. A microcontroller may be coupled to a selected one of the plurality of different types of buses of the bus translator. | 05-13-2010 |
20100122025 | LOW COST IMPLEMENTATION FOR SMALL CONTENT-ADDRESSABLE MEMORIES - A content-addressable memory (CAM) for managing the reallocation of erasable objects within a non-volatile memory is conceptually separated into two tables: a first table provides verification of whether or not a logical address has been reallocated and, if so, a second table provides the physical address of the reallocated erasable object. | 05-13-2010 |
20100131703 | REDUCING CONTENT ADDRESSABLE MEMORY (CAM) POWER CONSUMPTION COUNTERS - A method may include counting the number of times each of a plurality of entries in a content addressable memory (CAM) matches one or more searches; grouping entries in the CAM into a first subset and a second subset based on the number of times each of the plurality of entries in the CAM matches one or more searches; and searching the first subset for a matching entry and, if no matching entry is found, searching the second subset for the matching entry. | 05-27-2010 |
20100138599 | SYSTEM AND METHOD FOR MATCHING PATTERNS - A pattern matching system detects strings contained in a target pattern to be detected within a data stream input by 1-byte data, and detects a regular expression representing the target pattern among regular expressions constructed by the detected strings. | 06-03-2010 |
20100138600 | REDUCING CONTENT ADDRESSABLE MEMORY (CAM) POWER CONSUMPTION COUNTERS - A method may include counting the number of times each of a plurality of entries in a content addressable memory (CAM) matches one or more searches; grouping entries in the CAM into a first subset and a second subset based on the number of times each of the plurality of entries in the CAM matches one or more searches; and searching the first subset for a matching entry and, if no matching entry is found, searching the second subset for the matching entry. | 06-03-2010 |
20100146202 | MICROPROCESSOR SYSTEMS - A distribution medium ( | 06-10-2010 |
20100161894 | DOUBLE DENSITY CONTENT ADDRESSABLE MEMORY (CAM) LOOKUP SCHEME - The number of content addressable memory (CAM) lookups is reduced from two to one. Each side (left and right sides) of a CAM is programmed with network addresses, such as IP addresses, based on certain bits of the network addresses. These bits of the network addresses (which represent packet routes) are examined and used to determine whether the particular network address is to be placed on the left or right sides of the CAM. The grouping of certain network addresses either on the left or right sides of the CAM can be performed by examining an individual bit of each network address, by performing an exclusive OR (XOR) operation on a plurality of bits of each network address, and/or by searching for bit patterns of the network address in a decision table. Network addresses that cannot be readily assigned to a particular side of the CAM using these grouping techniques are programmed into both sides of the CAM. During packet routing, techniques similar to the grouping techniques that populated the CAM are used to determine which of the two sides of the CAM is to be searched. | 06-24-2010 |
20100169563 | Content Addressable Memory and Method - A content addressable memory (CAM) includes ports through which keys having at least a 16-bit function are received or transmitted. The CAM includes a processing unit in communication with the ports. The CAM includes a storage portion, in communication with the processing unit, in which the keys are stored. The CAM includes a programmable key update mechanism in communication with the processing unit which updates the keys without the keys leaving the CAM. A method for using a content addressable memory (CAM) includes the step of receiving keys having at least a 16-bit function at a port. There is the step of storing the keys in a storage portion in communication with a processing unit. There is the step of updating the keys, without the keys leaving the CAM, with a programmable key update mechanism in communication with the processing unit. | 07-01-2010 |
20100174859 | HIGH CAPACITY CONTENT ADDRESSABLE MEMORY - A set of data is stored in an input space of a kernel content addressable memory. The input space comprising the set of data is transformed into a feature space of higher dimension, where the set of data becomes a set of transformed data within the feature space. An inner product between elements of the set of transformed data in the feature space is calculated using a kernel function. | 07-08-2010 |
20100205364 | Ternary content-addressable memory - A low-heat, large-scale ternary content-addressable memory (TCAM) efficiently compares one or more input records with a set of entries. Compression may also be used. X bits are eliminated from entries and, in some embodiments, a subset of non-X bits is also eliminated, minimizing the entries that must be searched. Entry bit sets can be converted into sets of fields. A useful set of fields is a triplet comprising a start field, a length field, and a data field. Hashing determines the RAM line of the TCAM in which entries are stored and which RAM line is to be compared with a given input. Searches are only needed on entries in RAM lines corresponding to inputs of interest. Priority values decide the winner if more than one TCAM entry in the appropriate RAM line matches the input. Bin packing can be used to optimally allocate TCAM entries across different possible RAM lines. | 08-12-2010 |
20100228911 | ASSOCIATED MEMORY - The associative memory comprises a simplified functional processing unit (SFPU), implemented by an LUT logic network, that implements simplified CAM function g, where g is the function derived from CAM function ƒ by replacing the value indicating "invalid" with a don't-care, and an auxiliary memory that stores the inverse function ƒ | 09-09-2010 |
20100228912 | Non-Volatile Memory With Hybrid Index Tag Array - Various embodiments of the present invention are generally directed to an apparatus and associated method for a non-volatile memory with a hybrid index tag array. In accordance with some embodiments, a memory device has a word memory array formed of non-volatile resistive sense memory (RSM) cells, a first index array formed of volatile content addressable memory (CAM) cells, and a second index array formed of non-volatile RSM cells. The memory device is configured to output word data from the word memory array during a data retrieval operation when input request data matches tag data stored in the first index array, and to copy tag data stored in the second index array to the first index array during a device reinitialization operation. | 09-09-2010 |
20100250842 | HYBRID REGION CAM FOR REGION PREFETCHER AND METHODS THEREOF - A first address is received and is used to determine a first address range. The first address range includes a second address range and a third address range. If the first address is in the second address range, a fourth address range is determined. The fourth address range is different from the first address range. Information is retrieved from a memory in response to determining that a second address is in the first address range or the fourth address range. If the first address is in the third address range, a fifth address range is determined. The fifth address range is different from the first address range. Other information is retrieved from the memory in response to determining the second address is in the first address range or the fifth address range. | 09-30-2010 |
20100250843 | HIERARCHICAL MEMORY ARCHITECTURE WITH A PHASE-CHANGE MEMORY (PCM) CONTENT ADDRESSABLE MEMORY (CAM) - A Phase-Change Memory (PCM) Content Addressable Memory (CAM) is utilized to store addresses of defective rows or columns of a memory array or memories attached to a backside bus of a concentrator device. | 09-30-2010 |
20100274961 | PHYSICALLY-INDEXED LOGICAL MAP TABLE - Techniques and systems are described herein to maintain a mapping of logical to physical registers—for example, in the context of a multithreaded processor that supports renaming. A mapping unit may have a plurality of entries, each of which stores rename information for a dedicated one of a set of physical registers available to the processor for renaming. This physically-indexed mapping unit may support multiple threads, and may comprise a content-addressable memory (CAM) in certain embodiments. The mapping unit may support various combinations of read operations (to determine if a logical register is mapped to a physical register), write operations (to create or modify one or more entries containing mapping information), thread flush operations, and commit operations. More than one of such operations may be performed substantially simultaneously in certain embodiments. | 10-28-2010 |
20100287337 | NONVOLATILE MEMORY DEVICE AND METHOD OF OPERATING THE SAME - A nonvolatile memory device has a memory cell array including a memory cell group for storing option information, and a controller configured to wait for a preset period of time after a command for loading the option information has been received before performing an operation of loading the option information. | 11-11-2010 |
20100293327 | TCAM Management Approach That Minimizes Movements - Methods for efficiently managing a ternary content-addressable memory (TCAM) by minimizing movements of TCAM entries include determining a first node and a second node in the TCAM, determining if there is a free TCAM entry between the first node and the second node, and storing the new entry in the free TCAM entry. Upon determining that a free TCAM entry does not exist between the first node and the second node, further determining a chain of nodes and then determining if there is a free TCAM entry in the chain of nodes. Upon determining that there is a free TCAM entry within the chain of nodes, moving the TCAM entries identified as the nodes in the chain of nodes to generate a free node nearest to the new entry and inserting the new entry in the free node. Moving the TCAM entries identified as the nodes in the chain of nodes preserves the order of the nodes. | 11-18-2010 |
20110035545 | FULLY-BUFFERED DUAL IN-LINE MEMORY MODULE WITH FAULT CORRECTION - A memory system comprises first memory that includes memory cells. Content addressable memory (CAM) includes CAM memory cells, stores addresses of selected ones of the memory cells, stores data having the addresses in corresponding ones of the CAM memory cells, and retrieves data having the addresses from corresponding ones of the CAM memory cells. An adaptive refresh module stores data from selected ones of the memory cells in the CAM memory cells to increase or maintain a time period between refreshes of the memory cells. | 02-10-2011 |
20110047327 | SEARCHING A CONTENT ADDRESSABLE MEMORY - A method includes searching a content addressable memory based on a comparand. The comparand includes a collection of bits. A modified comparand is generated by modifying the comparand. The modified comparand is based at least in part on a comparand overlay data value. The content addressable memory is also searched with the modified comparand. | 02-24-2011 |
20110055470 | MEASURING ATTRIBUTES OF CLIENT-SERVER APPLICATIONS - In an embodiment, a packet data switching system comprises content-addressable memory configured to redirect, to a measurement computer, a request to access a server application program hosted at a server computer in response to receiving the request from a client computer; the measurement computer comprises request rewriting logic configured to receive the request via redirection based on the CAM, to record a first time value representing a time of receiving the request, to forward the request to the server application, to receive a response from the server computer to the request, to rewrite a payload of the response by embedding a browser-executable measurement reporting script into the payload, and to forward the rewritten response to the client; performance recording logic configured to receive a second time value from the client based on the client computer executing the measurement reporting script, and to store a performance record with the time values. | 03-03-2011 |
20110060876 | Exact Match Lookup Scheme - An exact match lookup system includes a hash function that generates a hash value in response to an input hash key. The hash value is used to retrieve a hash bucket index value from a hash bucket index table. The hash bucket index value is used to retrieve a plurality of hash keys from a plurality of hash bucket tables, in parallel. The retrieved hash keys are compared with the input hash key to identify a match. Hit logic generates an output index by concatenating the hash bucket index value with an address associated with the hash bucket table that provides the matching hash key. An exact match result is provided in response to the output index. A content addressable memory (CAM) may store hash keys that do not fit in the hash bucket tables. | 03-10-2011 |
20110072206 | Distributed content storage and retrieval - Distributed content storage and retrieval is disclosed. A set of features associated with a content object is determined. A storage location is selected to perform an operation with respect to the content object, from a plurality of storage locations comprising a distributed content storage system, based at least in part on probability data indicating a degree to which the selected storage location is associated statistically with a feature comprising the set of features determined to be associated with the content object. | 03-24-2011 |
20110113190 | SECONDARY STORAGE TO REDUCE COUNTER MEMORY IN FLOW TABLES - In one embodiment, a CAM overflow structure holds flow indices in a CAM and each CAM entry is associated with an overflow count value (OCV) entry holding an OCV. If a counter in a primary flow-counter bank (PFCB) overflows when updated, the CAM is searched and, if the index of the counter that overflowed is found, the OCV in the associated OCV entry is incremented. The counter values in the PFCB are scanned according to specified criteria and transferred to a secondary flow-counter bank (SFCB) held in non-custom system RAM. When a counter value is transferred to the SFCB, the corresponding OCV is appended to the counter value. | 05-12-2011 |
20110113191 | PROGRAMMABLE INTELLIGENT SEARCH MEMORY - Memory architecture provides capabilities for high performance content search. The architecture creates an innovative memory that can be programmed with content search rules which are used by the memory to evaluate presented content for matching with the programmed rules. When the content being searched matches any of the rules programmed in the Programmable Intelligent Search Memory (PRISM), action(s) associated with the matched rule(s) are taken. Content search rules comprise regular expressions which are converted to finite state automata and then programmed in PRISM for evaluating content with the search rules. | 05-12-2011 |
20110161580 | PROVIDING DYNAMIC DATABASES FOR A TCAM - A network device allocates a particular number of memory blocks in a ternary content-addressable memory (TCAM) of the network device to each database of multiple databases, and creates a list of additional memory blocks in an external TCAM of the network device. The network device also receives, by the external TCAM, a request for an additional memory block to provide one or more rules from one of the multiple databases, and allocates, by the external TCAM and to the requesting database, an additional memory block from the list of additional memory blocks. | 06-30-2011 |
20110173386 | TERNARY CONTENT ADDRESSABLE MEMORY EMBEDDED IN A CENTRAL PROCESSING UNIT - An arithmetic logic unit ( | 07-14-2011 |
20110197021 | Write-Through-Read (WTR) Comparator Circuits, Systems, and Methods Employing Write-Back Stage and Use of Same With A Multiple-Port File - Write-through-read (WTR) comparator circuits and related WTR processes and memory systems are disclosed. The WTR comparator circuits can be configured to perform WTR functions for a multiple port file having one or more read and write ports. One or more WTR comparators in the WTR comparator circuit are configured to compare a read index into a file with a write index corresponding to a write-back stage selected write port among a plurality of write ports that can write data to the entry in the file. The WTR comparators then generate a WTR comparator output indicating whether the write index matches the read index to control a WTR function. In this manner, the WTR comparator circuit can employ less WTR comparators than the number of read and write port combinations. Providing less WTR comparators can reduce power consumption, cost, and area required on a semiconductor die for the WTR comparator circuit. | 08-11-2011 |
20110219183 | SUB-AREA FCID ALLOCATION SCHEME - Certain embodiments of the present disclosure generally relate to allocating a sub-area of Fibre Channel addresses (FCIDs) to a device. A range of addresses may be assigned to the device using a mask address, where the most significant bits represent a mask and the least significant bits represent a sub-range of FCIDs available to be assigned to the device. Therefore, routing information may be stored efficiently in a Ternary Content Addressable Memory (TCAM) by storing a single entry in the TCAM for each sub-area of FCIDs allocated to a device, instead of storing an entry for each FCID. The single entry may indicate the mask address and the width of the mask. | 09-08-2011 |
20110238904 | ALIGNMENT OF INSTRUCTIONS AND REPLIES ACROSS MULTIPLE DEVICES IN A CASCADED SYSTEM, USING BUFFERS OF PROGRAMMABLE DEPTHS - Buffers of programmable depths are used in the instruction and reply paths of cascaded devices to account for possible differences in latencies between the devices. The buffers may be enabled or bypassed such that the alignment of instruction and result may be performed at the boundaries between separate groups of devices having different instruction latencies. | 09-29-2011 |
20110258375 | Method and Apparatus for In-Place Hold and Preservation Operation on Objects in Content Addressable Storage - A method and apparatus for performing a hold operation while keeping the data in place as the data is in a hold state. Such a method and apparatus substantially eliminates the need for a copy operation and thus provides cost and management savings. The method and apparatus define a hold delete operation along with hold life points in a CAS system. | 10-20-2011 |
20110276752 | POWER EFFICIENT AND RULE MOVEMENT OPTIMIZED TCAM MANAGEMENT - A network device allocates a number of blocks of memory in a ternary content-addressable memory (TCAM) of the network device to each database of multiple databases, and assigns unused blocks of memory of the TCAM to a free pool. The network device also detects execution of a run mechanism by the TCAM, and allocates, based on the execution of the run mechanism, one of the unused blocks of memory to a filter or rule of one of the multiple databases. | 11-10-2011 |
20110283061 | UPDATING CAM ARRAYS USING PREFIX LENGTH DISTRIBUTION PREDICTION - A method and apparatus for ordering a plurality (P) of entries having various prefix lengths for storage in a number (T) of available storage locations in a content addressable memory (CAM) array according to the prefix lengths is disclosed. Initially, a first number (N) of the entries are selected and used to generate a distribution graph of their prefix lengths. Then, for each unique prefix length, a corresponding subset of the T storage locations in the CAM array is allocated according to a predicted prefix length distribution indicated by the distribution graph. Then, all of the entries are stored in the corresponding allocated storage locations according to prefix length. | 11-17-2011 |
20110307655 | METHOD AND APPARATUS FOR UPDATING TABLE ENTRIES OF A TERNARY CONTENT ADDRESSABLE MEMORY - A method and an apparatus for updating table entries of a TCAM are disclosed. The method comprises: creating a virtual TCAM list, whose respective first TCAM table entries correspond one-to-one to respective second TCAM table entries stored in a hardware TCAM; determining, in idle resources of the hardware TCAM, a storage position of a second TCAM table entry to be updated corresponding to a first TCAM table entry to be updated, according to a pre-specified precedence relationship between the storage positions of the first TCAM table entry to be updated and other first TCAM table entries in the virtual TCAM list; and performing an updating operation on the second TCAM table entry to be updated based on the determined storage position. According to the present invention, the storage position of the second TCAM table entry to be updated is selected from the idle resources of the hardware TCAM as far as possible, overcoming the low update efficiency caused by rewriting many other second TCAM table entries when the second TCAM table entries in the hardware TCAM are updated. | 12-15-2011 |
20110307656 | EFFICIENT LOOKUP METHODS FOR TERNARY CONTENT ADDRESSABLE MEMORY AND ASSOCIATED DEVICES AND SYSTEMS - Lookup techniques are described, which can achieve improvements in energy efficiency, speed, and cost, of IP address lookup, for example, in devices and systems employing ternary content addressable memory (TCAM). The disclosed subject matter describes dividing a route table into several sub-tries with disjoint range boundaries. In addition, the disclosed subject matter describes storing sub-tries of a route table between a TCAM and a faster and less costly memory. The disclosed details enable various refinements and modifications according to system design and tradeoff considerations. | 12-15-2011 |
20110314215 | MULTI-PRIORITY ENCODER - A multi-priority encoder includes a plurality of interconnected, single-priority encoders arranged in descending priority order. The multi-priority encoder includes circuitry for blocking a match output by a lower level single-priority encoder if a higher level single-priority encoder outputs a match output. Match data is received from a content addressable memory, and the priority encoder includes address encoding circuitry for outputting the address locations of each highest priority match line flagged by the highest priority indicator. Each single-priority encoder includes a highest priority indicator which has a plurality of indicator segments, each indicator segment being associated with a match line input. | 12-22-2011 |
20110320703 | ASSOCIATING INPUT/OUTPUT DEVICE REQUESTS WITH MEMORY ASSOCIATED WITH A LOGICAL PARTITION - An address controller includes a bit selector that receives a first portion of a requester ID (RID) and selects a bit from a vector that identifies whether a requesting function is an SR-IOV device or a standard PCIe device. The controller also includes a selector coupled to the bit selector that forms an output comprised of either a second portion of the RID or a first portion of the address portion based on an input received from the selector, and an address control unit that receives the first portion of the RID and the output and determines the logical partition (LPAR) that owns the requesting function based thereon, the address control unit providing the corrected memory request to the memory. | 12-29-2011 |
20110320704 | CONTENT ADDRESSABLE MEMORY SYSTEM - A content addressable memory system, method and computer program product is described. The memory system comprises a location addressable store having data identified by location and multiple levels of content addressable stores each holding ternary content words. The content words are associated with references to data in the location addressable store. The content store levels might be implemented using different technologies that have different performance, capacity, and cost attributes. The memory system includes a content based cache for improved performance and a content addressable memory management unit for managing memory access operations and virtual memory addressing. | 12-29-2011 |
20110320705 | METHOD FOR TCAM LOOKUP IN MULTI-THREADED PACKET PROCESSORS - A method, apparatus and computer program product for performing TCAM lookups in multi-threaded packet processors is presented. A Ternary Content Addressable Memory (TCAM) key is constructed for a packet and a Packet Reference Number (PRN) is generated. The TCAM key and the packet are tagged with the PRN. The TCAM key and the PRN are sent to a TCAM and in parallel the packet and the PRN are sent to a packet processing thread. The PRN is used to read the TCAM result when it is ready. | 12-29-2011 |
20120030421 | MAINTAINING STATES FOR THE REQUEST QUEUE OF A HARDWARE ACCELERATOR - The invention discloses a method and system of maintaining states for the request queue of a hardware accelerator, wherein the request queue stores therein at least one Coprocessor Request Block (CRB) to be input into the hardware accelerator, the method comprising: receiving, in response to a determination that a CRB specified by the request queue is about to enter the hardware accelerator, the state pointer of the specified CRB; acquiring the physical storage locations of other CRBs in the request queue whose state pointers are the same as that of the specified CRB; controlling the input of the specified CRB and the state information required for processing the specified CRB into a hardware buffer; receiving the state information of the specified CRB after it has been processed in the hardware accelerator; and, if the above physical storage locations are not vacant, selecting the physical storage location closest to the specified CRB in the request queue and storing the received state information in that location of the state buffer. | 02-02-2012 |
20120036317 | STORAGE SYSTEM AND STORAGE ACCESS METHOD AND PROGRAM - A system has a data structure in which a value can be obtained from a key. In a write access, a first pair and a second pair are each stored in a volatile storage device. The first pair is saved to a nonvolatile storage device before a response is returned, and the second pair is saved to the nonvolatile storage device at any time while the second pair remains in the volatile storage device. In a read access in which a value is obtained from a key, it is determined that the data was not stored normally if the second pair is not found when, after the hash value of the value is obtained from the first pair, the second pair is read. | 02-09-2012 |
20120096219 | CONTENT ADDRESSABLE STORAGE WITH REDUCED LATENCY - A system and method for storing data in a content-addressable system is provided. The system includes a content-addressable storage system and a persistent cache. The persistent cache includes a temporary address generator that is configured to generate a temporary address which is associated with data to be stored in the persistent cache, and a non-content-addressable storage system configured to store and retrieve data in the persistent cache using the temporary address. The persistent cache further comprises an address translator configured to map a temporary address associated with the data in the non-content addressable storage system with a content address associated with the data in the content-addressable storage system. | 04-19-2012 |
20120096220 | BIT WEAVING TECHNIQUE FOR COMPRESSING PACKET CLASSIFIERS - An improved technique is provided for compressing a packet classifier for a computer network system. A set of packet classification rules is first partitioned into one or more partitions. For each partition, columns of bits in each of the ternary strings of a given partition are reordered, the ternary strings within each partition are consolidated into one or more replacement strings and then the columns of bits of the replacement strings are rearranged back to the starting order. The rearranged replacement strings from each of the partitions are appended together to form a compressed packet classifier which may be instantiated in a content-addressable memory device. | 04-19-2012 |
20120096221 | HIERARCHICAL IMMUTABLE CONTENT-ADDRESSABLE MEMORY PROCESSOR - Improved memory management is provided according to a Hierarchical Immutable Content Addressable Memory Processor (HICAMP) architecture. In HICAMP, physical memory is organized as two or more physical memory blocks, each physical memory block having a fixed storage capacity. An indication of which of the physical memory blocks is active at any point in time is provided. A memory controller provides a non-duplicating write capability, where data to be written to the physical memory is compared to contents of all active physical memory blocks at the time of writing, to ensure that no two active memory blocks have the same data after completion of the non-duplicating write. | 04-19-2012 |
20120110256 | LOW POWER CONTENT-ADDRESSABLE MEMORY AND METHOD - Content-Addressable Memory (CAM) arrays and related circuitry for integrated circuits and CAM array comparison methods are provided such that relatively low power is used in the operation of the CAM circuitry. A binary value pair is stored in a pair of CAM memory elements. A comparison signal is provided to comparator circuitry that uniquely represents the stored binary values. A match signal is input to the comparator circuitry that uniquely represents a binary value pair to be compared with the stored binary value pair. In one example, a transistor is operated to output a positive match result signal only on a condition that the comparison signal provided to the comparator circuitry and match signal input to the comparator circuitry represent the same binary value pair. In that example, no transistor of the comparator circuitry is operated when the comparison signal provided to the comparator circuitry and match signal input to the comparator circuitry represent different binary value pairs. | 05-03-2012 |
20120117319 | LOW POWER, HASH-CONTENT ADDRESSABLE MEMORY ARCHITECTURE - A method comprises inputting a comparand word to a plurality of hash circuits, each hash circuit being responsive to a different portion of the comparand word. The hash circuits output a hash signal which is used to enable or precharge portions of a CAM. The comparand word is also input to the CAM. The CAM compares the comparand word in the precharged portions of the CAM and outputs information responsive to the comparing step. When used to process Internet addresses, the information output may be port information or an index from which port information may be located. A circuit is also disclosed, as is a method of initializing the circuit. | 05-10-2012 |
20120124282 | SCALABLE BLOCK DATA STORAGE USING CONTENT ADDRESSING - A device for scalable block data storage and retrieval uses content addressing. Data storage devices store data blocks, and are connected over a network to computing modules. The modules comprise control modules and data modules and carry out content addressing for both storage and retrieval. The network defines separate control paths via the control modules and data paths via the data modules. | 05-17-2012 |
20120124283 | SEARCHING A CONTENT ADDRESSABLE MEMORY - A method includes searching a content addressable memory based on a comparand. The comparand includes a collection of bits. A modified comparand is generated by modifying the comparand. The modified comparand is based at least in part on a comparand overlay data value. The content addressable memory is also searched with the modified comparand. | 05-17-2012 |
20120210056 | CACHE MEMORY AND CONTROL METHOD THEREOF - A cache memory includes a CAM with an associativity of n (where n is a natural number) and an SRAM, and stores or reads out corresponding data when a tag address is specified by a CPU connected to the cache memory, the tag address being constituted by a first sub-tag address and a second sub-tag address. The cache memory classifies the data, according to the time at which a read request has been made, into at least a first generation which corresponds to a read request made at a recent time and a second generation which corresponds to a read request made at a time which is different from the recent time. The first sub-tag address is managed by the CAM. The second sub-tag address is managed by the SRAM. The cache memory allows a plurality of second sub-tag addresses to be associated with a same first sub-tag address. | 08-16-2012 |
20120215976 | CONTENT ADDRESSABLE MEMORY - The present invention is directed to reduce array area and power consumption in a content addressable memory. A comparator for performing a match determination and a size determination is provided commonly for plural entries each storing data to be retrieved. Each entry includes data storage cells for storing data and mask cells for storing mask bits. The number of mask cells is smaller than that of the data storage cells. Search data is transmitted to the comparator via a search data bus. One of the entries is selected according to a predetermined rule. The comparator decodes the mask bits, generates a mask instruction signal, and performs match comparison and size comparison between the search data and data to be retrieved which is stored in the selected entry. | 08-23-2012 |
20120233396 | APPARATUS, SYSTEM, AND METHOD FOR EFFICIENT MAPPING OF VIRTUAL AND PHYSICAL ADDRESSES - An apparatus, system, and method are disclosed for efficiently mapping virtual and physical addresses. A forward mapping module uses a forward map to identify physical addresses of data of a data segment from a virtual address. The data segment is identified in a storage request. The virtual addresses include discrete addresses within a virtual address space where the virtual addresses sparsely populate the virtual address space. A reverse mapping module uses a reverse map to determine a virtual address of a data segment from a physical address. The reverse map maps the data storage device into erase regions such that a portion of the reverse map spans an erase region of the data storage device erased together during a storage space recovery operation. A storage space recovery module uses the reverse map to identify valid data in an erase region prior to an operation to recover the erase region. | 09-13-2012 |
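The forward/reverse mapping relationship can be sketched with two dictionaries; the erase-region size is an illustrative constant, and treating an entry as valid only when the forward map still points at it models the recovery check described above.

```python
ERASE_REGION_SIZE = 4  # physical addresses per erase region (illustrative)

forward_map = {}   # virtual address -> physical address
reverse_map = {}   # erase region -> {physical address: virtual address}

def write(virt, phys):
    forward_map[virt] = phys
    region = phys // ERASE_REGION_SIZE
    reverse_map.setdefault(region, {})[phys] = virt

def valid_data_in_region(region):
    # Storage-space recovery: data is valid only if the forward map
    # still maps the virtual address to this physical address.
    return [(p, v) for p, v in reverse_map.get(region, {}).items()
            if forward_map.get(v) == p]

write(1000, 0)
write(2000, 1)
write(1000, 5)                   # rewrite: physical block 0 is now stale
print(valid_data_in_region(0))   # [(1, 2000)]
```

Because the reverse map is grouped by erase region, recovery of a region needs to inspect only that region's portion of the map rather than the whole device.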
20120265931 | HIERARCHICAL IMMUTABLE CONTENT-ADDRESSABLE MEMORY PROCESSOR - Improved memory management is provided according to a Hierarchical Immutable Content Addressable Memory Processor (HICAMP) architecture. In HICAMP, physical memory is organized as two or more physical memory blocks, each physical memory block having a fixed storage capacity. An indication of which of the physical memory blocks is active at any point in time is provided. A memory controller provides a non-duplicating write capability, where data to be written to the physical memory is compared to contents of all active physical memory blocks at the time of writing, to ensure that no two active memory blocks have the same data after completion of the non-duplicating write. | 10-18-2012 |
20120290782 | Block Mapping Circuit and Method for Memory Device - A method of mapping logical block select signals to physical blocks can include receiving at least one signal for each of n+1 logical blocks, where n is an integer greater than one, that each map to one of m+1 physical blocks, where n… | 11-15-2012 |
20120317353 | REPLICATION TECHNIQUES WITH CONTENT ADDRESSABLE STORAGE - A CAS data storage system with one or more source CAS data storage spaces and one or more destination CAS data storage spaces, and a communication line therebetween, receives input data at the source storage space for local storage and for replication to the destination CAS storage space. CAS metadata is used in the replication procedure between the two separate CAS storage spaces. Thus, data at the source storage space is used to form an active buffer for transfer to the destination storage space, the active buffer holding a hash result of the respective data item and a storage address. The system detects whenever there is more than one data item in said active buffer sharing a same storage address and upon such detection transfers a respective hash result of only the last of the data items. | 12-13-2012 |
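The "only the last data item per shared storage address" rule above falls out naturally if the active buffer is keyed by storage address; this is a simplified sketch of that bookkeeping, not the full replication protocol.

```python
def build_active_buffer(writes):
    # writes: iterable of (storage_address, hash_result), in arrival order
    buffer = {}
    for addr, h in writes:
        buffer[addr] = h   # a later write to the same address wins
    return buffer

writes = [(0x10, 'h1'), (0x20, 'h2'), (0x10, 'h3')]
print(build_active_buffer(writes))  # {16: 'h3', 32: 'h2'}
```

Only one hash result per address is transferred to the destination, so overwritten intermediate versions never cross the replication link.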
20120324157 | SYSTEMS AND METHODS FOR UTILIZING AN EXTENDED TRANSLATION LOOK-ASIDE BUFFER HAVING A HYBRID MEMORY STRUCTURE - Extended translation look-aside buffers (eTLB) for converting virtual addresses into physical addresses are presented, the eTLB including, a physical memory address storage having a number of physical addresses, a virtual memory address storage configured to store a number of virtual memory addresses corresponding with the physical addresses, the virtual memory address storage including, a set associative memory structure (SAM), and a content addressable memory (CAM) structure; and comparison circuitry for determining whether a requested address is present in the virtual memory address storage, wherein the eTLB is configured to receive an index register for identifying the SAM structure and the CAM structure, and wherein the eTLB is configured to receive an entry register for providing a virtual page number corresponding with the plurality of virtual memory addresses. | 12-20-2012 |
20120324158 | CONTENT ADDRESSABLE MEMORY (CAM) DEVICE AND METHOD FOR UPDATING DATA - A content addressable memory (CAM)… | 12-20-2012 |
20130007358 | REPLICATING TAG ENTRIES FOR RELIABILITY ENHANCEMENT IN CACHE TAG ARRAYS - Technologies are generally described for exploiting program phase behavior to duplicate most recently and/or frequently accessed tag entries in a Tag Replication Buffer (TRB) to protect the information integrity of tag arrays in a processor cache. The reliability/effectiveness of microprocessor cache performance may be further improved by capturing/duplicating tags of dirty cache lines, exploiting the fact that detected error-corrupted clean cache lines can be recovered by L2 cache. A deterministic TRB replacement triggered early write-back scheme may provide full duplication and recovery of single-bit errors for tags of dirty cache lines. | 01-03-2013 |
20130042060 | MEMORY SYSTEM INCLUDING KEY-VALUE STORE - According to one embodiment, a memory system including a key-value store containing key-value data as a pair of a key and a value corresponding to the key, includes an interface, a memory block, an address acquisition circuit and a controller. The interface receives a data write/read request or a request based on the key-value store. The memory block has a data area for storing data and a metadata table containing the key-value data. The address acquisition circuit acquires an address in response to input of the key. The controller executes the data write/read request for the memory block, and outputs the address acquired to the memory block and executes the request based on the key-value store. The controller outputs the value corresponding to the key via the interface. | 02-14-2013 |
20130046927 | Memory Management Unit Tag Memory with CAM Evaluate Signal - A method and data processing system for accessing an entry in a memory array by placing a tag memory unit… | 02-21-2013 |
20130046928 | Memory Management Unit Tag Memory - A method and data processing system for accessing an entry in a memory array by placing a tag memory unit… | 02-21-2013 |
20130046929 | INTERFACE MODULE, COMMUNICATION APPARATUS, AND COMMUNICATION METHOD - An interface module includes ports; a first memory that stores identifiers indicating processing operations for data blocks associating with the ports; a content-addressable memory that stores keys, each including at least one port and one identifier; a second memory that stores processing information associated with the keys and indicating processing operations for data blocks; an action code circuit that, when a data block has been received, obtains, from the first memory, an identifier set for a port that has received the data block; a generation circuit that generates a key from the port that has received the data block and the identifier obtained by the action code circuit; and a judgment circuit that judges how to process the received data block in accordance with a piece of the processing information associated with the generated key obtained by searching the content-addressable memory using the key generated by the generation circuit. | 02-21-2013 |
20130054886 | CONTENT ADDRESSABLE MEMORY (CAM) - A non-volatile Content Addressable Memory element including a non volatile memristor memory element; a data bus for applying a data signal to be programmed into the memristor memory element; a search bus for applying a search term; an output or match bus; logic to selectively enable the search bus and the data bus; wherein the logic is configurable to set the logic state of the memristor according to a logic signal applied to the data bus, and configurable to enable the logic state of the memristor to be compared to a logic state on the search bus with the match bus signaling a true logic state upon matching. | 02-28-2013 |
20130091325 | Methods and Apparatus Providing High-Speed Content Addressable Memory (CAM) Search-Invalidates - Embodiments of a Content Addressable Memory (CAM) enabling high-speed search and invalidate operations and methods of operation thereof are disclosed. In one embodiment, the CAM includes a CAM cell array including a number of CAM cells and a valid bit cell configured to generate a match indicator, and blocking circuitry configured to block an output of the valid bit cell from altering the match indicator during an invalidate process of a search and invalidate operation. Preferably, the output of the valid bit cell is blocked from affecting the match indicator for the CAM cell array beginning at a start of the invalidate process and continuing until an end of the search and invalidate operation. | 04-11-2013 |
20130124796 | STORAGE METHOD AND APPARATUS WHICH ARE BASED ON DATA CONTENT IDENTIFICATION - The embodiments of the present invention provide a storage method and a storage apparatus based on data content identification. Data from the host is received, its content is scanned to obtain format characteristics, those characteristics are matched against a content characteristic base to determine the data's attributes, and the data is sorted and stored according to those attributes, so that the storage device can recognize and optimize the data it stores, improving its storage performance. | 05-16-2013 |
20130159618 | NEAREST NEIGHBOR SERIAL CONTENT ADDRESSABLE MEMORY - A digital design and technique may be used to implement a Manhattan Nearest Neighbor content addressable memory function by augmenting a serial content addressable memory design with additional memory and counters for bit serially accumulating in parallel and subsequently comparing in parallel all the Manhattan distances between a serially inputted vector and all corresponding vectors resident in the CAM. Other distance measures, besides a Manhattan distance, may optionally be used in conjunction with similar techniques and designs. | 06-20-2013 |
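The Manhattan nearest-neighbor function described above reduces to accumulating per-dimension absolute differences and taking the minimum; in the hardware the distances accumulate bit-serially in parallel, while this sketch simply computes them all.

```python
def manhattan(a, b):
    # Manhattan (L1) distance between two equal-length vectors.
    return sum(abs(x - y) for x, y in zip(a, b))

def nearest(cam_vectors, query):
    # Return the index of the stored vector closest to the query.
    return min(range(len(cam_vectors)),
               key=lambda i: manhattan(cam_vectors[i], query))

stored = [(0, 0, 0), (5, 5, 5), (2, 1, 0)]
print(nearest(stored, (2, 2, 1)))  # 2
```

Swapping `manhattan` for another metric models the abstract's note that other distance measures may be used with the same design.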
20130198445 | SEMICONDUCTOR MEMORY DEVICE AND INFORMATION PROCESSING DEVICE - According to one embodiment, a semiconductor memory device includes a memory and a controller. The memory stores data pieces and search information including entries, where each entry is associated with a search key for specifying one data piece and a real address at which the data piece is stored. Upon reception of a first command, the controller, when the first command specifies a search key, outputs one data piece corresponding to one entry which includes the search key, and when the first command specifies one real address, outputs one data piece corresponding to one entry including the real address. | 08-01-2013 |
20130246698 | Hybrid Memory for Search Operations - Methods, systems, and computer readable storage medium embodiments for configuring a lookup table for a network device are disclosed. Aspects in these embodiments include generating a decision tree based upon bit representations of respective data entries from a plurality of data entries where one or more of the plurality of data entries are represented at respective nodes of the decision tree, storing a first bit pattern corresponding to a selected node from the decision tree in a content addressable memory (CAM) at a location associated with an index, and storing one or more second bit patterns at an address in a second memory. The one or more second bit patterns correspond to the one or more data entries represented at the selected node, and the address is associated with the index. Embodiments also include searching a lookup table in a network device. | 09-19-2013 |
20130268729 | SCALABLE PACKET CLASSIFICATION USING ASSOCIATIVE MEMORY - Techniques for forming and using multi-space associative memory units are disclosed. One example method for retrieving classification rules for data objects begins with the retrieval of a first action for the data object by performing a first lookup in a first associative memory space in a memory unit, using a first key formed from the data object. A second action for the data object is retrieved by performing a second lookup in a second associative memory space, using a second key formed from the data object. The lookups are performed simultaneously, in some embodiments, or serially, in others. In some embodiments, the second lookup is performed after the first, in response to an information element retrieved from the first lookup, the information element indicating that an additional associative memory lookup is needed. A final action for the data object is determined from the results of the first and second lookups. | 10-10-2013 |
20130290622 | TCAM ACTION UPDATES - Systems, and methods, including executable instructions and/or logic thereon are provided for ternary content addressable memory (TCAM) updates. A TCAM system includes a TCAM matching array, a TCAM action array that specifies actions that are taken upon a match in the TCAM array, and a TCAM driver that provides a programmable interface to the TCAM matching array and the TCAM action array. Program instructions are executed by the TCAM driver to add a divert object which encompasses actions associated with the TCAM actions array and to apply the divert object to update action fields in the TCAM action array, without changing the relative order of entries in the TCAM matching array, while hardware is simultaneously using the entries. | 10-31-2013 |
20130297866 | Smart Zoning Using Device Alias Database - Systems and methods are disclosed to implement smart zoning using device alias database that preserves TCAM space. Embodiments may consider device types to save an administrator's efforts from splitting application specific zones into two-member (initiator and target) zones. | 11-07-2013 |
20130304983 | DYNAMIC ALLOCATION OF RECORDS TO CLUSTERS IN A TERNARY CONTENT ADDRESSABLE MEMORY - Embodiments of the invention are directed to a TCAM for longest prefix matching in a routing system. The TCAM comprises a plurality of records of which a portion are configured into one or more address clusters each such cluster corresponding to a respective IP address prefix length and another portion of which are configured into a free cluster not corresponding to any IP address prefix length. | 11-14-2013 |
20130326133 | LOCAL CACHING DEVICE, SYSTEM AND METHOD FOR PROVIDING CONTENT CACHING SERVICE - The present disclosure relates to a local caching device, system and method for providing a content caching service. The local caching device receives, from a content provider, at least one part of content requested by a user terminal and then, based on the received part of the requested content, determines whether the requested content is stored in a storage unit. If the requested content is stored, the local caching device registers flow information of the requested content in the storage unit. When content having the same flow information as the registered flow information is requested, the local caching device determines based on content address information whether the requested content is stored. | 12-05-2013 |
20130332670 | PROCESS IDENTIFIER-BASED CACHE DATA TRANSFER - Embodiments of the invention relate to process identifier (PID) based cache information transfer. An aspect of the invention includes sending, by a first core of a processor, a PID associated with a cache miss in a first local cache of the first core to a second cache of the processor. Another aspect of the invention includes determining that the PID associated with the cache miss is listed in a PID table of the second cache. Yet another aspect of the invention includes based on the PID being listed in the PID table of the second cache, determining a plurality of entries in a cache directory of the second cache that are associated with the PID. Yet another aspect of the invention includes pushing cache information associated with each of the determined plurality of entries in the cache directory from the second cache to the first local cache. | 12-12-2013 |
20130332671 | Creating Optimal Comparison Criterion within Associative Memories - A system including an associative memory including a plurality of data and a plurality of associations among the plurality of data. The plurality of data is collected into associated groups. The associative memory is configured to be queried based on at least indirect relationships among the plurality of data. The system also includes an input device in communication with the associative memory, the input device configured to receive an input criteria. The system also includes an optimizer in communication with the input device and the associative memory. The optimizer is configured to generate, using the associative memory, a multi-dimensional criteria file from the input criteria. The optimizer converts the input criteria to numerical representations associated with expert weights and generates the multi-dimensional criteria file to include an optimized plurality of criteria relevant to the input criteria. | 12-12-2013 |
20130332672 | PROCESS IDENTIFIER-BASED CACHE INFORMATION TRANSFER - Embodiments of the invention relate to process identifier (PID) based cache information transfer. An aspect of the invention includes sending, by a first core of a processor, a PID associated with a cache miss in a first local cache of the first core to a second cache of the processor. Another aspect of the invention includes determining that the PID associated with the cache miss is listed in a PID table of the second cache. Yet another aspect of the invention includes based on the PID being listed in the PID table of the second cache, determining a plurality of entries in a cache directory of the second cache that are associated with the PID. Yet another aspect of the invention includes pushing cache information associated with each of the determined plurality of entries in the cache directory from the second cache to the first local cache. | 12-12-2013 |
20130339596 | CACHE SET SELECTIVE POWER UP - Embodiments of the disclosure include selectively powering up a cache set of a multi-set associative cache by receiving an instruction fetch address and determining that the instruction fetch address corresponds to one of a plurality of entries of a content addressable memory. Based on determining that the instruction fetch address corresponds to one of a plurality of entries of a content addressable memory a cache set of the multi-set associative cache that contains a cache line referenced by the instruction fetch address is identified and only powering up a subset of cache. Based on the identified cache set not being powered up, selectively powering up the identified cache set of the multi-set associative cache and transmitting one or more instructions stored in the cache line referenced by the instruction fetch address to a processor. | 12-19-2013 |
20130339597 | METHODS AND APPARATUS PROVIDING HIGH-SPEED CONTENT ADDRESSABLE MEMORY (CAM) SEARCH-INVALIDATES - Embodiments of a Content Addressable Memory (CAM) enabling high-speed search and invalidate operations and methods of operation thereof are disclosed. In one embodiment, the CAM includes a CAM cell array including a number of CAM cells and a valid bit cell configured to generate a match indicator, and blocking circuitry configured to block an output of the valid bit cell from altering the match indicator during an invalidate process of a search and invalidate operation. Preferably, the output of the valid bit cell is blocked from affecting the match indicator for the CAM cell array beginning at a start of the invalidate process and continuing until an end of the search and invalidate operation. | 12-19-2013 |
20140006706 | Ternary Content-Addressable Memory Assisted Packet Classification | 01-02-2014 |
20140025881 | SELF-RECONFIGURABLE ADDRESS DECODER FOR ASSOCIATIVE INDEX EXTENDED CACHES - Associative index extended (AIX) caches can be functionally implemented through a reconfigurable decoder that employs programmable line decoding. The reconfigurable decoder features scalability in the number of lines, the number of index extension bits, and the number of banks. The reconfigurable decoder can switch between pure direct mapped (DM) mode and direct mapped-associative index extended (DM-AIX) mode of operation. For banked configurations, the reconfigurable decoder provides the ability to run some banks in DM mode and some other banks in DM-AIX mode. A cache employing this reconfigurable decoder can provide a comparable level of latency as a DM cache with minimal modifications to a DM cache circuitry of an additional logic circuit on a critical signal path, while providing low power operation at low area overhead with SA cache-like miss rates. Address masking and most-recently-used-save replacement policy can be employed with a single bit overhead per line. | 01-23-2014 |
20140025882 | TRANSMISSION DEVICE AND TEMPERATURE CONTROL METHOD - There is provided a transmission device including an associative memory in which, when data is specified, contents of the memory are searched for the data and an address of a location in which the data has been found is read out; a detector configured to detect an access rate to the associative memory; an estimation unit configured to estimate a temperature of the associative memory, based on the access rate to the associative memory; a prediction unit configured to predict a time period until the temperature of the associative memory reaches a specified temperature, based on the temperature estimated by the estimation unit; and an access controller configured to control an access to the associative memory, based on the time period predicted by the prediction unit. | 01-23-2014 |
20140025883 | STORAGE OF A DESIRED ADDRESS IN A DEVICE OF A CONTROL SYSTEM - A method, apparatus, program and system is provided for storing a desired address in a device of a control system in which at least one device of a first type and one or more devices of a second type are connected to one another via a communication medium for the purpose of interchanging data. An index value and a unique address of a device of the second type are associated to one another and stored in the device of the first type. The association can be automatically retrieved when a device of the second type is replaced with a replacement device, and the new address of the replacement device can be automatically determined based on the association. | 01-23-2014 |
20140032831 | MULTI-UPDATABLE LEAST RECENTLY USED MECHANISM - A control unit of a least recently used (LRU) mechanism for a ternary content addressable memory (TCAM) stores counts indicating a time sequence with resources in entries of the TCAM. The control unit receives an access request with a mask defining related resources. The TCAM is searched to find partial matches based on the mask. The control unit increases the counts for entries corresponding to partial matches, preserving an order of the counts. If the control unit also finds an exact match, its count is updated to be greater than the other increased counts. After each access request, the control unit searches the TCAM to find the entry having the lowest count, and writes the resource of that entry to an LRU register. In this manner, the system software can instantly identify the LRU entry by reading the value in the LRU register. | 01-30-2014 |
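The count-based LRU scheme above can be modeled compactly. The exact bump amounts are not specified in the abstract; bumping all partial matches above the current maximum while keeping their relative order, then placing the exact match above them, is an assumed realization.

```python
entries = {'A': 1, 'B': 2, 'C': 3}   # resource -> count (time order)

def access(resource, related):
    # related: the resources the request's mask marks as partial matches
    bumped = [r for r in related if r in entries]
    if bumped:
        base = max(entries.values())
        # bump partial matches, preserving their relative count order
        for rank, r in enumerate(sorted(bumped, key=entries.get), 1):
            entries[r] = base + rank
    if resource in entries:
        # exact match: count greater than all increased counts
        entries[resource] = max(entries.values()) + 1

def lru():
    # the LRU register holds the resource with the lowest count
    return min(entries, key=entries.get)

access('C', {'B', 'C'})
print(lru())  # 'A'
```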
20140032832 | TCAM EXTENDED SEARCH FUNCTION - An apparatus includes a range determination module that determines a search range of TCAM content values and a search criteria module that creates a TCAM search value from a search range by combining common higher order bits with don't care lower order bits that change within the search range. A match module searches the TCAM using the search value to determine a match count. A division module creates upper and lower sub-ranges by creating upper and lower midpoint content values within the search range. The upper sub-range is between an upper content value and the upper midpoint content value and the lower sub-range is between the lower midpoint content value and a lower content value. The upper midpoint content value includes changing a most significant don't care bit to a 1 and remaining don't care bits to 0. The lower midpoint content value includes changing a most significant don't care bit to 0 and remaining don't care bits to 1. | 01-30-2014 |
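The midpoint construction above is purely bit manipulation and can be checked directly. This sketch assumes the range bounds share their high-order bits and span the full low-order don't-care field (e.g. 64..127), which is the case the abstract's search value describes.

```python
def midpoints(lo, hi):
    diff = lo ^ hi
    dc = (1 << diff.bit_length()) - 1   # don't-care low-order bit mask
    value = lo & ~dc                    # common higher-order bits
    ms = 1 << (dc.bit_length() - 1)     # most significant don't-care bit
    upper_mid = value | ms              # ms bit -> 1, remaining dc -> 0
    lower_mid = value | (dc ^ ms)       # ms bit -> 0, remaining dc -> 1
    return lower_mid, upper_mid

print(midpoints(0b0100_0000, 0b0111_1111))  # (95, 96)
```

For the range 64..127 this yields a lower sub-range 64..95 and an upper sub-range 96..127, matching the division step described above.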
20140059288 | Batch Entries Sharing Individual Content-addressable Memory Entries - In one embodiment, batch entries include multiple content-addressable memory (CAM) entries, and CAM entries are allowed to be shared among different batch entries. For example, two or more batch entries might have a common set of bits (e.g., representing an address, an address prefix, etc.). Rather than consuming bits of multiple CAM entries, a single CAM entry can be programmed with this common information. Other CAM entries associated with different batch entries are programmed with the distinguishing/different values. A batch lookup operation on a batch entry of two or more CAM entries requires multiple lookup operations on the CAM entries. One embodiment uses a batch mask vector to provide information to decode what CAM entries are shared among which batch entries during a series of lookup operations, which can be performed in one or both directions through the CAM entries. | 02-27-2014 |
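Entry sharing can be sketched by letting batch entries reference physical CAM entries by index, so a common value is stored once; the entry contents and batch names below are illustrative, and the batch mask vector decoding is elided.

```python
cam = ['10.0.', '1.80', '2.443']          # physical CAM entries (illustrative)
batches = {'web': [0, 2], 'alt': [0, 1]}  # both batches share entry 0

def batch_lookup(name, parts):
    # A batch lookup is a series of lookups, one per physical entry.
    return all(cam[i] == p for i, p in zip(batches[name], parts))

print(batch_lookup('web', ['10.0.', '2.443']))  # True
```

The shared prefix in entry 0 consumes one physical entry instead of being duplicated in every batch that needs it.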
20140059289 | Content-addressable Memory Lookup Device Supporting Iterative Lookup Operations - In one embodiment, multiple content-addressable memory entries are associated with each other to effectively form a batch content-addressable memory entry that spans multiple physical entries of the content-addressable memory device. To match against this content-addressable memory entry, multiple lookup operations are required—i.e., one lookup operation for each combined physical entry. Further, one embodiment provides that a batch content-addressable memory entry can span one, two, three, or more physical content-addressable memory entries, and batch content-addressable memory entries of varying sizes could be programmed into a single content-addressable memory device. Thus, a lookup operation might take two lookup iterations on the physical entries of the content-addressable memory device, with a next lookup operation taking a different number of lookup iterations (e.g., one, three or more). | 02-27-2014 |
20140068173 | CONTENT ADDRESSABLE MEMORY SCHEDULING - A digital system may utilize a serial content-addressable memory (CAM), capable of performing greater than, less than and/or equal comparisons between its contents and serially inputted data records according to a type of each data record, to select software routine addresses and associated parameters. The system may also include a scheduler, which may select one or more available processors to execute the software routines on the data records. | 03-06-2014 |
20140068174 | TRANSACTIONAL MEMORY THAT PERFORMS A CAMR 32-BIT LOOKUP OPERATION - A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a base address, a starting bit position, and a mask size. In response to the command, the TM pulls an input value (IV). A selecting circuit within the TM uses the starting bit position and the mask size to select a first portion of the IV. The first portion of the IV and the base address value are summed to generate a memory address. The memory address is used to read a word containing multiple result values and multiple reference values from memory. A second portion of the IV is compared with each reference value using a comparator circuit. A result value associated with the matching reference value is selected using a multiplexing circuit and a select value generated by the comparator circuit. The TM sends the selected result value to the processor. | 03-06-2014 |
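The lookup path above — slice the input value, add the base address, then compare the remainder against the word's reference values — can be modeled as below. The memory contents and the assumption that the second IV portion is simply the bits above the selected slice are illustrative.

```python
memory = {
    100: [(0xAA, 'r0'), (0xBB, 'r1')],   # word: (reference, result) pairs
    101: [(0xCC, 'r2')],
}

def camr_lookup(base, start_bit, mask_size, iv):
    # First portion of the IV, selected by start bit and mask size.
    first = (iv >> start_bit) & ((1 << mask_size) - 1)
    word = memory[base + first]          # sum -> memory address -> word
    # Second portion of the IV (assumed: the remaining high bits).
    second = iv >> (start_bit + mask_size)
    for ref, result in word:             # comparator circuit
        if ref == second:
            return result                # multiplexed result value
    return None

# IV whose low 2 bits select word 101 and whose high bits equal 0xCC
print(camr_lookup(100, 0, 2, (0xCC << 2) | 0b01))  # 'r2'
```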
20140068175 | OLDEST OPERATION TRANSLATION LOOK-ASIDE BUFFER - A method is provided for dispatching a load operation to a processing device and determining that the operation is the oldest load operation. The method also includes executing the operation in response to determining the operation is the oldest load operation. Computer readable storage media for performing the method are also provided. An apparatus is provided that includes a translation look-aside buffer (TLB) content addressable memory (CAM), and includes an oldest operation storage buffer operationally coupled to the TLB CAM. The apparatus also includes an output multiplexor operationally coupled to the TLB CAM and to the oldest operation storage buffer. Computer readable storage media for adapting a fabrication facility to manufacture the apparatus are also provided. | 03-06-2014 |
20140068176 | LOOKUP ENGINE WITH PIPELINED ACCESS, SPECULATIVE ADD AND LOCK-IN-HIT FUNCTION - Described embodiments provide a lookup engine that receives lookup requests including a requested key and a speculative add requestor. Iteratively, for each one of the lookup requests, the lookup engine searches each entry of a lookup table for an entry having a key matching the requested key of the lookup request. If the lookup table does not include an entry having a key matching the requested key, the lookup engine sends a miss indication corresponding to the lookup request to the control processor. If the speculative add requestor is set, the lookup engine speculatively adds the requested key to a free entry in the lookup table. Speculatively added keys are searchable in the lookup table for subsequent lookup requests to maintain coherency of the lookup table without creating duplicate key entries, comparing missed keys with each other or stalling the lookup engine to insert missed keys. | 03-06-2014 |
20140068177 | ENHANCED MEMORY SAVINGS IN ROUTING MEMORY STRUCTURES OF SERIAL ATTACHED SCSI EXPANDERS - Methods and structure are provided for representing ports of a Serial Attached SCSI (SAS) expander circuit within routing memory. The SAS expander includes a plurality of PHYs and a routing memory. The routing memory includes entries that each indicate a set of PHYs available for initiating a connection with a SAS address, and also includes an entry that represents a SAS port with a start tag indicating a first PHY of the port and a length tag indicating a number of PHYs in the port. The SAS expander also includes a Content Addressable Memory (CAM) including entries that each associate a SAS address with an entry in the routing memory. Further, the SAS expander includes a controller that receives a request for a SAS address, uses the CAM to determine a corresponding routing memory entry for the requested SAS address, and selects the port indicated by the corresponding routing memory entry. | 03-06-2014 |
20140075108 | EFFICIENT TCAM RESOURCE SHARING - Various systems and methods for implementing efficient TCAM resource sharing are described herein. Entries are allocated across a plurality of ternary content addressable memories (TCAMs), with the plurality of TCAMs including a primary TCAM and a secondary TCAM, where the entries are allocated by sequentially accessing a plurality of groups of value-mask-result (VMR) entries, with each group having at least one VMR entry associated with the group, and iteratively analyzing the VMR entries associated with each group to determine a result set of VMR entries, with the result set being a subset of VMR entries from the plurality of groups of VMR entries, and the result set to be stored in the primary TCAM. | 03-13-2014 |
20140082273 | CONTENT ADDRESSABLE STORAGE IN LEGACY SYSTEMS - A CAS data storage system replicates data on a non-CAS storage device. The CAS storage device recognizes duplicate data and stores the data only once, whereas the non-CAS device does not recognize duplication of data and requires full storage of the data. The CAS data storage device saves on redundant data transfer by transferring, in the case of duplicate data, the address of a primary location at which the data is stored and the address of the current duplication. The CAS data storage system includes a hash→address table for this purpose. The non-CAS storage device then copies its own data from the primary location into the current location. | 03-20-2014 |
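The deduplicating write path described in the entry above can be sketched as a toy model (the class, field names, and use of SHA-256 are illustrative assumptions, not taken from the patent): duplicate data is stored once, and only the pair of addresses is recorded for each duplicate.

```python
import hashlib

class CASStore:
    """Toy content-addressable store: identical data is kept once;
    duplicates are recorded as (primary_address, duplicate_address) pairs."""

    def __init__(self):
        self.blocks = {}        # address -> data (stored once per content)
        self.hash_to_addr = {}  # content hash -> primary address
        self.dup_log = []       # (primary_address, duplicate_address)
        self.next_addr = 0

    def write(self, data: bytes) -> int:
        h = hashlib.sha256(data).hexdigest()
        addr = self.next_addr
        self.next_addr += 1
        if h in self.hash_to_addr:
            # Duplicate: transfer only the two addresses, not the data,
            # so a non-CAS replica can copy from the primary location.
            self.dup_log.append((self.hash_to_addr[h], addr))
        else:
            self.hash_to_addr[h] = addr
            self.blocks[addr] = data
        return addr

store = CASStore()
a0 = store.write(b"hello")
a1 = store.write(b"world")
a2 = store.write(b"hello")   # duplicate content
```

After these three writes only two blocks are physically stored, and the duplicate log holds the single pair of addresses that the non-CAS device would use to copy the data locally.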
20140089578 | MULTI-UPDATABLE LEAST RECENTLY USED MECHANISM - A control unit of a least recently used (LRU) mechanism for a ternary content addressable memory (TCAM) stores counts indicating a time sequence with resources in entries of the TCAM. The control unit receives an access request with a mask defining related resources. The TCAM is searched to find partial matches based on the mask. The control unit increases the counts for entries corresponding to partial matches, preserving an order of the counts. If the control unit also finds an exact match, its count is updated to be greater than the other increased counts. After each access request, the control unit searches the TCAM to find the entry having the lowest count, and writes the resource of that entry to an LRU register. In this manner, the system software can instantly identify the LRU entry by reading the value in the LRU register. | 03-27-2014 |
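The count-based LRU scheme in the entry above can be sketched as follows (class and method names are hypothetical; the TCAM partial-match search is modeled as a plain set of related resources):

```python
class TCAMLRU:
    """Toy model of the count-based LRU mechanism: each entry carries a
    count encoding recency; the entry with the lowest count is the LRU."""

    def __init__(self, resources):
        # Initial counts encode an arbitrary starting time order.
        self.counts = {r: i for i, r in enumerate(resources)}
        self.lru_register = min(self.counts, key=self.counts.get)

    def access(self, resource, related):
        # Increase counts of partial matches, preserving their relative order.
        bumped = sorted((r for r in related if r in self.counts),
                        key=lambda r: self.counts[r])
        base = max(self.counts.values()) + 1
        for i, r in enumerate(bumped):
            self.counts[r] = base + i
        if resource in self.counts:
            # The exact match becomes greater than all increased counts.
            self.counts[resource] = max(self.counts.values()) + 1
        # Refresh the LRU register so software can read it instantly.
        self.lru_register = min(self.counts, key=self.counts.get)

lru = TCAMLRU(["A", "B", "C"])
lru.access("B", related={"B", "C"})   # B exact match, C partial match
```

After the access, "A" is the only untouched entry, so it appears in the LRU register.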
20140095782 | METHOD AND SYSTEM FOR USING RANGE BITMAPS IN TCAM ACCESS - Various exemplary embodiments relate to a method and related network node including one or more of the following: determining that a first search value is associated with a first range field; determining a first bitmap associated with the first search value, wherein the first bitmap indicates at least one range encompassing the first search value; generating a search key based on the first bitmap; and accessing the ternary content addressable memory based on the search key. | 04-03-2014 |
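The range-bitmap idea in the entry above can be illustrated with a minimal sketch (the port ranges below are invented for illustration): each configured range gets one bit, and a value's bitmap sets the bits of every range that encompasses it, so the TCAM key can match ranges with don't-care bits instead of expanded entries.

```python
def range_bitmap(value, ranges):
    """Bitmap with one bit per configured range; bit i is set when
    `value` falls inside ranges[i] (ranges are inclusive)."""
    bm = 0
    for i, (lo, hi) in enumerate(ranges):
        if lo <= value <= hi:
            bm |= 1 << i
    return bm

# Hypothetical port ranges a classifier might care about.
RANGES = [(0, 1023), (1024, 49151), (80, 80)]

bm_http = range_bitmap(80, RANGES)    # inside ranges 0 and 2
bm_alt = range_bitmap(8080, RANGES)   # inside range 1 only
```

A search key built from the bitmap lets one TCAM entry test "value in range i" by caring only about bit i of the bitmap field.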
20140095783 | PHYSICAL AND LOGICAL COUNTERS - Techniques for reducing a number of physical counters are provided. Logical counters may be associated with physical counters. The number of physical counters may be less than the number of logical counters. It may be determined if an association of a logical counter to a physical counter exists already. If not, a new association may be created. The physical counter associated with the logical counter may then be updated. | 04-03-2014 |
20140095784 | Techniques for Utilizing Transaction Lookaside Buffer Entry Numbers to Improve Processor Performance - A technique for operating a processor includes translating, using an associated translation lookaside buffer, a first virtual address into a first physical address through a first entry number in the translation lookaside buffer. The technique also includes translating, using the translation lookaside buffer, a second virtual address into a second physical address through a second entry number in the translation lookaside buffer. The technique further includes, in response to the first entry number being the same as the second entry number, determining that the first and second virtual addresses point to the same physical address in memory and reference the same data. | 04-03-2014 |
20140095785 | Content Aware Block Power Savings - A memory architecture power savings system includes a first memory module configured to provide data corresponding to a stored address from among a plurality of stored addresses by comparing the plurality of stored addresses to a search key in response to a control signal. A second memory module is configured to store a plurality of data entries corresponding to truncated portions of the plurality of stored addresses, and to generate the control signal by comparing the plurality of data entries to a truncated portion of the search key. | 04-03-2014 |
20140108717 | System and Method to Backup Objects on an Object Storage Platform - A method and system enable tape back-up of objects stored to an object storage platform and also enable efficient backup of data objects to a secondary storage device. An offline-replica bit within the metadata of an object being stored is set to a first value, indicating that the stored object is available for secondary storage to a second storage device. In response to receiving a request for backup of one or more objects from the object storage platform, the storage controller identifies which objects have an offline-replica bit value that is the first value, and provides only those objects requested that have their offline-replica bit value equal to the first value. An external backup tracking mechanism identifies which objects have been backed up to the secondary storage, and only those objects that have not previously been backed up are backed up during a subsequent backup request. | 04-17-2014 |
20140108718 | METHOD AND APPARATUS FOR SETTING TCAM ENTRY - The present invention discloses a method and an apparatus for setting a TCAM entry and relates to the field of communications, which serve to improve utilization of a TCAM. The method for setting a TCAM entry includes: acquiring a number set formed by values of same fields in preset packets, where the packets are packets on which a same action needs to be performed, and the number set includes at least two numbers; acquiring a longest continuous mask of the number set; obtaining an acquisition result according to the longest continuous mask of the number set; and storing the acquisition result in a ternary content-addressable memory (TCAM) entry corresponding to the action. The solutions disclosed in the present invention are applicable to a scenario of setting a TCAM entry. | 04-17-2014 |
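The "longest continuous mask" of a number set, as used in the entry above, can be sketched as the longest run of high-order bits shared by every number (a simplification; the patent's full acquisition procedure may refine the result further):

```python
def longest_continuous_mask(numbers, width=32):
    """Longest continuous high-order mask shared by every number in the
    set, returned as (value, mask): masked-in bits are common to all."""
    full = (1 << width) - 1
    mask = full
    first = numbers[0]
    for n in numbers[1:]:
        diff = first ^ n
        # Shrink the mask until it excludes the highest differing bit.
        while diff & mask:
            mask = (mask << 1) & full
    return first & mask, mask

# Three field values that should trigger the same action.
value, mask = longest_continuous_mask(
    [0b10110100, 0b10110111, 0b10110001], width=8)
```

Here all three numbers share the top five bits, so a single ternary entry with value `0b10110000` and mask `0b11111000` covers the whole set (at the cost of also matching the other values under that prefix).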
20140115249 | Parallel Execution Mechanism and Operating Method Thereof - A thread priority control mechanism is provided which uses the completion event of the preceding transaction to raise the priority of the next transaction in the order of execution when the transaction status has been changed from speculative to non-speculative. In one aspect of the present invention, a thread-level speculation mechanism is provided which has content-addressable memory, an address register and a comparator for recording transaction footprints, and a control logic circuit for supporting memory synchronization instructions. This supports hardware transactional memory in detecting transaction conflicts. This thread-level speculation mechanism includes a priority up bit for recording an attribute operand in a memory synchronization instruction, a means for generating a priority up event when a thread wake-up event has occurred and the priority up bit is 1, and a means for preventing the CAM from storing the load/store address when the instruction is a non-transaction instruction. | 04-24-2014 |
20140122791 | SYSTEM AND METHOD FOR PACKET CLASSIFICATION AND INTERNET PROTOCOL LOOKUP IN A NETWORK ENVIRONMENT - An example method includes partitioning a memory element of a router into a plurality of segments having one or more rows, where at least a portion of the one or more rows is encoded with a value mask (VM) list having a plurality of values and masks. The VM list is identified by a label, and the label is mapped to a base row number and a specific number of bits corresponding to the portion encoding the VM list. Another example method includes partitioning a prefix into a plurality of blocks, indexing to a hash table using a value of a specific block, where a bucket of the hash table corresponds to a segment of a ternary content addressable memory of a router, and storing the prefix in a row of the segment. | 05-01-2014 |
20140149655 | Narrowing Comparison Results of Associative Memories - A system including an associative memory. A first input device in communication with the associative memory is configured to receive comparison criteria associated with a first entity stored in the associative memory. A search engine is configured to acquire, using a processor in conjunction with the associative memory and also using the comparison criteria, an attribute category of the first entity and an attribute value of the first entity. A second input device is configured to input, using the processor in conjunction with the associative memory, the attribute category and the attribute value into a worksheet of the associative memory. A comparator is configured to compare, using the processor in conjunction with the associative memory, the first entity and a second entity. The comparator is further configured to apply the worksheet as part of comparing the first entity and the second entity. | 05-29-2014 |
20140149656 | HIERARCHICAL IMMUTABLE CONTENT-ADDRESSABLE MEMORY PROCESSOR - Improved memory management is provided according to a Hierarchical Immutable Content Addressable Memory Processor (HICAMP) architecture. In HICAMP, physical memory is organized as two or more physical memory blocks, each physical memory block having a fixed storage capacity. An indication of which of the physical memory blocks is active at any point in time is provided. A memory controller provides a non-duplicating write capability, where data to be written to the physical memory is compared to contents of all active physical memory blocks at the time of writing, to ensure that no two active memory blocks have the same data after completion of the non-duplicating write. | 05-29-2014 |
20140156924 | SEMICONDUCTOR MEMORY DEVICE WITH IMPROVED OPERATING SPEED AND DATA STORAGE DEVICE INCLUDING THE SAME - A semiconductor memory device includes a power block configured to generate an internal voltage based on an external voltage which is applied through a power pad; a circuit block configured to operate according to the internal voltage and drive memory cells; and a CAM (content addressable memory) block configured to operate according to the external voltage and store setting information necessary for driving of the memory cells. | 06-05-2014 |
20140173193 | TECHNIQUE FOR ACCESSING CONTENT-ADDRESSABLE MEMORY - A tag unit configured to manage a cache unit includes a coalescer that implements a set hashing function. The set hashing function maps a virtual address to a particular content-addressable memory unit (CAM). The coalescer implements the set hashing function by splitting the virtual address into upper, middle, and lower portions. The upper portion is further divided into even-indexed bits and odd-indexed bits. The even-indexed bits are reduced to a single bit using a XOR tree, and the odd-indexed bits are reduced in like fashion. Those single bits are combined with the middle portion of the virtual address to provide a CAM number that identifies a particular CAM. The identified CAM is queried to determine the presence of a tag portion of the virtual address, indicating a cache hit or cache miss. | 06-19-2014 |
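The set hashing function in the entry above can be sketched directly (the field widths below are assumptions chosen for illustration; the patent does not fix them here): XOR-reduce the even- and odd-indexed bits of the upper portion to one bit each, then combine those bits with the middle portion to select a CAM.

```python
def cam_number(virtual_addr, mid_bits=2, low_bits=7, upper_bits=8):
    """Set-hash sketch: XOR-reduce even- and odd-indexed bits of the
    upper address portion to one bit each, then concatenate with the
    middle portion to form the CAM select number."""
    middle = (virtual_addr >> low_bits) & ((1 << mid_bits) - 1)
    upper = virtual_addr >> (low_bits + mid_bits)
    even = odd = 0
    for i in range(upper_bits):
        bit = (upper >> i) & 1
        if i % 2 == 0:
            even ^= bit   # XOR tree over even-indexed bits
        else:
            odd ^= bit    # XOR tree over odd-indexed bits
    return (even << (mid_bits + 1)) | (odd << mid_bits) | middle

# Example address: upper = 0b10110110, middle = 0b11, lower = 0b0000001.
addr = (0b10110110 << 9) | (0b11 << 7) | 0b0000001
n = cam_number(addr)
```

With these widths the hash yields a 4-bit CAM number, so up to 16 CAMs can be selected while spreading addresses that differ only in upper bits across them.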
20140181394 | DIRECTORY CACHE SUPPORTING NON-ATOMIC INPUT/OUTPUT OPERATIONS - Responsive to receiving a write request for a cache line from an input/output device, a caching agent of a first processor determines that the cache line is managed by a home agent of a second processor. The caching agent sends an ownership request for the cache line to the second processor. A home agent of the second processor receives the ownership request, generates an entry in a directory cache for the cache line, the entry identifying the remote caching agent as having ownership of the cache line, and grants ownership of the cache line to the remote caching agent. Responsive to receiving the grant of ownership for the cache line from the home agent, an input/output controller of the first processor adds an entry for the cache line to an input/output write cache, the entry comprising a first indicator that the cache line is managed by the home agent of the second processor. | 06-26-2014 |
20140208016 | System and Method for Filtering Addresses - A method includes determining addresses, determining masks, and storing the masks in a ternary content-addressable-memory for matching a candidate address to the masks to determine matches to the addresses. The addresses include an address width and positions, the address width equal to the number of positions. Each mask matches one or more addresses, includes a mask width equal to the address width, and includes matching criteria for determining whether to filter a given address. The matching criteria includes a matching component specifying that an identified position in the address includes a particular value or a wildcard component specifying that an identified position in the address is to be ignored. The masks include at least one mask with a wildcard component. The number of masks is less than the number of the addresses. The number of possible addresses corresponding to the masks is equal to the number of the addresses. | 07-24-2014 |
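The matching/wildcard criteria in the entry above reduce to a standard ternary match, sketched below (the 4-bit example values are invented): a care-mask bit of 0 marks a wildcard position that is ignored, so one mask can cover several addresses.

```python
def matches(addr, value, care_mask):
    """Ternary match: positions where care_mask is 0 are wildcards."""
    return (addr & care_mask) == (value & care_mask)

# One mask with a wildcard in bit 0 covers two 4-bit addresses,
# so fewer masks than addresses are needed while matching exactly
# the same set of addresses.
VALUE, CARE = 0b1010, 0b1110
covered = [a for a in range(16) if matches(a, VALUE, CARE)]
```

The single entry above stands in for the two addresses `0b1010` and `0b1011`, illustrating how the number of masks can be smaller than the number of addresses they represent.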
20140215143 | CROSSBAR MEMORY TO PROVIDE CONTENT ADDRESSABLE FUNCTIONALITY - Examples disclose a crossbar memory with a first crossbar to write data values corresponding to a word. The crossbar memory further comprises a second crossbar, substantially parallel to the first crossbar, to receive voltage for activation of data values across the second crossbar. Additionally, the examples of the crossbar memory provide an output line that interconnects with the crossbars at junctions, to read the data values at the junctions. Further, the examples of the crossbar memory provide a logic module to determine whether the second crossbar data values correspond to the word written in the first crossbar. | 07-31-2014 |
20140215144 | ARCHITECTURE FOR TCAM SHARING - Aspects of the disclosure provide a packet processing system. The packet processing system includes a plurality of processing units, a ternary content addressable memory (TCAM) engine, and an interface. The plurality of processing units is configured to process packets received from a computer network, and to perform an action on a received packet. The action is determined responsively to a lookup in a table of rules to determine a rule to be applied to the received packet. The TCAM engine has a plurality of TCAM banks defining respective subsets of a TCAM memory space to store the rules. The interface is configured to selectably associate the TCAM banks to the processing units. The association is configurable to allocate the subsets of the TCAM memory space to groups of the processing units to share the TCAM memory space by the processing units. | 07-31-2014 |
20140223092 | Apparatus and Method for Distributing a Search Key in a Ternary Memory Array - Separate key processing units generate different search keys based on a single master key received at a ternary memory array chip. A reference search key and selection logic are provided to reduce power dissipation in a global search key bus across the chip. The reference search key is the output of one of the key processing units and its bytes are compared with the output from each of the other key processing units. A select signal from each unit indicates which bytes match. Each matching byte at each key processing unit is blocked from changing corresponding bit line logic values across the chip, reducing the number of voltage switches occurring in the global search key bus. The select signal causes a selection module local to each superblock to select the matching byte(s) from the reference search key and non-matching byte(s) from the global search key bus to reconstitute the entire search key. | 08-07-2014 |
20140223093 | HYBRID DYNAMIC-STATIC ENCODER WITH OPTIONAL HIT AND/OR MULTI-HIT DETECTION - The hybrid dynamic-static encoder described herein may combine dynamic and static structural and logical design features that strategically partition dynamic nets and logic to substantially eliminate redundancy and thereby provide area, power, and leakage savings relative to a fully dynamic encoder with an equivalent logic delay. For example, the hybrid dynamic-static encoder may include identical top and bottom halves, which may be combined to produce final encoded index, hit, and multi-hit outputs. Each encoder half may use a dynamic net for each index bit with rows that match a search key dotted. If a row has been dotted to indicate that the row matches the search key, the dynamic nets associated therewith may be evaluated to reflect the index associated with the row. Accordingly, the hybrid dynamic-static encoder may have a reduced set of smaller dynamic nets that leverage redundant pull-down structures across the index, hit, and multi-hit dynamic nets. | 08-07-2014 |
20140281208 | Associative Look-up Instruction for a Processor Instruction Set Architecture - An associative look-up instruction for an instruction set architecture (ISA) of a processor and methods for use of an associative look-up instruction. The associative look-up instruction of the ISA specifies one or more fields within a data unit that are used as a pattern of bits for identifying data content in a memory structure to be loaded into hardware registers or other storage components of the ISA. Specified parameters of the associative operation may be explicit within the instruction or indirectly pointed to via hardware registers or other storage components of the ISA. The memory structure may be content addressable memory (CAM). | 09-18-2014 |
20140325138 | REPLICATING TAG ENTRIES FOR RELIABILITY ENHANCEMENT IN CACHE TAG ARRAYS - Technologies are generally described for exploiting program phase behavior to duplicate most recently and/or frequently accessed tag entries in a Tag Replication Buffer (TRB) to protect the information integrity of tag arrays in a processor cache. The reliability/effectiveness of microprocessor cache performance may be further improved by capturing/duplicating tags of dirty cache lines, exploiting the fact that detected error-corrupted clean cache lines can be recovered by L2 cache. A deterministic TRB replacement triggered early write-back scheme may provide full duplication and recovery of single-bit errors for tags of dirty cache lines. | 10-30-2014 |
20140325139 | LOW POWER, HASH-CONTENT ADDRESSABLE MEMORY ARCHITECTURE - A method comprises inputting a comparand word to a plurality of hash circuits, each hash circuit being responsive to a different portion of the comparand word. The hash circuits output a hash signal which is used to enable or precharge portions of a CAM. The comparand word is also input to the CAM. The CAM compares the comparand word in the precharged portions of the CAM and outputs information responsive to the comparing step. When used to process Internet addresses, the information output may be port information or an index from which port information may be located. A circuit is also disclosed as is a method of initializing the circuit. | 10-30-2014 |
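The power-saving idea in the entry above can be modeled in a few lines (the hash function, partition count, and class names are assumptions for illustration): a hash of part of the comparand enables only one CAM partition, and the comparison is confined to that partition.

```python
class HashCAM:
    """Toy hash-enabled CAM: a hash of part of the comparand selects which
    partition is precharged and searched, leaving the rest powered down."""

    def __init__(self, partitions=4):
        self.partitions = [dict() for _ in range(partitions)]

    def _hash(self, word: int) -> int:
        # Hash circuit sketch: fold the word by XOR and keep the low byte.
        return ((word ^ (word >> 8)) & 0xFF) % len(self.partitions)

    def insert(self, word: int, port: int):
        self.partitions[self._hash(word)][word] = port

    def lookup(self, word: int):
        # Only the enabled partition is compared against the comparand.
        return self.partitions[self._hash(word)].get(word)

cam = HashCAM()
cam.insert(0x0A0001, 3)   # e.g., address -> output port
```

Because insert and lookup apply the same hash, a stored word is always found in the single partition that gets enabled, while the other partitions never switch.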
20150012695 | APPARATUS AND METHOD FOR MULTI-MODE STORAGE - According to an example, multi-mode storage may include operating a first array including a first memory and a second array including a second memory in one or more modes of operation. The first memory may be a relatively denser memory compared to the second memory and the second memory may be a relatively faster memory compared to the first memory. The modes of operation may include a first mode of operation where the first array functions as the relatively denser memory compared to the second memory and the second array functions as the relatively faster memory compared to the first memory, a second mode of operation where the second array is operated as an automatic cache of a portion of a dataset, and a third mode of operation where a cache-tag functionality used to support the second mode of operation is instead used to provide a CAM. | 01-08-2015 |
20150039823 | TABLE LOOKUP APPARATUS USING CONTENT-ADDRESSABLE MEMORY BASED DEVICE AND RELATED TABLE LOOKUP METHOD THEREOF - A table lookup apparatus has a content-addressable memory (CAM) based device and a first cache. The CAM based device is used to store at least one table. The first cache is coupled to the CAM based device, and used to cache at least one input search key of the CAM based device and at least one corresponding search result. In addition, the table lookup apparatus may further include a plurality of second caches and an arbiter. Each second cache is used to cache at least one input search key of the CAM based device and at least one corresponding search result. The arbiter is coupled between the first cache and each of the second caches, and used to arbitrate access of the first cache between the second caches. | 02-05-2015 |
20150046643 | Clustering with Virtual Entities Using Associative Memories - A system including an associative memory and a first input device in communication with the associative memory. The first input device is configured to receive an attribute value relating to a corresponding attribute of a subject of interest to a user. The system also includes a processor, in communication with the first input device, and configured to generate a first entity using the attribute value. The system also includes an associative memory configured to perform an analogy query using the entity to retrieve a second entity whose attributes match some attributes of the first entity. The associative memory is further configured to cluster first data in the first entity and second data in the second entity. | 02-12-2015 |
20150052298 | MAPPING A LOOKUP TABLE TO PREFABRICATED TCAMS - Access is obtained to a truth table having a plurality of rows, each including a plurality of input bits and a plurality of output bits. At least some rows include don't-care inputs. At least some of the rows are clustered into a plurality of multi-row clusters. At least some of the multi-row clusters are assigned to ternary content-addressable memory modules of a prefabricated programmable memory array. Instructions for interconnecting the ternary content-addressable memory modules with a plurality of input pins of the prefabricated programmable memory array and a plurality of output pins of the prefabricated programmable memory array are specified in a data structure, in order to implement the truth table. | 02-19-2015 |
20150058551 | PICO ENGINE POOL TRANSACTIONAL MEMORY ARCHITECTURE - A transactional memory (TM) includes a selectable bank of hardware algorithm prework engines, a selectable bank of hardware lookup engines, and a memory unit. The memory unit stores result values (RVs), instructions, and lookup data operands. The transactional memory receives a lookup command across a bus from one of a plurality of processors. The lookup command includes a source identification value, data, a table number value, and a table set value. In response to the lookup command, the transactional memory selects one hardware algorithm prework engine and one hardware lookup engine to perform the lookup operation. The selected hardware algorithm prework engine modifies data included in the lookup command. The selected hardware lookup engine performs a lookup operation using the modified data and lookup operands provided by the memory unit. In response to performing the lookup operation, the transactional memory returns a result value and optionally an instruction. | 02-26-2015 |
20150127900 | TERNARY CONTENT ADDRESSABLE MEMORY UTILIZING COMMON MASKS AND HASH LOOKUPS - A ternary content-addressable memory (TCAM) is implemented based on other types of memory (e.g., SRAM) in conjunction with processing, including hashing functions. Such an H-TCAM may be used, for example, in implementation of routing equipment. A method of storing routing information on a network device, the routing information comprising a plurality of entries, each entry having a key value and a mask value, commences by identifying a plurality of groups, each group comprising a subset number of entries having a different common mask. The groups are identified by determining a subset number of entries that have a common mask value, meaning at least a portion of the mask value is the same for all entries of the subset number of entries. | 05-07-2015 |
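The common-mask grouping in the entry above admits a compact sketch (class names, the priority policy, and the example routes are assumptions): entries sharing a mask form a group, and each group is searched with one hash lookup on the masked key.

```python
from collections import defaultdict

class HTCAM:
    """TCAM emulated in ordinary memory: entries sharing a common mask
    are grouped, and each group is searched with a single exact-match
    (hash) lookup on the masked key."""

    def __init__(self):
        self.groups = defaultdict(dict)   # mask -> {masked key: result}

    def insert(self, key, mask, result):
        self.groups[mask][key & mask] = result

    def lookup(self, key):
        # Search denser masks first so more specific groups win (a simple
        # priority policy; real devices encode priority explicitly).
        for mask in sorted(self.groups, key=lambda m: -bin(m).count("1")):
            hit = self.groups[mask].get(key & mask)
            if hit is not None:
                return hit
        return None

h = HTCAM()
h.insert(0x0A000000, 0xFF000000, "eth0")   # 10.0.0.0/8
h.insert(0x0A0A0000, 0xFFFF0000, "eth1")   # 10.10.0.0/16
```

A lookup hashes the masked key once per distinct mask rather than comparing every entry, which is the memory-and-processing trade that lets ordinary SRAM stand in for a TCAM.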
20150309937 | Intelligence cache and intelligence terminal - The disclosure discloses an intelligence cache and an intelligence terminal, wherein the intelligence cache comprises: a general interface, configured to receive configuration information and/or control information and/or data information from a core or a bus, and return target data; a software define and reconfiguration unit, configured to define a memory as a required cache memory according to the configuration information; a control unit, configured to control writing and reading of the cache memory and monitor instructions and data streams in real time; a memory unit, composed of a number of memory modules and configured to cache data, where the required cache memory is formed by memory modules according to the definition of the software define and reconfiguration unit; and an intelligence processing unit, configured to process input and output data and to transfer, convert and operate on data among multiple structures defined in the control unit. The disclosure can realize an efficient memory system according to the operating status of software, the features of tasks to be executed and the features of data structures, through the flexible organization and management of the control unit and the close cooperation of the intelligence processing unit. | 10-29-2015 |
20150326480 | CONDITIONAL ACTION FOLLOWING TCAM FILTERS - A method for providing a conditional action following a TCAM lookup is disclosed. The method includes obtaining data; generating a lookup key from the data; performing a TCAM lookup using the key; in the event the TCAM lookup generates a match, performing a test to determine whether a condition is associated with the action of that match entry; in the event there is a condition, evaluating said condition; and in the event that said condition is satisfied, performing the conditional action. The data may be from a communications packet header, the condition evaluated may be one of packet length or Time to Live (TTL) value, and the action taken may be one of dropping or forwarding a communications packet. The method is particularly useful for reducing the quantity of entries required in a TCAM compared with TCAM filters known in the art. | 11-12-2015 |
20150333929 | DYNAMIC TERNARY CONTENT-ADDRESSABLE MEMORY CARVING - Example embodiments of the present disclosure describe mechanisms for dynamic carving (i.e., applying a revised template for ternary content addressable memory (TCAM) in a network switch while the TCAM remains operational). The TCAM comprises a plurality of TCAM allocation units (TAUs), and entries of data in the TCAM corresponding to forwarding modes are arranged according to an original template mapping each forwarding mode to a subset of TAU(s). An important characteristic of such methods and systems is the ability to avoid rebooting the network switch. The mechanisms include a "compression" step involving relocating entries of data in the TCAM according to an intermediate template, wherein the intermediate template comprises at least one unallocated TAU for accommodating the revised template. Furthermore, the mechanisms include a "decompression" step (after the "compression" step), involving relocating the entries of data in the TCAM according to the revised template. | 11-19-2015 |
20150339222 | CONTENT ADDRESSABLE MEMORY AND SEMICONDUCTOR DEVICE - In a memory, multiple pieces of entry data sorted in ascending or descending order are stored in association with addresses. With the whole range of addresses storing the multiple pieces of entry data as an initial search area, a search circuit repeatedly performs a search operation that compares the entry data stored at the central address of the search area with the search data, outputs the address as a search result in the case of a match, and narrows the search area for the next search based on a magnitude comparison result in the case of a mismatch. | 11-26-2015 |
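The search operation in the entry above is a hardware binary search over sorted entries, which can be sketched in software (the list-based model is a simplification of the memory/address arrangement):

```python
def cam_search(entries, search_data):
    """Binary-search CAM sketch: entries are stored sorted by address;
    each step compares the entry at the centre of the remaining search
    area, returning its address on a match or halving the area on a
    mismatch based on the magnitude comparison."""
    lo, hi = 0, len(entries) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if entries[mid] == search_data:
            return mid            # address of the matching entry
        if entries[mid] < search_data:
            lo = mid + 1          # keep the upper half
        else:
            hi = mid - 1          # keep the lower half
    return None                   # no entry matches

entries = [3, 8, 15, 22, 40]      # entry data sorted in ascending order
```

Each iteration halves the search area, so a memory of N sorted entries is resolved in about log2(N) comparisons instead of one parallel compare per entry.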
20150339240 | ATOMICALLY UPDATING TERNARY CONTENT ADDRESSABLE MEMORY-BASED ACCESS CONTROL LISTS - Embodiments described herein provide techniques for atomically updating a ternary content addressable memory (TCAM)-based access control list (ACL). According to one embodiment, a current version bit of the ACL is determined. The current version bit indicates that a rule in the ACL is active if the version flag in the rule matches the current version bit. Through these techniques, a first set of rules can be modified to create a second set of rules (e.g., by insertions, deletions, and replacements). | 11-26-2015 |
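The version-bit mechanism in the entry above can be sketched as follows (class and method names are hypothetical; the TCAM is modeled as a plain list of tagged rules): the new rule set is staged under the inactive version, then a single flip of the version bit activates it atomically.

```python
class VersionedACL:
    """Sketch of version-bit atomic updates: every rule carries a version
    flag, and only rules whose flag matches the current version bit are
    active. An update installs rules under the opposite version and then
    flips the bit in one step."""

    def __init__(self):
        self.current = 0
        self.rules = []   # list of (version_flag, rule)

    def active_rules(self):
        return [r for v, r in self.rules if v == self.current]

    def atomic_update(self, new_rules):
        other = 1 - self.current
        # Stage the new rule set under the inactive version...
        self.rules.extend((other, r) for r in new_rules)
        # ...then activate it with a single version-bit flip.
        self.current = other
        # Stale old-version rules can now be reclaimed lazily.
        self.rules = [(v, r) for v, r in self.rules if v == self.current]

acl = VersionedACL()
acl.atomic_update(["permit tcp any", "deny ip any"])
```

No lookup ever observes a half-installed rule set: before the flip only old-version rules are active, and after it only new-version rules are.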
20150341270 | SUPPORTING ACCESS CONTROL LIST RULES THAT APPLY TO TCP SEGMENTS BELONGING TO 'ESTABLISHED' CONNECTION - Embodiments presented herein provide a TCAM-based access control list that supports disjunction operations in rules. According to one embodiment, a numeric range table is tied to the access control list. Each entry in the numeric range table includes an encode field that provides for scanning TCP flags in a TCP header of an incoming Ethernet frame. Further, each entry provides a first mask and a second mask used to test for desired set and unset TCP flags in a given frame. Each entry also provides an operation field that performs a disjunction operation that compares the first mask, the second mask, and set TCP flags in a given frame. | 11-26-2015 |
20150341316 | ACCESS CONTROL LIST-BASED PORT MIRRORING TECHNIQUES - Embodiments presented herein describe techniques for selecting incoming network frames to be mirrored using an access control list. According to one embodiment, an incoming frame is received. Upon determining that the incoming frame matches an entry in the access control list, a mirror field of the entry is evaluated. The mirror field identifies at least one mirroring action to perform on the frame. The identified mirroring action is performed on the frame. | 11-26-2015 |
20150341364 | ATOMICALLY UPDATING TERNARY CONTENT ADDRESSABLE MEMORY-BASED ACCESS CONTROL LISTS - Embodiments described herein provide techniques for atomically updating a ternary content addressable memory (TCAM)-based access control list (ACL). According to one embodiment, a current version bit of the ACL is determined. The current version bit indicates that a rule in the ACL is active if the version flag in the rule matches the current version bit. Through these techniques, a first set of rules can be modified to create a second set of rules (e.g., by insertions, deletions, and replacements). | 11-26-2015 |
20150341365 | ACCESS CONTROL LIST-BASED PORT MIRRORING TECHNIQUES - Embodiments presented herein describe techniques for selecting incoming network frames to be mirrored using an access control list. According to one embodiment, an incoming frame is received. Upon determining that the incoming frame matches an entry in the access control list, a mirror field of the entry is evaluated. The mirror field identifies at least one mirroring action to perform on the frame. The identified mirroring action is performed on the frame. | 11-26-2015 |
20150358290 | USE OF STATELESS MARKING TO SPEED UP STATEFUL FIREWALL RULE PROCESSING - A novel method for stateful packet classification that uses hardware resources for performing stateless lookups and software resources for performing stateful connection flow handshaking is provided. To classify an incoming packet from a network, some embodiments perform stateless look up operations for the incoming packet in hardware and forward the result of the stateless look up to the software. The software in turn uses the result of the stateless look up to perform the stateful connection flow handshaking and to determine the result of the stateful packet classification. | 12-10-2015 |
20160026391 | Content-Addressable Memory Device - Techniques described herein are generally related to storing and retrieving data from a content-addressable memory (CAM). A data value to be stored in the CAM may be received, where the data value has two or more bits. The CAM may include a plurality of memory sets. An index corresponding to the data value may be determined. The index may be determined based on a subset of bits of the data value that correspond to an index bit set. A memory set of the CAM may be identified based on the determined index and the data value may be stored in a storage unit of the identified memory set. | 01-28-2016 |
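The index-bit-set selection above can be shown concretely: a subset of the data value's bit positions is extracted and concatenated to pick a memory set. The bit positions and set count below are illustrative assumptions; the patent does not fix them.

```python
# Sketch: choose a memory set of the CAM by extracting the bits of the
# data value at the positions in an assumed index bit set.

INDEX_BITS = (0, 3)        # positions of the index bit set (assumed)
NUM_SETS = 4               # 2 ** len(INDEX_BITS) memory sets

def set_index(value: int) -> int:
    """Concatenate the selected bits into a memory-set index."""
    idx = 0
    for pos in INDEX_BITS:
        idx = (idx << 1) | ((value >> pos) & 1)
    return idx

def store(cam: list, value: int) -> int:
    """Store value in the memory set chosen by its index bits."""
    idx = set_index(value)
    cam[idx].append(value)
    return idx

cam = [[] for _ in range(NUM_SETS)]
store(cam, 0b1010)   # lands in the set chosen by bits 0 and 3
```

Because lookups recompute the same index from the query value, only one memory set needs to be searched instead of the whole CAM.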
20160048449 | PARALLEL TURBINE TERNARY CONTENT ADDRESSABLE MEMORY FOR HIGH-SPEED APPLICATIONS - A parallel turbine ternary content addressable memory includes one or more atoms in each of one or more rows, wherein each of the one or more atoms includes a memory with N addresses and a width of M logical lookup entries, wherein N and M are integers, one or more result registers, each with a width of M, wherein a number of the one or more result registers equals a number of one or more keys each with a length of N, and a read pointer configured to cycle through a row of the N addresses per clock cycle for comparison between the M logical entries and the one or more keys with a result of the comparison stored in an associated result register for each of the one or more keys. | 02-18-2016 |
20160077957 | DECODING TECHNIQUES USING A PROGRAMMABLE PRIORITY ENCODER - A system, computer-readable media, and methods are disclosed for building a decoding table. The system may include one or more registers configured to store a data value based on an order in which one or more lengths were obtained. The system may also include a programmable priority encoder configured to scan the one or more registers for the data value. Further, the system may include a memory configured to store, based on locations of the data value in the one or more registers, at least one of encoding values or letters. | 03-17-2016 |
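The priority-encoder step above can be reduced to a small sketch: scan the registers in priority order and report the first position holding the target data value. The surrounding decoding-table construction is omitted, and the function name is an assumption.

```python
# Minimal sketch of a priority-encoder scan: return the position of the
# first (highest-priority) register that holds the target data value.

def priority_encode(registers, data_value=1):
    """Scan registers in priority order for data_value."""
    for position, contents in enumerate(registers):
        if contents == data_value:
            return position
    return None  # data value not present in any register
```

Making the encoder programmable, as the abstract describes, means the data value scanned for (and thus which register "wins") can be changed per decoding pass rather than fixed in hardware.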
20160085473 | Asynchronous Processing of Mapping Information - Functionality is disclosed herein for providing an asynchronous processing service for processing storage mapping information. The asynchronous processing service is configured to receive a storage request including identification of a storage object and a description of a storage operation, perform the storage operation for the storage object in response to receiving the storage request, and asynchronously update mapping information for the performed storage operation. | 03-24-2016 |
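The decoupling described above can be sketched with a work queue: the storage operation completes synchronously while the mapping-information update is queued and applied by a background step. All names below are illustrative assumptions.

```python
# Rough sketch: perform the storage operation immediately, defer the
# mapping-information update to an asynchronous drain step.

import queue

def handle_request(store, mapping_queue, obj_id, operation, payload):
    store[obj_id] = payload                  # perform the operation now
    mapping_queue.put((obj_id, operation))   # defer the mapping update

def drain_mapping_updates(mapping_queue, mapping):
    # Background worker: apply queued mapping updates out of band.
    while not mapping_queue.empty():
        obj_id, operation = mapping_queue.get()
        mapping[obj_id] = operation
```

The request path never waits on the mapping store, which is the point of processing the mapping information asynchronously.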
20160103611 | SEARCHING MEMORY FOR A SEARCH KEY - To produce output from a memory block, a first index is used to access a pointer, a mode select and a function select from a first memory. The pointer, the mode select and the function select are used to produce a second index. The pointer is used to produce the second index when the mode select is a first value. A function is used to produce the second index when the mode select is a second value. The function select identifies a function to be used to produce the second index. The second index is used to access output from a second memory. | 04-14-2016 |
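The two-stage lookup above can be sketched directly: the first memory yields a pointer, a mode select, and a function select; the second index is either the pointer (mode 0) or the output of the selected function (mode 1). The function table below is an illustrative assumption.

```python
# Hypothetical sketch of the two-stage lookup. FUNCTIONS stands in for
# whatever index-producing functions the function select chooses among.

FUNCTIONS = [
    lambda key: key & 0xFF,          # function 0: low byte of the key
    lambda key: (key >> 8) & 0xFF,   # function 1: next byte of the key
]

def second_index(first_memory, first_index, key):
    pointer, mode_select, function_select = first_memory[first_index]
    if mode_select == 0:
        return pointer                      # direct pointer mode
    return FUNCTIONS[function_select](key)  # computed-index mode

first_mem = {5: (42, 0, 0), 6: (0, 1, 1)}
second_index(first_mem, 5, 0x1234)   # pointer mode -> 42
second_index(first_mem, 6, 0x1234)   # function 1 -> 0x12
```

The second index then addresses the second memory; the mode select lets some entries resolve with a stored pointer while others compute their target from the search key.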
20160105363 | MEMORY SYSTEM FOR MULTIPLE CLIENTS - Output is produced from a content addressable memory block. Bus select logic is configured to operate on data from a selected client bus from a plurality of client buses. Each client bus includes a key bus section and an operation bus section. A plurality of output indices is stored within a key memory. Each output index in the plurality of output indices is stored with an associated key. A key memory index is generated based on a search key received from the key bus section for the selected client bus. The key memory index is used to access from the key memory an output index from the plurality of output indices. The output index is output to a priority bus associated with the selected client bus output logic. | 04-14-2016 |
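The shared-CAM access above can be sketched as: bus-select logic picks one client bus, the search key from its key bus section is looked up in the key memory, and the stored output index is driven onto that client's priority bus. Names and data shapes are assumptions for illustration.

```python
# Hypothetical sketch of one access by the selected client. The key
# memory maps stored keys to output indices; each client gets the result
# on its own priority bus slot.

def cam_access(client_buses, selected, key_memory, priority_buses):
    search_key = client_buses[selected]["key"]   # key bus section
    output_index = key_memory.get(search_key)    # key -> output index
    priority_buses[selected] = output_index      # per-client result bus
    return output_index
```

Multiplexing the key memory behind bus-select logic is what lets several clients share one CAM block while keeping per-client result paths.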
20160140050 | METHOD AND SYSTEM FOR COMPRESSING DATA FOR A TRANSLATION LOOK ASIDE BUFFER (TLB) - An embodiment of the present disclosure includes a method for compressing data for a translation look aside buffer (TLB). The method includes: receiving an identifier at a content addressable memory (CAM), the identifier having a first bit length; compressing the identifier based on a location within the CAM the identifier is stored, the compressed identifier having a second bit length, the second bit length being smaller than the first bit length; and mapping at least the compressed identifier to a physical address in a buffer. | 05-19-2016 |
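The location-based compression above can be sketched simply: the wide identifier is stored in a small CAM, and its row number serves as the compressed identifier carried in the TLB. The slot count (and thus the compressed bit length) below is an illustrative assumption.

```python
# Sketch: compress a wide identifier to the CAM row where it is stored.
# With 8 slots, the compressed identifier needs only 3 bits.

class IdCompressor:
    def __init__(self, slots=8):
        self.cam = [None] * slots

    def compress(self, identifier):
        # Reuse the existing row if the identifier is already stored.
        if identifier in self.cam:
            return self.cam.index(identifier)
        row = self.cam.index(None)    # first free row (raises when full)
        self.cam[row] = identifier
        return row
```

Because the CAM can recover the full identifier from the row, the TLB only needs to store the short row number alongside each physical-address mapping, which is where the space saving comes from.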
20160179675 | GRANULAR CACHE REPAIR | 06-23-2016 |
20160202932 | System and Method for Achieving Atomicity In Ternary Content-Addressable Memories | 07-14-2016 |