Patent application number | Description | Published |
--- | --- | --- |
20080222358 | METHOD AND SYSTEM FOR PROVIDING AN IMPROVED STORE-IN CACHE - A system and method of providing a cache having a store-in policy that affords the advantages of store-in operation while protecting against soft errors in locally modified data, which would otherwise preclude the use of a store-in cache when reliability is paramount. The mechanism includes a store-in L1 cache; at least one higher level of the storage hierarchy; an ancillary store-only cache (ASOC) that holds the most recently stored-to lines of the L1 cache; and a cache controller that controls storing of data to, and recovery of data from, the ASOC, such that ASOC data is used only if parity errors are encountered in the store-in L1 cache. | 09-11-2008 |
20080263326 | METHOD AND APPARATUS FOR AN EFFICIENT MULTI-PATH TRACE CACHE DESIGN - A novel trace cache design and organization to efficiently store and retrieve multi-path traces. The goal is a trace cache that can store multi-path traces without significant duplication among the traces while also reducing their effective access latency. | 10-23-2008 |
20080270702 | METHOD AND APPARATUS FOR AN EFFICIENT MULTI-PATH TRACE CACHE DESIGN - A novel trace cache design and organization to efficiently store and retrieve multi-path traces. The goal is a trace cache that can store multi-path traces without significant duplication among the traces while also reducing their effective access latency. | 10-30-2008 |
20080270766 | Method To Reduce The Number Of Load Instructions Searched By Stores and Snoops In An Out-Of-Order Processor - A method for reducing the number of load instructions in the load reorder queue (LRQ) that are searched when a load instruction is executed by a processor, including dispatching the load instructions; inserting the load instructions in the LRQ in program order; clearing a load received data field; executing the load instructions; checking LRQ entries; re-executing the load instruction of the matching LRQ entry; continuing execution; getting the load data; setting the load received data field; comparing the load sequence number (LSQN) of each load instruction to the contents of a snoop_safe register; ANDing all the load received data bits if the LSQN is greater in magnitude than the snoop_safe; setting the snoop_safe register to the LSQN of the load instruction; searching the LRQ entry; and setting a load_peril_snoop register to the LRQ index value where the first load instruction younger than the snoop_safe was found. | 10-30-2008 |
20080276077 | Method To Reduce The Number Of Load Instructions Searched By Stores And Snoops In An Out-Of-Order Processor - A method for reducing the number of load instructions in the load reorder queue (LRQ) that are searched when a load instruction is executed by a processor, including dispatching the load instructions; inserting the load instructions in the LRQ in program order; clearing a load received data field; executing the load instructions; checking LRQ entries; re-executing the load instruction of the matching LRQ entry; continuing execution; getting the load data; setting the load received data field; comparing the load sequence number (LSQN) of each load instruction to the contents of a snoop_safe register; ANDing all the load received data bits if the LSQN is greater in magnitude than the snoop_safe; setting the snoop_safe register to the LSQN of the load instruction; searching the LRQ entry; and setting a load_peril_snoop register to the LRQ index value where the first load instruction younger than the snoop_safe was found. | 11-06-2008 |
20080313445 | METHOD AND SYSTEM FOR PREVENTING LIVELOCK DUE TO COMPETING UPDATES OF PREDICTION INFORMATION - A system to prevent livelock. An outcome of an event is predicted to form an event outcome prediction. The event outcome prediction is compared with a correct value for a datum to be accessed. When the outcome of the event is mispredicted, the instruction is appended with the real event outcome to form an appended instruction. A prediction override bit is set on the appended instruction. Then, the appended instruction is executed with the real event outcome. | 12-18-2008 |
20090083492 | COST-CONSCIOUS PRE-EMPTIVE CACHE LINE DISPLACEMENT AND RELOCATION MECHANISMS - A hardware-based method for determining when to migrate cache lines to the cache bank closest to the requesting processor, avoiding the remote-access penalty for future requests. In a preferred embodiment, decay counters are enhanced and used to weigh the cost of retaining a line against that of replacing it, without losing the data. In one embodiment, minimization of off-chip communication is sought; this may be particularly useful in a CMP environment. | 03-26-2009 |
20090089602 | METHOD AND SYSTEM OF PEAK POWER ENFORCEMENT VIA AUTONOMOUS TOKEN-BASED CONTROL AND MANAGEMENT - A method of power management for a system of connected components includes initializing a token allocation map across the connected components, wherein each component is assigned a power budget determined by its number of allocated tokens in the token allocation map; monitoring utilization sensor inputs and command state vector inputs; determining, at first periodic time intervals, a current performance level, a current power consumption level, and an assigned power budget for the system based on those inputs; and determining, at second periodic time intervals, a token re-allocation map based on the current performance level, the current power consumption level, and the assigned power budget, according to a re-assigned power budget of at least one of the connected components, while enforcing a power consumption limit based on the total number of allocated tokens in the system. (Sketch below.) | 04-02-2009 |
20090144492 | STRUCTURE FOR IMPLEMENTING DYNAMIC REFRESH PROTOCOLS FOR DRAM BASED CACHE - A hardware description language (HDL) design structure embodied on a machine-readable data storage medium includes elements that, when processed in a computer-aided design system, generate a machine-executable representation of a device for implementing dynamic refresh protocols for DRAM-based cache. The HDL design structure further includes a DRAM cache partitioned into a refreshable portion and a non-refreshable portion, and a cache controller configured to assign incoming individual cache lines to one of the two portions based on a usage history of the cache lines: lines corresponding to data with a usage history below a defined frequency are assigned by the controller to the refreshable portion, and lines corresponding to data with a usage history at or above the defined frequency are assigned to the non-refreshable portion. | 06-04-2009 |
20090144503 | METHOD AND SYSTEM FOR INTEGRATING SRAM AND DRAM ARCHITECTURE IN SET ASSOCIATIVE CACHE - A method of integrating a hybrid architecture in a set associative cache having a first type of memory structure for one or more ways in each congruence class and a second type of memory structure for the remaining ways of the congruence class. The method includes determining whether a memory access request results in a cache hit or a cache miss; in the event of a cache miss, determining whether the LRU way of the first-type memory structure is also the LRU way of the entire congruence class and, if not, copying the contents of the LRU way of the first-type memory structure into the LRU way of the entire congruence class; filling the LRU way of the first-type memory structure with the new cache line; and updating the LRU bits according to the result of the memory access request. (Sketch below.) | 06-04-2009 |
20090144506 | METHOD AND SYSTEM FOR IMPLEMENTING DYNAMIC REFRESH PROTOCOLS FOR DRAM BASED CACHE - A method for implementing dynamic refresh protocols for DRAM-based cache includes partitioning a DRAM cache into a refreshable portion and a non-refreshable portion, and assigning incoming individual cache lines to one of the two portions based on a usage history of the cache lines. Lines corresponding to data with a usage history below a defined frequency are assigned to the refreshable portion, and lines with a usage history at or above the defined frequency are assigned to the non-refreshable portion. (Sketch below.) | 06-04-2009 |
20090193186 | EMBEDDED DRAM HAVING MULTI-USE REFRESH CYCLES - An embedded DRAM (eDRAM) having multi-use refresh cycles is described. In one embodiment, there is a multi-level cache memory system that comprises a pending write queue configured to receive pending write operations from at least one of the levels of cache. A prefetch queue is configured to receive prefetch operations for at least one of the levels of cache. A refresh controller is configured to determine the addresses within each level of cache that are due for a refresh. The refresh controller is configured to assert a refresh write-in signal to write data, supplied from the pending write queue, to an address due for a refresh rather than refreshing the existing data; it asserts this signal in response to a determination that there is pending data to supply to that address. The refresh controller is further configured to assert a refresh read-out signal to send refreshed data to the prefetch queue of a higher level of cache as a prefetch operation, in response to a determination that the refreshed data is useful. | 07-30-2009 |
20090193187 | DESIGN STRUCTURE FOR AN EMBEDDED DRAM HAVING MULTI-USE REFRESH CYCLES - A design structure for an embedded DRAM (eDRAM) having multi-use refresh cycles is described. In one embodiment, there is a multi-level cache memory system that comprises a pending write queue configured to receive pending write operations from at least one of the levels of cache. A prefetch queue is configured to receive prefetch operations for at least one of the levels of cache. A refresh controller is configured to determine the addresses within each level of cache that are due for a refresh. The refresh controller is configured to assert a refresh write-in signal to write data, supplied from the pending write queue, to an address due for a refresh rather than refreshing the existing data; it asserts this signal in response to a determination that there is pending data to supply to that address. The refresh controller is further configured to assert a refresh read-out signal to send refreshed data to the prefetch queue of a higher level of cache as a prefetch operation, in response to a determination that the refreshed data is useful. | 07-30-2009 |
20110055515 | REDUCING BROADCASTS IN MULTIPROCESSORS - Disclosed is an apparatus to reduce broadcasts in multiprocessors, including a plurality of processors; a plurality of memory caches associated with the processors; a plurality of translation lookaside buffers (TLBs) associated with the processors; and a physical memory shared by the processors, memory caches, and TLBs. Each TLB includes a plurality of entries for translation of a page of addresses from virtual memory to physical memory, each TLB entry having page characterization information indicating whether the page is private to one processor or shared with more than one processor. Also disclosed are a computer program product and a method to reduce broadcasts in multiprocessors. (Sketch below.) | 03-03-2011 |
20110078412 | Processor Core Stacking for Efficient Collaboration - A mechanism is provided for improving the performance and efficiency of multi-core processors. A system controller in a data processing system determines an operational function for each primary processor core in a set of primary processor cores in a primary processor core logic layer and for each secondary processor core in a set of secondary processor cores in a secondary processor core logic layer, thereby forming a set of determined operational functions. The system controller then generates an initial configuration, based on the set of determined operational functions, for initializing the set of primary processor cores and the set of secondary processor cores in the three-dimensional processor core architecture. The initial configuration indicates how at least one primary processor core of the set of primary processor cores collaborates with at least one secondary processor core of the set of secondary processor cores. | 03-31-2011 |
20110161780 | METHOD AND SYSTEM FOR PROVIDING AN IMPROVED STORE-IN CACHE - A hardened store-in cache system includes a store-in cache having lines of a first linesize stored with checkbits, wherein the checkbits include byte-parity bits, and an ancillary store-only cache (ASOC) that holds copies of the most recently stored-to lines of the store-in cache. The ASOC includes fewer lines than the store-in cache, each line of the ASOC having the first linesize stored with the checkbits. | 06-30-2011 |
20110225401 | PREFETCHING BRANCH PREDICTION MECHANISMS - A method comprising receiving a branch instruction, decoding a branch address and the branch instruction, executing a branch action associated with the branch address, determining whether a branch associated with the branch action was taken, and saving an identifier of the branch instruction and an indicator that the branch action was taken in a prefetch history table, responsive to determining that the branch associated with the branch action was taken. | 09-15-2011 |
20120210073 | WRITE-THROUGH CACHE OPTIMIZED FOR DEPENDENCE-FREE PARALLEL REGIONS - An apparatus, method and computer program product for improving performance of a parallel computing system. A first hardware local cache controller associated with a first local cache memory device of a first processor detects an occurrence of false sharing of a first cache line by a second processor running the program code and allows the false sharing of the first cache line by the second processor. The false sharing of the first cache line occurs upon the updating of a first portion of the first cache line in the first local cache memory device by the first hardware local cache controller and the subsequent updating of a second portion of the first cache line in a second local cache memory device by a second hardware local cache controller. | 08-16-2012 |
20120284463 | PREDICTING CACHE MISSES USING DATA ACCESS BEHAVIOR AND INSTRUCTION ADDRESS - In a decode stage of a hardware processor pipeline, one particular instruction of a plurality of instructions is decoded. It is determined that the particular instruction requires a memory access. Responsive to that determination, it is predicted whether the memory access will result in a cache miss. The predicting includes accessing one of a plurality of entries in a pattern history table stored as a hardware table in the decode stage; the access is based, at least in part, upon at least the most recent entry in a global history buffer. The pattern history table stores a plurality of predictions, and the global history buffer stores the actual results of previous memory accesses as cache hits or cache misses. Additional steps include scheduling at least one additional one of the plurality of instructions in accordance with the prediction, and updating the pattern history table and the global history buffer after actual execution of the particular instruction in an execution stage of the pipeline, to reflect whether the prediction was accurate. (Sketch below.) | 11-08-2012 |
20120331232 | WRITE-THROUGH CACHE OPTIMIZED FOR DEPENDENCE-FREE PARALLEL REGIONS - An apparatus and computer program product for improving performance of a parallel computing system. A first hardware local cache controller associated with a first local cache memory device of a first processor detects an occurrence of false sharing of a first cache line by a second processor running the program code and allows the false sharing of the first cache line by the second processor. The false sharing of the first cache line occurs upon the updating of a first portion of the first cache line in the first local cache memory device by the first hardware local cache controller and the subsequent updating of a second portion of the first cache line in a second local cache memory device by a second hardware local cache controller. | 12-27-2012 |
20130007423 | PREDICTING OUT-OF-ORDER INSTRUCTION LEVEL PARALLELISM OF THREADS IN A MULTI-THREADED PROCESSOR - Systems and methods for predicting out-of-order instruction-level parallelism (ILP) of threads being executed in a multi-threaded processor, and prioritizing scheduling accordingly, are described herein. One aspect provides for tracking completion of instructions using a global completion table having a head segment and a tail segment; storing prediction values for each instruction in a prediction table indexed via instruction identifiers associated with each instruction, a prediction value being configured to indicate whether an instruction is predicted to issue from the head segment or the tail segment; and predicting that threads with more instructions issuing from the tail segment have a higher degree of out-of-order instruction-level parallelism. Other embodiments and aspects are also described herein. | 01-03-2013 |
20130013977 | ADAPTIVE MULTI-BIT ERROR CORRECTION IN ENDURANCE LIMITED MEMORIES - Multi-bit stuck-at-fault error recovery can be enabled by an adaptive multi-bit error correction method, in which the overhead of the error correction hardware is reduced without affecting the lifetime of the memory device. Error correction logic hardware is decoupled from the memory blocks. An error correction logic block is partitioned such that its entries support different numbers of correctable errors, based on the probability of different numbers of errors occurring in different memory blocks. Faulty memory blocks are mapped to appropriate error correction logic entries; the mapping can be one-to-one or many-to-one, depending on the embodiment. The adaptive partitioning of the error correction logic entries can be configured to match the projected statistical distribution of errors across memory blocks, and can reduce the total error correction logic overhead, provide sufficient error correction, and/or extend the lifetime of the memory device. | 01-10-2013 |
20130024662 | RELAXATION OF SYNCHRONIZATION FOR ITERATIVE CONVERGENT COMPUTATIONS - Systems and methods are disclosed that allow atomic updates to global data to be at least partially eliminated, reducing synchronization overhead in parallel computing. A compiler analyzes the data to be processed and selectively permits unsynchronized data transfer for at least one type of data. A programmer may provide a hint to expressly identify the types of data that are candidates for unsynchronized transfer. In one embodiment, the synchronization overhead is reduced by generating an application program that selectively substitutes code for unsynchronized data transfer in place of a subset of the code for synchronized data transfer. In another embodiment, the overhead is reduced through a combination of software and hardware, using relaxation data registers and decoders that collectively convert a subset of commands for synchronized data transfer into commands for unsynchronized data transfer. | 01-24-2013 |
20130205116 | MULTI-THREADED PROCESSOR INSTRUCTION BALANCING THROUGH INSTRUCTION UNCERTAINTY - A computer-implemented method for instruction execution in a pipeline includes fetching, in the pipeline, a plurality of instructions, wherein the plurality of instructions includes a plurality of branch instructions; assigning a branch uncertainty to each of the plurality of branch instructions; assigning, to each of the plurality of instructions, an instruction uncertainty that is the summation of the branch uncertainties of older unresolved branches; and balancing the instructions in the pipeline based on a current summation of instruction uncertainty. (Sketch below.) | 08-08-2013 |
20130205118 | MULTI-THREADED PROCESSOR INSTRUCTION BALANCING THROUGH INSTRUCTION UNCERTAINTY - A computer system for instruction execution includes a processor having a pipeline. The system is configured to perform a method including fetching, in the pipeline, a plurality of instructions, wherein the plurality of instructions includes a plurality of branch instructions; assigning a branch uncertainty to each of the plurality of branch instructions; assigning, to each of the plurality of instructions, an instruction uncertainty that is the summation of the branch uncertainties of older unresolved branches; and balancing the instructions in the pipeline based on a current summation of instruction uncertainty. | 08-08-2013 |
20140019689 | METHODS OF CACHE PRELOADING ON A PARTITION OR A CONTEXT SWITCH - A scheme referred to as a "Region-based cache restoration prefetcher" (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher that reduces the "cold" cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown. (Sketch below.) | 01-16-2014 |
20140258621 | NON-DATA INCLUSIVE COHERENT (NIC) DIRECTORY FOR CACHE - Embodiments relate to a non-data inclusive coherent (NIC) directory for a symmetric multiprocessor (SMP) of a computer. An aspect includes determining a first eviction entry of a highest-level cache in a multilevel caching structure of a first processor node of the SMP. Another aspect includes determining that the NIC directory is not full. Another aspect includes determining that the first eviction entry of the highest-level cache is owned by a lower-level cache in the multilevel caching structure. Another aspect includes, based on the NIC directory not being full and based on the first eviction entry of the highest-level cache being owned by the lower-level cache, installing an address of the first eviction entry of the highest-level cache in a first new entry in the NIC directory. Another aspect includes invalidating the first eviction entry in the highest-level cache. (Sketch below.) | 09-11-2014 |
20150058569 | NON-DATA INCLUSIVE COHERENT (NIC) DIRECTORY FOR CACHE - Embodiments relate to a non-data inclusive coherent (NIC) directory for a symmetric multiprocessor (SMP) of a computer. An aspect includes determining a first eviction entry of a highest-level cache in a multilevel caching structure of a first processor node of the SMP. Another aspect includes determining that the NIC directory is not full. Another aspect includes determining that the first eviction entry of the highest-level cache is owned by a lower-level cache in the multilevel caching structure. Another aspect includes, based on the NIC directory not being full and based on the first eviction entry of the highest-level cache being owned by the lower-level cache, installing an address of the first eviction entry of the highest-level cache in a first new entry in the NIC directory. Another aspect includes invalidating the first eviction entry in the highest-level cache. | 02-26-2015 |
20150074356 | PROCESSOR WITH MEMORY-EMBEDDED PIPELINE FOR TABLE-DRIVEN COMPUTATION - A processor and a method implemented by the processor to obtain computation results are described. The processor includes a unified reuse table embedded in a processor pipeline, the unified reuse table including a plurality of entries, each entry of the plurality of entries corresponding to a computation instruction or a set of computation instructions. The processor also includes a functional unit to perform a computation based on a corresponding instruction. (Sketch below.) | 03-12-2015 |
20150074381 | PROCESSOR WITH MEMORY-EMBEDDED PIPELINE FOR TABLE-DRIVEN COMPUTATION - A processor and a method implemented by the processor to obtain computation results are described. The processor includes a unified reuse table embedded in a processor pipeline, the unified reuse table including a plurality of entries, each entry of the plurality of entries corresponding to a computation instruction or a set of computation instructions. The processor also includes a functional unit to perform a computation based on a corresponding instruction. | 03-12-2015 |
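
The sketches below illustrate several of the mechanisms listed above. Each is a simplified software model in Python, not the patented hardware; all names, constants, and policy details are assumptions unless the abstract states them.

For 20090089602: each component's power budget is its token count, and the fixed pool size enforces the system-wide cap. The proportional re-allocation policy and all values here are assumed:

```python
# Token-based peak-power enforcement: a component's power budget is the
# number of tokens it holds, and the fixed pool size caps system power.

TOTAL_TOKENS = 100  # system-wide cap (assumed value)

def initial_allocation(components):
    """Evenly split the token pool across components."""
    share = TOTAL_TOKENS // len(components)
    return {c: share for c in components}

def reallocate(alloc, utilization):
    """Shift tokens toward busier components; the total never grows."""
    total_util = sum(utilization.values()) or 1
    new_alloc = {c: max(1, round(TOTAL_TOKENS * utilization[c] / total_util))
                 for c in alloc}
    while sum(new_alloc.values()) > TOTAL_TOKENS:   # trim rounding overshoot
        biggest = max(new_alloc, key=new_alloc.get)
        new_alloc[biggest] -= 1
    return new_alloc

alloc = initial_allocation(["core0", "core1", "memory"])
alloc = reallocate(alloc, {"core0": 0.7, "core1": 0.2, "memory": 0.1})
print(alloc, "total:", sum(alloc.values()))
```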
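
For 20090144503, a sketch of the fill policy the abstract describes: a miss always fills an SRAM (first-type) way, and the SRAM victim is demoted into the class-wide LRU way first, unless it is itself the class-wide LRU. Treating the demoted copy as recently used is an assumed policy choice:

```python
# A hybrid congruence class: ways 0..S-1 are SRAM (fast), the rest DRAM.

class HybridSet:
    def __init__(self, sram_ways=2, dram_ways=6):
        self.sram = sram_ways
        self.tags = [None] * (sram_ways + dram_ways)
        self.lru = list(range(sram_ways + dram_ways))  # oldest way first

    def _touch(self, way):
        self.lru.remove(way)
        self.lru.append(way)            # mark as most recently used

    def access(self, tag):
        if tag in self.tags:            # cache hit: just update LRU bits
            self._touch(self.tags.index(tag))
            return "hit"
        sram_lru = next(w for w in self.lru if w < self.sram)
        class_lru = self.lru[0]
        if sram_lru != class_lru:
            # Demote the SRAM victim into the class-wide LRU (DRAM) way,
            # evicting that way's line; marking the demoted copy as
            # recently used is an assumed choice.
            self.tags[class_lru] = self.tags[sram_lru]
            self._touch(class_lru)
        self.tags[sram_lru] = tag       # fill the SRAM way with the new line
        self._touch(sram_lru)
        return "miss"

s = HybridSet()
print([s.access(t) for t in ["A", "B", "A", "C", "B"]])  # final B hits in DRAM
```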
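
For 20090144506 (and the corresponding design structure, 20090144492), the assignment rule reduces to a threshold test on a line's usage history; the threshold value is assumed:

```python
# Partition assignment for a DRAM cache: lines used less often than the
# threshold go to the refreshable portion; hot lines are re-read often
# enough to skip the refresh machinery.

USAGE_THRESHOLD = 4  # accesses per window (assumed value)

def assign_partition(line_addr, usage_history):
    """Pick the partition for an incoming cache line."""
    return ("refreshable"
            if usage_history.get(line_addr, 0) < USAGE_THRESHOLD
            else "non-refreshable")

history = {0x1000: 1, 0x2000: 9}
for addr in (0x1000, 0x2000, 0x3000):
    print(hex(addr), "->", assign_partition(addr, history))
```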
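
For 20110055515, a sketch of how a per-entry private/shared bit lets a TLB invalidation skip the broadcast for pages mapped by a single processor; the entry layout and the invalidate flow are assumptions:

```python
# A TLB entry carries page characterization information: private pages
# can be invalidated locally, skipping the cross-processor broadcast.

from dataclasses import dataclass

@dataclass
class TlbEntry:
    vpn: int              # virtual page number
    ppn: int              # physical page number
    owner: int            # processor that first mapped the page
    private: bool = True  # cleared when a second processor maps the page

def invalidate(tlb, vpn, requester):
    entry = tlb[vpn]
    if entry.private and entry.owner == requester:
        return "local invalidate, no broadcast"
    return "broadcast TLB shootdown"

tlb = {0x10: TlbEntry(vpn=0x10, ppn=0x99, owner=0),
       0x20: TlbEntry(vpn=0x20, ppn=0xAA, owner=1, private=False)}
print(invalidate(tlb, 0x10, requester=0))   # private page
print(invalidate(tlb, 0x20, requester=0))   # shared page
```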
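
For 20120284463, a sketch of two-level cache-miss prediction in the spirit of the abstract: recent hit/miss outcomes index a pattern history table of saturating counters. The patent's table is also driven by data-access behavior and instruction address; this model keeps only the history-indexed part:

```python
# Two-level miss prediction: the global history of recent hit/miss
# outcomes indexes a pattern history table of 2-bit saturating counters.

HISTORY_BITS = 4

class MissPredictor:
    def __init__(self):
        self.history = 0                      # recent outcomes, one bit each
        self.pht = [1] * (1 << HISTORY_BITS)  # counters start weakly at "hit"

    def predict(self):
        return "miss" if self.pht[self.history] >= 2 else "hit"

    def update(self, was_miss):
        c = self.pht[self.history]
        self.pht[self.history] = min(3, c + 1) if was_miss else max(0, c - 1)
        mask = (1 << HISTORY_BITS) - 1
        self.history = ((self.history << 1) | int(was_miss)) & mask

p = MissPredictor()
for was_miss in (True, True, False, True):
    print("predicted", p.predict(), "/ actual", "miss" if was_miss else "hit")
    p.update(was_miss)
```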
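
For 20130205116/20130205118, a sketch of instruction uncertainty as the running sum of older unresolved branch uncertainties, with a fetch policy that favors the least speculative thread; the per-branch uncertainty value is assumed:

```python
# Instruction uncertainty = sum of the uncertainties of all older,
# unresolved branches; fetch favors the least speculative thread.

BRANCH_UNCERTAINTY = 0.25  # assumed per-branch value

def instruction_uncertainties(instrs):
    """instrs: kinds ("branch" or "op") in program order, oldest first."""
    running, out = 0.0, []
    for kind in instrs:
        out.append(running)          # contribution of older branches only
        if kind == "branch":
            running += BRANCH_UNCERTAINTY
    return out

def pick_thread(threads):
    """Fetch next from the thread with the lowest summed uncertainty."""
    totals = {t: sum(instruction_uncertainties(i)) for t, i in threads.items()}
    return min(totals, key=totals.get), totals

threads = {"T0": ["op", "branch", "op", "op"],
           "T1": ["branch", "branch", "op", "op"]}
print(pick_thread(threads))   # T0 is less speculative
```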
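
For 20140019689, a sketch of the RECAP idea: record which blocks of each coarse-grain region a virtual machine touches as a per-region bitmask (a compression that exploits spatial locality), then expand the bitmask into a prefetch list at the next context switch. Region and block sizes are assumed:

```python
# RECAP-style footprint tracking: one bit per block within a coarse-grain
# region, recorded per virtual machine and replayed as prefetches when
# that VM is scheduled again.

BLOCK_BYTES = 64
BLOCKS_PER_REGION = 8

def record_access(footprint, addr):
    region, block = divmod(addr // BLOCK_BYTES, BLOCKS_PER_REGION)
    footprint[region] = footprint.get(region, 0) | (1 << block)

def prefetch_list(footprint):
    """Block addresses to warm the cache with on the next context switch."""
    return [(region * BLOCKS_PER_REGION + b) * BLOCK_BYTES
            for region, mask in sorted(footprint.items())
            for b in range(BLOCKS_PER_REGION) if mask >> b & 1]

fp = {}
for addr in (0x0, 0x40, 0x200):
    record_access(fp, addr)
print([hex(a) for a in prefetch_list(fp)])   # ['0x0', '0x40', '0x200']
```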
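
For 20140258621/20150058569, the install decision on a highest-level-cache eviction reduces to the two tests the abstract lays out; the surrounding structures here are stand-ins:

```python
# NIC directory install on a highest-level (here, L3) cache eviction:
# track the address, without its data, only if the directory has room and
# a lower-level cache still owns the line.

def handle_l3_eviction(addr, nic_directory, capacity, owned_below):
    if len(nic_directory) < capacity and owned_below(addr):
        nic_directory.add(addr)   # address-only entry keeps coherence tracking
        return "installed in NIC directory; L3 entry invalidated"
    return "normal eviction"

nic = set()
owned_below = lambda addr: addr in {0x40, 0x80}  # pretend L1/L2 own these
print(handle_l3_eviction(0x40, nic, capacity=4, owned_below=owned_below))
print(handle_l3_eviction(0xC0, nic, capacity=4, owned_below=owned_below))
```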
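
For 20150074356/20150074381, a sketch of a reuse table that memoizes computation results keyed by instruction and operands so repeats can bypass the functional unit; the capacity and the lack of an eviction policy are assumptions:

```python
# A unified reuse table memoizes (instruction, operands) -> result so a
# repeated computation can bypass the functional unit entirely.

class ReuseTable:
    def __init__(self, capacity=64):
        self.entries = {}
        self.capacity = capacity

    def lookup_or_compute(self, op, a, b, functional_unit):
        key = (op, a, b)
        if key in self.entries:
            return self.entries[key], "reused"
        result = functional_unit(op, a, b)
        if len(self.entries) < self.capacity:   # no eviction modeled
            self.entries[key] = result
        return result, "computed"

def alu(op, a, b):
    return {"add": a + b, "mul": a * b}[op]

table = ReuseTable()
print(table.lookup_or_compute("mul", 6, 7, alu))   # (42, 'computed')
print(table.lookup_or_compute("mul", 6, 7, alu))   # (42, 'reused')
```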