Class / Patent application number | Description | Number of patent applications / Date published |
711132000 | Stack cache | 20 |
20080229026 | System and method for concurrently checking availability of data in extending memories - This invention discloses an extended memory comprising a first tag RAM for storing one or more tags corresponding to data stored in a first storage module, and a second tag RAM for storing one or more tags corresponding to data stored in a second storage module, wherein the first and second storage modules are separated and independent memory units, the numbers of bits in the first and second tag RAMs differ, and an address is concurrently checked against both the first and second tag RAMs using a first predetermined bit field of the address for checking against a first tag from the first tag RAM and using a second predetermined bit field of the address for checking against a second tag from the second tag RAM. | 09-18-2008 |
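As a rough illustration of the concurrent check described in 20080229026 above, the C sketch below compares one address against two tag RAMs with different bit-field layouts. The field widths, structure names, and return convention are assumptions for illustration; the application does not fix them.

```c
#include <stdint.h>

/* Illustrative geometry: the application fixes none of these widths. */
#define LINE_BITS  6u   /* 64-byte lines                  */
#define SET1_BITS  6u   /* first tag RAM: 64 sets         */
#define SET2_BITS  8u   /* second tag RAM: 256 sets       */
#define TAG1_BITS 12u   /* tag widths deliberately differ */
#define TAG2_BITS 18u

typedef struct {
    uint32_t tag1[1u << SET1_BITS];  /* first tag RAM  */
    uint32_t tag2[1u << SET2_BITS];  /* second tag RAM */
} extended_memory_tags;

/* Check one address against both tag RAMs; hardware would perform the
 * two comparisons in the same cycle. Returns 1 or 2 for a hit in the
 * first or second storage module, 0 for a miss in both. */
static int check_tags(const extended_memory_tags *t, uint32_t addr)
{
    /* First predetermined bit field of the address. */
    uint32_t set1 = (addr >> LINE_BITS) & ((1u << SET1_BITS) - 1u);
    uint32_t tag1 = (addr >> (LINE_BITS + SET1_BITS)) & ((1u << TAG1_BITS) - 1u);

    /* Second predetermined bit field, with different widths. */
    uint32_t set2 = (addr >> LINE_BITS) & ((1u << SET2_BITS) - 1u);
    uint32_t tag2 = (addr >> (LINE_BITS + SET2_BITS)) & ((1u << TAG2_BITS) - 1u);

    if (t->tag1[set1] == tag1) return 1;
    if (t->tag2[set2] == tag2) return 2;
    return 0;
}
```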
20090150616 | SYSTEM AND METHOD OF USING THREADS AND THREAD-LOCAL STORAGE - A system is provided that includes processing logic and a memory management module. The memory management module is configured to allocate a portion of memory space for a thread stack unit and to partition the thread stack unit to include a stack and a thread-local storage region. The stack is associated with a thread that is executable by the processing logic and the thread-local storage region is adapted to store data associated with the thread. The portion of memory space allocated for the thread stack unit is based on a size of the thread-local storage region that is determined when the thread is generated and a size of the stack. | 06-11-2009 |
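A minimal sketch of the thread stack unit described in 20090150616 above, assuming a flat malloc-backed unit with the TLS region at the low end and a downward-growing stack; all names and the layout choice are illustrative.

```c
#include <stdlib.h>
#include <stddef.h>

/* One contiguous "thread stack unit" holding both the stack and the
 * thread-local storage (TLS) region; names are illustrative. */
typedef struct {
    void  *base;       /* start of the whole unit            */
    void  *tls;        /* thread-local storage region        */
    void  *stack_top;  /* initial stack pointer (grows down) */
    size_t total;
} thread_stack_unit;

/* Allocate the unit; its size depends on the TLS size known when the
 * thread is generated plus the stack size, as the abstract describes. */
static int tsu_create(thread_stack_unit *u, size_t stack_sz, size_t tls_sz)
{
    u->total = stack_sz + tls_sz;
    u->base  = malloc(u->total);
    if (!u->base) return -1;

    /* Partition: TLS at the low end, stack occupying the remainder
     * and growing downward from the top of the unit. */
    u->tls       = u->base;
    u->stack_top = (char *)u->base + u->total;
    return 0;
}
```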
20090193194 | Method for Expediting Return of Line Exclusivity to a Given Processor in a Symmetric Multiprocessing Data Processing System - A method and apparatus for eliminating, in a multi-node data handling system, contention for exclusivity of lines in cache memory through improved management of system buses, processor cross-invalidate stacks, and the system operations that can lead to the requested cache operations being rejected. | 07-30-2009 |
20090307431 | MEMORY MANAGEMENT FOR CLOSURES - Methods, software media, compilers and programming techniques are described for creating copyable stack-based closures, such as a block, for languages which allocate automatic or local variables on a stack memory structure. In one exemplary method, a data structure of the block is first written to the stack memory structure, and this may be the automatic default operation, at run-time, for the block; then, a block copy instruction, added explicitly (in one embodiment) by a programmer during creation of the block, is executed to copy the block to a heap memory structure. The block includes a function pointer that references a function which uses data in the block. | 12-10-2009 |
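The copyable-block mechanism of 20090307431 above can be sketched in plain C: the block is first written to the stack by default, then an explicit copy step moves it to the heap so it can outlive its frame. The struct layout and function names below are assumptions, loosely modeled on stack-allocated closures, not the application's actual representation.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative block layout: a function pointer plus captured data. */
typedef struct block {
    void (*invoke)(struct block *self);  /* function using block data */
    int captured;                        /* a captured local variable */
} block_t;

static void print_captured(block_t *self)
{
    printf("captured = %d\n", self->captured);
}

/* The explicit "block copy" step: move the block's data structure
 * from the stack frame to the heap so it can outlive the frame. */
static block_t *block_copy_to_heap(const block_t *stack_block)
{
    block_t *heap_block = malloc(sizeof *heap_block);
    if (heap_block)
        memcpy(heap_block, stack_block, sizeof *heap_block);
    return heap_block;
}

int main(void)
{
    /* Default run-time behavior: the block lives on the stack. */
    block_t b = { print_captured, 42 };

    /* Programmer-inserted copy instruction moves it to the heap. */
    block_t *escaped = block_copy_to_heap(&b);
    if (!escaped) return 1;
    escaped->invoke(escaped);
    free(escaped);
    return 0;
}
```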
20100077151 | HARDWARE TRIGGERED DATA CACHE LINE PRE-ALLOCATION - A computer system includes a data cache supported by a copy-back buffer and a pre-allocation request stack. A programmable trigger mechanism inspects each store operation made by the processor to the data cache to see if a next cache line should be pre-allocated. If the store operation memory address falls within a range defined by START and END programmable registers, then the next cache line, at a distance defined by a programmable STRIDE register, is requested for pre-allocation. Groups of pre-allocation requests are organized and scheduled by the pre-allocation request stack, and take their turns so that the cache lines being replaced can be processed through the copy-back buffer. By the time the processor performs a store operation to the next cache line, that line has already been pre-allocated and the store hits in the cache, saving stall cycles. | 03-25-2010 |
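A sketch of the trigger check from 20100077151 above. The START, END, and STRIDE register names come from the abstract; the line size and the exact next-line computation are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define LINE_SIZE 64u   /* illustrative cache line size */

/* Programmable trigger registers; names follow the abstract. */
typedef struct {
    uint32_t start;   /* START: low bound of the watched range  */
    uint32_t end;     /* END:   high bound of the watched range */
    uint32_t stride;  /* STRIDE: distance to the next line      */
} prealloc_trigger;

/* Inspect one store; if it falls in [START, END], emit the address of
 * the next cache line to push onto the pre-allocation request stack. */
static bool inspect_store(const prealloc_trigger *t, uint32_t store_addr,
                          uint32_t *prealloc_line)
{
    if (store_addr < t->start || store_addr > t->end)
        return false;  /* outside the trigger range: no request */

    /* Next line is STRIDE bytes ahead, aligned to a line boundary. */
    *prealloc_line = (store_addr + t->stride) & ~(LINE_SIZE - 1u);
    return true;
}
```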
20100095069 | Program Security Through Stack Segregation - For each process, a stack data structure is created that includes two stacks joined at their bases: a normal stack, which grows downward, and an inverse stack, which grows upward. Items on the stack data structure are segregated into protected and unprotected classes. Protected items, such as frame pointers and return addresses, are stored on the normal stack. Unprotected items, such as function parameters and local variables, are stored on the inverse stack. | 04-15-2010 |
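The joined-at-the-base layout of 20100095069 above, sketched in C: both stack pointers start at a common base in the middle of a region and grow apart, so the protected and unprotected classes land on opposite sides. Field names and the push helpers are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t *base;     /* joint base of both stacks          */
    uint8_t *norm_sp;  /* normal stack pointer (grows down)  */
    uint8_t *inv_sp;   /* inverse stack pointer (grows up)   */
} dual_stack;

/* The two stacks are joined at the base: the normal stack grows down
 * from it and the inverse stack grows up from it. */
static void dual_stack_init(dual_stack *s, uint8_t *region, size_t len)
{
    s->base    = region + len / 2;  /* joint base in the middle */
    s->norm_sp = s->base;
    s->inv_sp  = s->base;
}

/* Protected items (frame pointers, return addresses) go on the
 * normal stack. */
static void push_protected(dual_stack *s, uintptr_t item)
{
    s->norm_sp -= sizeof item;
    *(uintptr_t *)(void *)s->norm_sp = item;
}

/* Unprotected items (parameters, locals) go on the inverse stack. */
static void push_unprotected(dual_stack *s, uintptr_t item)
{
    *(uintptr_t *)(void *)s->inv_sp = item;
    s->inv_sp += sizeof item;
}
```

Because the two stacks grow away from each other, an overflow of an unprotected buffer cannot advance toward the frame pointers and return addresses on the normal stack.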
20100281221 | Shared Data Prefetching with Memory Region Cache Line Monitoring - A method, circuit arrangement, and design structure are provided for prefetching data and responding to memory requests in a shared memory computing system of the type that includes a plurality of nodes. Prefetching data comprises receiving, in response to a first memory request by a first node, presence data for a memory region associated with the first memory request from a second node that sources the data requested by the first memory request, and selectively prefetching at least one cache line from the memory region based on the received presence data. Responding to a memory request comprises tracking presence data for memory regions containing cache lines cached in the first node and, in response to a memory request by a second node, forwarding the tracked presence data for a memory region associated with the memory request to the second node. | 11-04-2010 |
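One plausible encoding of the presence data in 20100281221 above is a per-region bitmap, one bit per cache line; the sketch below prefetches whichever lines the sourcing node marked. The region size and field names are assumptions.

```c
#include <stdint.h>

#define REGION_LINES 32u   /* illustrative: 32 lines per memory region */
#define LINE_SIZE    64u

/* Presence data for one memory region: one bit per cache line that the
 * sourcing node reports as tracked/present. */
typedef struct {
    uint64_t region_base;
    uint32_t presence;     /* bit i set => prefetch line i */
} presence_data;

/* Stub standing in for the requester's prefetch machinery. */
static void prefetch_line(uint64_t addr) { (void)addr; }

/* On receiving presence data with the response to a memory request,
 * selectively prefetch the lines whose presence bits are set. */
static void prefetch_from_presence(const presence_data *p)
{
    for (uint32_t i = 0; i < REGION_LINES; i++)
        if (p->presence & (1u << i))
            prefetch_line(p->region_base + (uint64_t)i * LINE_SIZE);
}
```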
20100332764 | Modular Three-Dimensional Chip Multiprocessor - A chip multiprocessor die supports optional stacking of additional dies. The chip multiprocessor includes a plurality of processor cores, a memory controller, and stacked cache interface circuitry. The stacked cache interface circuitry is configured to attempt to retrieve data from a stacked cache die if the stacked cache die is present but not if the stacked cache die is absent. In one implementation, the chip multiprocessor die includes a first set of connection pads for electrically connecting to a die package and a second set of connection pads for communicatively connecting to the stacked cache die if the stacked cache die is present. Other embodiments, aspects and features are also disclosed. | 12-30-2010 |
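A sketch of the conditional lookup in 20100332764 above, with stubs standing in for the presence probe of the second set of connection pads and for the actual stacked-cache access; everything here is illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

/* Stubs: a real die would sense the second set of connection pads to
 * learn whether a cache die is stacked on top. */
static bool stacked_die_present(void)        { return false; }
static bool stacked_cache_lookup(uint64_t a) { (void)a; return false; }
static void fetch_from_memory(uint64_t a)    { (void)a; }

/* The stacked cache interface circuitry attempts to retrieve data
 * from the stacked cache die only when that die is present. */
static void load_line(uint64_t addr)
{
    if (stacked_die_present() && stacked_cache_lookup(addr))
        return;                 /* hit in the stacked cache die */
    fetch_from_memory(addr);    /* die absent, or lookup missed */
}
```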
20130061000 | SOFTWARE COMPILER GENERATED THREADED ENVIRONMENT - A computer-implemented method for creating a threaded package of computer executable instructions from software compiler generated code includes allocating, through a computer processor, the computer executable instructions into a plurality of stacks, differentiating between the types of the computer executable instructions allocated to each stack of the plurality of stacks, creating switch points for each stack of the plurality of stacks based upon the differentiating, and inserting the switch points within each stack of the plurality of stacks. | 03-07-2013 |
20130145098 | MEMORY PREFETCH SYSTEMS AND METHODS - Systems and methods are disclosed herein, including those that operate to prefetch a programmable number of data words from a selected memory vault in a stacked-die memory system when a pipeline associated with the selected memory vault is empty. | 06-06-2013 |
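The pipeline-empty gating of 20130145098 above, reduced to a short C sketch; the vault model and word size are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative model of a memory vault in a stacked-die system. */
typedef struct {
    bool     pipeline_empty;   /* no requests in flight        */
    uint32_t prefetch_words;   /* programmable number of words */
} memory_vault;

/* Stub standing in for the actual vault access. */
static void fetch_word(memory_vault *v, uint64_t addr, uint32_t *dst)
{
    (void)v; (void)addr; *dst = 0;
}

/* Prefetch the programmed number of words, but only when the vault's
 * pipeline is empty so prefetches never delay demand requests. */
static void maybe_prefetch(memory_vault *v, uint64_t addr, uint32_t *buf)
{
    if (!v->pipeline_empty)
        return;
    for (uint32_t i = 0; i < v->prefetch_words; i++)
        fetch_word(v, addr + 4u * i, &buf[i]);
}
```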
20140143497 | STACK CACHE MANAGEMENT AND COHERENCE TECHNIQUES - A processor system presented here has a plurality of execution cores and a plurality of stack caches, wherein each of the stack caches is associated with a different one of the execution cores. A method of managing stack data for the processor system is presented here. The method maintains a stack cache manager for the plurality of execution cores. The stack cache manager includes entries for stack data accessed by the plurality of execution cores. The method processes, for a requesting execution core of the plurality of execution cores, a virtual address for requested stack data. The method continues by accessing the stack cache manager to search for an entry of the stack cache manager that includes the virtual address for requested stack data, and using information in the entry to retrieve the requested stack data. | 05-22-2014 |
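A minimal model of the stack cache manager from 20140143497 above as a shared lookup table keyed by virtual address; the entry fields and linear search are illustrative simplifications.

```c
#include <stdint.h>
#include <stddef.h>

#define SCM_ENTRIES 64

/* One stack cache manager entry; fields are illustrative. */
typedef struct {
    uintptr_t vaddr;     /* virtual address of the stack data       */
    int       core_id;   /* execution core whose stack cache has it */
    int       valid;
} scm_entry;

/* Shared manager maintained for all execution cores. */
typedef struct {
    scm_entry e[SCM_ENTRIES];
} stack_cache_manager;

/* Search the manager for an entry matching the requested virtual
 * address; the requesting core then uses entry->core_id to retrieve
 * the data from the owning core's stack cache. */
static const scm_entry *scm_lookup(const stack_cache_manager *m,
                                   uintptr_t vaddr)
{
    for (size_t i = 0; i < SCM_ENTRIES; i++)
        if (m->e[i].valid && m->e[i].vaddr == vaddr)
            return &m->e[i];
    return NULL;  /* not tracked: fall back to the normal hierarchy */
}
```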
20140143498 | METHODS AND APPARATUS FOR FILTERING STACK DATA WITHIN A CACHE MEMORY HIERARCHY - A method of storing stack data in a cache hierarchy is provided. The cache hierarchy comprises a data cache and a stack filter cache. Responsive to a request to access a stack data block, the method stores the stack data block in the stack filter cache, wherein the stack filter cache is configured to store any requested stack data block. | 05-22-2014 |
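The filtering idea of 20140143498 above is essentially a routing decision, sketched below with stub caches; the classification of a request as stack data is taken as given.

```c
#include <stdint.h>
#include <stdbool.h>

/* Stubs standing in for the two caches in the hierarchy. */
static bool stack_filter_cache_access(uintptr_t a) { (void)a; return true; }
static bool data_cache_access(uintptr_t a)         { (void)a; return true; }

/* Route a request: anything identified as stack data goes to the
 * stack filter cache, which stores any requested stack data block,
 * keeping that traffic out of the ordinary data cache. */
static bool cache_access(uintptr_t addr, bool is_stack_data)
{
    if (is_stack_data)
        return stack_filter_cache_access(addr);
    return data_cache_access(addr);
}
```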
20140143499 | METHODS AND APPARATUS FOR DATA CACHE WAY PREDICTION BASED ON CLASSIFICATION AS STACK DATA - A method of way prediction for a data cache having a plurality of ways is provided. Responsive to an instruction to access a stack data block, the method accesses identifying information associated with a plurality of most recently accessed ways of a data cache to determine whether the stack data block resides in one of the plurality of most recently accessed ways of the data cache, wherein the identifying information is accessed from a subset of an array of identifying information corresponding to the plurality of most recently accessed ways; and when the stack data block resides in one of the plurality of most recently accessed ways of the data cache, the method accesses the stack data block from the data cache. | 05-22-2014 |
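A sketch of the way prediction in 20140143499 above: only the identifying information of the most recently accessed ways is consulted before falling back to a full lookup. The way counts are assumptions.

```c
#include <stdint.h>

#define NUM_WAYS 8
#define MRU_WAYS 2   /* illustrative: predict among 2 MRU ways */

typedef struct {
    uint32_t tag[NUM_WAYS];   /* identifying info per way       */
    int      mru[MRU_WAYS];   /* indices of most recent ways    */
} cache_set;

/* For a stack access, consult only the identifying information of the
 * most recently accessed ways instead of probing the full array. */
static int predict_way(const cache_set *s, uint32_t tag)
{
    for (int i = 0; i < MRU_WAYS; i++)
        if (s->tag[s->mru[i]] == tag)
            return s->mru[i];   /* hit in an MRU way: fast access */
    return -1;                  /* fall back to a full way lookup */
}
```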
20140164708 | SPILL DATA MANAGEMENT - A processor discards spill data from a memory hierarchy once the final access to the spill data has been performed by a compiled program executing at the processor. In some embodiments, the final access is determined based on a special-purpose load instruction configured for this purpose. In some embodiments, the determination is made based on the location of a stack pointer indicating that a method of the executing program has returned, so that data of the returned method that remains in the stack frame will no longer be accessed. Because the spill data is discarded after the final access, it is not transferred through the memory hierarchy. | 06-12-2014 |
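The stack-pointer variant of 20140164708 above, sketched under the assumption of a downward-growing stack: after a method returns, every line in the dead frame can be discarded rather than written back.

```c
#include <stdint.h>

/* Stub standing in for the hardware discard operation. */
static void discard_lines(uintptr_t lo, uintptr_t hi) { (void)lo; (void)hi; }

/* When a method returns, everything in its stack frame (including
 * spill slots) is dead, so those lines need not travel back through
 * the memory hierarchy. Assumes the stack grows downward, so the
 * stack pointer moves to a higher address on return. */
static void on_method_return(uintptr_t sp_before_return,
                             uintptr_t sp_after_return)
{
    /* The dead frame occupied [sp_before_return, sp_after_return). */
    if (sp_after_return > sp_before_return)
        discard_lines(sp_before_return, sp_after_return);
}
```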
20140201453 | Context Switching with Offload Processors - A context switching cache system is disclosed. The system can include a plurality of offload processors connected to a memory bus, each offload processor having a cache with an associated cache state, a context memory coupled to the offload processors, and a scheduling circuit configured to direct transfer of a cache state between at least one of the offload processors and the context memory. | 07-17-2014 |
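A sketch of the state transfer that the scheduling circuit in 20140201453 above directs between an offload processor's cache and the context memory; the fixed-size state blob is an illustrative simplification.

```c
#include <string.h>

#define CACHE_STATE_BYTES 4096  /* illustrative cache state size */

typedef struct { unsigned char bytes[CACHE_STATE_BYTES]; } cache_state;

/* One context memory slot per saved context. */
typedef struct { cache_state saved; int valid; } context_slot;

/* On switch-out, the cache state moves to the context memory. */
static void switch_out(context_slot *slot, const cache_state *live)
{
    memcpy(&slot->saved, live, sizeof slot->saved);
    slot->valid = 1;
}

/* On switch-in, a previously saved state is restored to the cache. */
static void switch_in(const context_slot *slot, cache_state *live)
{
    if (slot->valid)
        memcpy(live, &slot->saved, sizeof *live);
}
```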
20140297961 | SELECTIVE CACHE FILLS IN RESPONSE TO WRITE MISSES - A cache memory receives a request to perform a write operation. The request specifies an address. A first determination is made that the cache memory does not include a cache line corresponding to the address. A second determination is made that the address is between a previous value of a stack pointer and a current value of the stack pointer. A third determination is made that a write history indicator is set to a specified value. The write operation is performed in the cache memory without waiting for a cache fill corresponding to the address to be performed, in response to the first, second, and third determinations. | 10-02-2014 |
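The three determinations of 20140297961 above combine into a single predicate, sketched below assuming a downward-growing stack so that the new frame lies between the current and previous stack pointer values.

```c
#include <stdint.h>
#include <stdbool.h>

/* Inputs to the selective-fill decision; names are illustrative. */
typedef struct {
    bool      line_present;    /* 1st check: line already cached?   */
    uintptr_t sp_prev;         /* previous stack pointer value      */
    uintptr_t sp_cur;          /* current stack pointer value       */
    bool      write_history;   /* 3rd check: history indicator set? */
} write_miss_info;

/* Return true when the write may proceed without waiting for a cache
 * fill: the line is absent, the address lies between the previous and
 * current stack pointers (a freshly allocated frame), and the
 * write-history indicator holds the specified value. */
static bool skip_fill_on_write_miss(const write_miss_info *w, uintptr_t addr)
{
    bool in_new_frame = (addr >= w->sp_cur) && (addr < w->sp_prev);
    return !w->line_present && in_new_frame && w->write_history;
}
```

The predicate is deliberately conservative: unless all three checks pass, the cache still waits for the fill.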
20140379986 | STACK ACCESS TRACKING - A processor employs a prediction table at the front end of its instruction pipeline, whereby the prediction table stores address register and offset information for store instructions, and stack offset information for stack access instructions. The stack offset information for a corresponding instruction indicates the entry of the stack accessed by the instruction, relative to a base entry. The processor uses pattern matching to identify predicted dependencies between load/store instructions and predicted dependencies between stack access instructions. A scheduler unit of the instruction pipeline uses the predicted dependencies to perform store-to-load forwarding or other operations that increase efficiency and reduce power consumption in the processing system. | 12-25-2014 |
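A sketch of the pattern matching from 20140379986 above: two accesses are predicted dependent when they name the same register and offset, or the same relative stack entry. The table entry layout is an assumption.

```c
#include <stdint.h>
#include <stdbool.h>

/* One front-end prediction table entry; fields are illustrative. */
typedef struct {
    uint8_t base_reg;     /* address register used by the access     */
    int32_t offset;       /* offset from that register               */
    int32_t stack_entry;  /* accessed stack entry relative to base   */
    bool    is_stack_op;  /* stack access vs. ordinary load/store    */
} pred_entry;

/* Pattern match: a load is predicted dependent on a prior store when
 * both name the same register and offset (ordinary accesses) or the
 * same relative stack entry (stack accesses). The scheduler can then
 * forward the store's data to the load. */
static bool predict_dependence(const pred_entry *st, const pred_entry *ld)
{
    if (st->is_stack_op && ld->is_stack_op)
        return st->stack_entry == ld->stack_entry;
    return st->base_reg == ld->base_reg && st->offset == ld->offset;
}
```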
20150106569 | CHIP STACK CACHE EXTENSION WITH COHERENCY - By arranging dies in a stack such that failed cores are aligned with adjacent good cores, fast connections can be implemented between good cores and the caches of failed cores. Cache can be allocated according to a priority assigned to each good core, by latency between a requesting core and the available cache, and/or by the load on a core. | 04-16-2015 |
20160011981 | METHOD AND DEVICE FOR STORING DATA IN A MEMORY, CORRESPONDING APPARATUS AND COMPUTER PROGRAM PRODUCT | 01-14-2016 |
20160103766 | LOOKUP OF A DATA STRUCTURE CONTAINING A MAPPING BETWEEN A VIRTUAL ADDRESS SPACE AND A PHYSICAL ADDRESS SPACE - A memory region stores a data structure that contains a mapping between a virtual address space and a physical address space of a memory. A portion of the mapping is cached in a cache memory. In response to a miss in the cache memory responsive to a lookup of a virtual address of a request, an indication is sent to a buffer device. In response to the indication, a hardware controller on the buffer device performs a lookup of the data structure in the memory region to find a physical address corresponding to the virtual address. | 04-14-2016 |
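A sketch of the miss path in 20160103766 above, assuming the mapping structure is a flat array indexed by virtual page number; the real data structure, page size, and controller interface are not specified in the abstract.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12   /* illustrative 4 KiB pages */

/* One mapping entry of the in-memory data structure; a flat array
 * indexed by virtual page number is assumed for brevity. */
typedef struct {
    uint64_t phys_page;
    bool     valid;
} map_entry;

/* Cache lookup stub: returns false here to model a miss. */
static bool cache_lookup(uint64_t vpn, uint64_t *ppn)
{
    (void)vpn; (void)ppn;
    return false;
}

/* On a miss, the hardware controller on the buffer device walks the
 * data structure in the memory region to translate the address. */
static bool translate(const map_entry *table, uint64_t vaddr, uint64_t *paddr)
{
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    uint64_t ppn;

    if (!cache_lookup(vpn, &ppn)) {      /* miss in the cache memory  */
        if (!table[vpn].valid)           /* controller walks the map  */
            return false;
        ppn = table[vpn].phys_page;
    }
    *paddr = (ppn << PAGE_SHIFT) | (vaddr & ((1u << PAGE_SHIFT) - 1u));
    return true;
}
```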