Entries |
Document | Title | Date |
20080244187 | PIPELINING D STATES FOR MRU STEERAGE DURING MRU-LRU MEMBER ALLOCATION - A method and apparatus for preventing selection of Deleted (D) members as an LRU victim during LRU victim selection. During each cache access targeting the particular congruence class, the deleted cache line is identified from information in the cache directory. A location of a deleted cache line is pipelined through the cache architecture during LRU victim selection. The information is latched and then passed to MRU vector generation logic. An MRU vector is generated and passed to the MRU update logic, which selects/tags the deleted member as an MRU member. The make MRU operation affects only the lower level LRU state bits arranged in a tree-based structure, so that the make MRU operation only negates selection of the specific member in the D state, without affecting LRU victim selection of the other members. | 10-02-2008 |
20090031084 | CACHE LINE REPLACEMENT TECHNIQUES ALLOWING CHOICE OF LFU OR MFU CACHE LINE REPLACEMENT - Methods and apparatus allowing a choice of Least Frequently Used (LFU) or Most Frequently Used (MFU) cache line replacement are disclosed. The methods and apparatus determine new state information for at least two given cache lines of a number of cache lines in a cache, the new state information based at least in part on prior state information for the at least two given cache lines. Additionally, when an access miss occurs in one of the at least two given lines, the methods and apparatus (1) select either LFU or MFU replacement criteria, and (2) replace one of the at least two given cache lines based on the new state information and the selected replacement criteria. Additionally, a cache for replacing MFU cache lines is disclosed. The cache additionally comprises MFU circuitry (1) adapted to produce new state information for the at least two given cache lines in response to an access to one of the at least two given cache lines, and (2) when a cache miss occurs in one of the at least two given cache lines, adapted to determine, based on the new state information, which of the at least two given cache lines is the most frequently used cache line. | 01-29-2009 |
20090037662 | Method for Selectively Enabling and Disabling Read Caching in a Storage Subsystem - A mechanism for selectively disabling and enabling read caching based on past performance of the cache and current read/write requests. The system improves overall performance by using an autonomic algorithm to disable read caching for regions of backend disk storage (i.e., the backstore) that have had historically low cache hit ratios. The result is that more cache becomes available for workloads with larger hit ratios, and less time and machine cycles are spent searching the cache for data that is unlikely to be there. | 02-05-2009 |
20090043967 | Caching of information according to popularity - A system includes logic to cache at least one block in at least one cache if the block has a popularity that compares favorably to the popularity of other blocks in the cache, where the popularity of the block is determined by reads of the block from persistent storage and reads of the block from the cache. | 02-12-2009 |
20090055594 | System for and method of capturing application characteristics data from a computer system and modeling target system - A system for, method of and computer program product captures performance-characteristic data from the execution of a program and models system performance based on that data. The approach targets performance-characterization data based on easily captured reuse distance metrics, where reuse distance is defined as the total number of memory references between two accesses to the same piece of data. Methods for efficiently capturing such metrics are described. These data can be refined into easily interpreted performance metrics, such as performance data related to caches with LRU replacement and random replacement strategies in combination with fully associative as well as limited associativity cache organizations. | 02-26-2009 |
20090063776 | SECOND CHANCE REPLACEMENT MECHANISM FOR A HIGHLY ASSOCIATIVE CACHE MEMORY OF A PROCESSOR - A cache memory system includes a cache memory and a block replacement controller. The cache memory may include a plurality of sets, each set including a plurality of block storage locations. The block replacement controller may maintain a separate count value corresponding to each set of the cache memory. The separate count value points to an eligible block storage location within the given set to store replacement data. The block replacement controller may maintain for each of at least some of the block storage locations, an associated recent access bit indicative of whether the corresponding block storage location was recently accessed. In addition, the block replacement controller may store the replacement data within the eligible block storage location pointed to by the separate count value depending upon whether a particular recent access bit indicates that the eligible block storage location was recently accessed. | 03-05-2009 |
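The second-chance scheme described in 20090063776 reduces to a per-set counter that walks the ways while a recent-access bit lets a block survive one pass. The following Python sketch is a plausible reading of the abstract, not the patented implementation; the class, names, and 4-way configuration are illustrative.

```python
# Hedged sketch of a per-set second-chance (clock) replacement policy:
# a counter per set points at the eviction candidate, and a "recently
# accessed" bit can veto the candidate for one pass.
class SecondChanceSet:
    def __init__(self, num_ways):
        self.num_ways = num_ways
        self.recent = [False] * num_ways   # recent-access bit per block
        self.counter = 0                   # points at eviction candidate

    def touch(self, way):
        """Mark a block as recently accessed on a hit."""
        self.recent[way] = True

    def pick_victim(self):
        """Advance the counter past recently used blocks, clearing their
        bits, until a block whose bit is clear is found."""
        while self.recent[self.counter]:
            self.recent[self.counter] = False      # give a second chance
            self.counter = (self.counter + 1) % self.num_ways
        victim = self.counter
        self.counter = (self.counter + 1) % self.num_ways
        return victim

s = SecondChanceSet(4)
s.touch(0); s.touch(1)
print(s.pick_victim())   # 2: ways 0 and 1 were recently accessed
```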
20090089509 | CACHE LINE REPLACEMENT MONITORING AND PROFILING - Systems and methods for cache replacement monitoring (CRM) are provided. The system includes a monitored cache comprising a monitored cache line set, the monitored cache line set comprising at least one cache line capable of holding data of a monitored address; and a CRM mechanism operatively associated with the monitored cache. The CRM mechanism collects CRM information for the monitored address. The method includes the steps of collecting CRM information for a monitored address in a monitored cache; and recording the CRM information for the monitored address, when at least one of (1) the monitored address is cached in the monitored cache, (2) the monitored address is replaced in the monitored cache, (3) any cache line in a cache line set corresponding to the monitored address is cached in the monitored cache, and (4) any cache line in a cache line set corresponding to the monitored address is replaced in the monitored cache. | 04-02-2009 |
20090106496 | UPDATING CACHE BITS USING HINT TRANSACTION SIGNALS - A system comprises processing logic that issues a request associated with an address. The system comprises a first cache that comprises a plurality of line frames. Each line frame has a status bit indicative of how recently that line frame has been accessed. The system comprises a second cache comprising another line frame having another status bit that is indicative of how recently the another line frame has been accessed. The another line frame comprises data other than the another status bit. If one of the plurality of line frames comprises the data associated with the address and the status bit associated with the one of the plurality of line frames is in a predetermined state, the first cache generates a hint transaction signal which is used to update the another status bit. The hint transaction signal does not cause the data to be updated. | 04-23-2009 |
20090106497 | Apparatus, processor and method of controlling cache memory - An apparatus includes a processor which issues a plurality of commands including an identifier for classifying each of the commands, a cache memory which includes a plurality of ways to store data corresponding to a command, wherein the cache memory includes a register to store the identifier, the register corresponding to at least one of the ways being fixed, the fixed way exclusively storing the data corresponding to the identifier while the register stores the identifier, a replacement controller which selects a replacement way based on a predetermined replacement algorithm in case of a cache miss, and excludes the fixed way from a candidate of the replacement way when the register corresponding to the fixed way stores the identifier. | 04-23-2009 |
20090113137 | PSEUDO LEAST RECENTLY USED (PLRU) CACHE REPLACEMENT - A multi-way cache system includes multi-way cache storage circuitry, a pseudo least recently used (PLRU) tree state representative of a PLRU tree, the PLRU tree having a plurality of levels, and PLRU control circuitry coupled to the multi-way cache storage circuitry and the PLRU tree state. The PLRU control circuitry has programmable PLRU tree level update enable circuitry which selects Y levels of the plurality of levels of the PLRU tree to be updated. The PLRU control circuitry, in response to an address hitting or resulting in an allocation in the multi-way cache storage circuitry, updates only the selected Y levels of the PLRU tree state. | 04-30-2009 |
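The PLRU tree state in 20090113137 is a set of binary-tree bits, with a programmable enable that restricts updates to selected tree levels. A minimal Python sketch of that idea, assuming an 8-way set, a conventional tree-bit encoding, and illustrative names:

```python
# Hedged sketch of tree-PLRU with a per-level update enable, mirroring
# the abstract's "update only the selected Y levels". Bit layout is an
# assumption: node i has children 2i+1 and 2i+2; a node bit points
# toward the pseudo-LRU subtree.
NUM_WAYS = 8
LEVELS = 3            # log2(8) tree levels

class PLRUTree:
    def __init__(self, enabled_levels=(0, 1, 2)):
        self.bits = [0] * (NUM_WAYS - 1)   # 7 internal tree nodes
        self.enabled = set(enabled_levels) # levels allowed to update

    def access(self, way):
        """On a hit or allocation, point path bits away from 'way',
        but only at enabled tree levels."""
        node = 0
        for level in range(LEVELS):
            bit = (way >> (LEVELS - 1 - level)) & 1
            if level in self.enabled:
                self.bits[node] = 1 - bit  # point away from accessed way
            node = 2 * node + 1 + bit      # descend toward the way

    def victim(self):
        """Follow the bits from the root to a pseudo-LRU way."""
        node, way = 0, 0
        for _ in range(LEVELS):
            b = self.bits[node]
            way = (way << 1) | b
            node = 2 * node + 1 + b
        return way

t = PLRUTree(enabled_levels=(0, 1))  # leave the bottom level frozen
t.access(5)
print(t.victim())                    # a way outside 5's updated path
```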
20090150617 | CACHE MECHANISM AND METHOD FOR AVOIDING CAST OUT ON BAD VICTIM SELECT AND RECYCLING VICTIM SELECT OPERATION - A method, apparatus, and computer for identifying selection of a bad victim during victim selection at a cache and recovering from such bad victim selection without causing the system to crash or suspend forward progress of the victim selection process. Among the bad victim selection addressed are recovery from selection of a deleted member and recovery from use of LRU state bits that do not map to a member within the congruence class. When LRU victim selection logic generates an output vector identifying a victim, the output vector is checked to ensure that it is a valid vector (non-null) and that it is not pointing to a deleted member. When the output vector is not valid or points to a deleted member, the LRU victim selection logic is triggered to re-start the victim selection process. | 06-11-2009 |
20090177844 | METHOD OF EFFICIENTLY CHOOSING A CACHE ENTRY FOR CASTOUT - The present invention relates generally to a method and system for efficiently identifying a cache entry for cast out by scanning a predetermined sampling subset of pseudo-randomly sampled cached entries and determining a least recently used (LRU) entry from the scanned subset, thereby avoiding a comprehensive review of all of, or groups of, the cached entries in the cache at any instant. In one or more implementations, a subset of the data entries in a cache is randomly sampled and assessed by timestamp in a doubly-linked listing, and a least recently used data entry to cast out is identified. | 07-09-2009 |
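The castout selection in 20090177844 replaces a full LRU scan with a pseudo-random sample. A minimal sketch of that idea, assuming a last-access timestamp per entry and an illustrative sample size of 16:

```python
# Hedged sketch of sampled-LRU castout: instead of scanning every
# entry, draw a small pseudo-random sample and evict the oldest
# timestamp within it.
import random
import time

def pick_castout(entries, sample_size=16):
    """entries: dict mapping key -> last-access timestamp.
    Returns the key of the least recently used entry among a sample."""
    keys = random.sample(list(entries), min(sample_size, len(entries)))
    return min(keys, key=lambda k: entries[k])

cache = {f"entry{i}": time.time() - random.random() * 1000
         for i in range(10000)}
victim = pick_castout(cache)
print(victim, cache[victim])
```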
20090193196 | METHOD AND SYSTEM FOR CACHE EVICTION - The proposed system and associated algorithm, when implemented, improve processor cache miss rates and overall cache efficiency in multi-core environments in which multiple CPUs share a single cache structure (as an example). Cache efficiency is improved by tracking CPU core loading patterns such as miss rate and minimum cache line load threshold levels. Using this information along with an existing cache eviction method such as LRU determines which cache line from which CPU is evicted from the shared cache when a capacity conflict arises. This methodology allows one to dynamically allocate shared cache entries to each core within the socket based on the particular core's frequency of shared cache usage. | 07-30-2009 |
20090204767 | METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR GENERALIZED LRU IN CACHE AND MEMORY PERFORMANCE ANALYSIS AND MODELING - The exemplary embodiment of the present invention relates to a generalized LRU algorithm that is associated with a specified cache associativity line set value determined by a system user. As configured, the LRU algorithm as presented can comprise n levels for an LRU tree, each specified tree being individually analyzed within the LRU algorithm. Within each LRU tree level, the associativity line value can be further broken down into sub-analysis groups of any desired configuration; however, the total sub-analysis group configuration must equal the specified cache associativity line value. | 08-13-2009 |
20090204768 | ADAPTIVE CACHE SIZING - A runtime code manipulation system is provided that supports code transformations on a program while it executes. The runtime code manipulation system uses code caching technology to provide efficient and comprehensive manipulation of an application running on an operating system and hardware. The code cache includes a system for automatically keeping the code cache at an appropriate size for the current working set of the running application. | 08-13-2009 |
20090216955 | METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR LRU COMPARTMENT CAPTURE - A two pipe pass method for least recently used (LRU) compartment capture in a multiprocessor system. The method includes receiving a fetch request via a requesting processor and accessing a cache directory based on the received fetch request, performing a first pipe pass by determining whether a fetch hit or a fetch miss has occurred in the cache directory, and determining an LRU compartment associated with a specified congruence class of the cache directory based on the fetch request received, when it is determined that a fetch miss has occurred, and performing a second pipe pass by using the LRU compartment determined and the specified congruence class to access the cache directory and to select an LRU address to be cast out of the cache directory. | 08-27-2009 |
20100077153 | Optimal Cache Management Scheme - Computer implemented method, system and computer usable program code for cache management. A cache is provided, wherein the cache is viewed as a sorted array of data elements, wherein a top position of the array is a most recently used position of the array and a bottom position of the array is a least recently used position of the array. A memory access sequence is provided, and a training operation is performed with respect to a memory access of the memory access sequence to determine a type of memory access operation to be performed with respect to the memory access. Responsive to a result of the training operation, a cache replacement operation is performed using the determined memory access operation with respect to the memory access. | 03-25-2010 |
20100122035 | SPIRAL CACHE MEMORY AND METHOD OF OPERATING A SPIRAL CACHE - A spiral cache memory provides reduction in access latency for frequently-accessed values by self-organizing to always move a requested value to a front-most central storage element of the spiral. The occupant of the central location is swapped backward, which continues backward through the spiral until an empty location is swapped-to, or the last displaced value is cast out of the last location in the spiral. The elements in the spiral may be cache memories or single elements. The resulting cache memory is self-organizing and for the one-dimensional implementation has a worst-case access time proportional to N, where N is the number of tiles in the spiral. A k-dimensional spiral cache has a worst-case access time proportional to N^(1/k). | 05-13-2010 |
20100146213 | Data Cache Processing Method, System And Data Cache Apparatus - A data cache processing method, system and a data cache apparatus. The method includes: configuring a node and a memory chunk corresponding to the node in a cache, the node storing a key of data, length of the data and a pointer pointing to the memory chunk, the memory chunk storing data; and performing cache processing for the data according to the node and the memory chunk corresponding to the node. | 06-10-2010 |
20100146214 | METHOD AND SYSTEM FOR EFFICIENT CACHE LOCKING MECHANISM - Systems and methods for the implementation of more efficient cache locking mechanisms are disclosed. These systems and methods may alleviate the need to present both a virtual address (VA) and a physical address (PA) to a cache mechanism. A translation table is utilized to store both the address and the locking information associated with a virtual address, and this locking information is passed to the cache along with the address of the data. The cache can then lock data based on this information. Additionally, this locking information may be used to override the replacement mechanism used with the cache, thus keeping locked data in the cache. The translation table may also store translation table lock information such that entries in the translation table are locked as well. | 06-10-2010 |
20100153652 | CACHE MANAGEMENT SYSTEM - Embodiments disclosed herein provide a cache management system comprising a cache and a cache manager that can poll cached assets at different frequencies based on their relative activity status and independent of other applications. In one embodiment, the cache manager may maintain one or more lists, each corresponding to a polling layer associated with a particular polling schedule or frequency. Cached assets may be added to or removed from a list or they may be promoted or demoted to a different list, thereby changing their polling frequency. By polling less active files at a lower frequency than more active files, significant system resources can be saved, thereby increasing overall system speed and performance. Additionally, because a cache manager according to embodiments disclosed herein does not require detailed contextual information about the files that it is managing, such a cache manager can be easily implemented with any cache. | 06-17-2010 |
20100217937 | Data processing apparatus and method - A data processing apparatus is described which comprises a processor operable to execute a sequence of instructions and a cache memory having a plurality of cache lines operable to store data values for access by the processor when executing the sequence of instructions. A cache controller is also provided which comprises preload circuitry operable in response to a streaming preload instruction received at the processor to store data values from a main memory into one or more cache lines of the cache memory. The cache controller also comprises identification circuitry operable in response to the streaming preload instruction to identify one or more cache lines of the cache memory for preferential reuse. The cache controller also comprises cache maintenance circuitry operable to implement a cache maintenance operation during which selection of one or more cache lines for reuse is performed having regard to any preferred for reuse identification generated by the identification circuitry for cache lines of the cache memory. In this way, a single streaming preload instruction can be used to trigger both a preload of one or more cache lines of data values into the cache memory, and also to mark for preferential reuse another one or more cache lines of the cache memory. | 08-26-2010 |
20100235585 | DATA CACHING IN CONSOLIDATED NETWORK REPOSITORY - System(s) and method(s) are provided for caching data in a consolidated network repository of information available to mobile and non-mobile networks, and network management systems. Data can be cached in response to request(s) for a data element or request(s) for an update to a data element and in accordance with a cache retention protocol that establishes a versioning protocol and a set of timers that determine a period to elapse prior to removal of a version of the cached data element. Updates to a cached data element can be effected if an integrity assessment determines that recordation of an updated version of the data element preserves operational integrity of one or more network components or services. The assessment is based on integrity logic that establishes a set of rules that evaluate operational integrity of a requested update to a data element. Retention protocol and integrity logic are configurable. | 09-16-2010 |
20100250858 | Systems and Methods for Controlling Initialization of a Fingerprint Cache for Data Deduplication - A computer-implemented method for controlling initialization of a fingerprint cache for data deduplication associated with a single-instance-storage computing subsystem may comprise: 1) detecting a request to store a data selection to the single-instance-storage computing subsystem, 2) leveraging a client-side fingerprint cache associated with a previous storage of the data selection to the single-instance-storage computing subsystem to initialize a new client-side fingerprint cache, and 3) utilizing the new client-side fingerprint cache for data deduplication associated with the request to store the data selection to the single-instance-storage computing subsystem. Other exemplary methods of controlling initialization of a fingerprint cache for data deduplication, as well as corresponding exemplary systems and computer-readable-storage media, are also disclosed. | 09-30-2010 |
20100274974 | SYSTEM AND METHOD FOR REPLACING DATA IN A CACHE - A system and method for replacing data in a cache utilizes cache block validity information, which contains information that indicates that data in a cache block is no longer needed for processing, to maintain least recently used information of cache blocks in a cache set of the cache, identifies the least recently used cache block of the cache set using the least recently used information of the cache blocks in the cache set, and replaces data in the least recently used cache block of the cache set with data from main memory. | 10-28-2010 |
20100293337 | SYSTEMS AND METHODS OF TIERED CACHING - The disclosure is related to data storage systems having multiple cache and to management of cache activity in data storage systems having multiple cache. In a particular embodiment, a data storage device includes a volatile memory having a first read cache and a first write cache, a non-volatile memory having a second read cache and a second write cache and a controller coupled to the volatile memory and the non-volatile memory. The controller can be configured to selectively transfer read data from the first read cache to the second read cache based on a least recently used indicator of the read data and selectively transfer write data from the first write cache to the second write cache based on a least recently written indicator of the write data. | 11-18-2010 |
20100293338 | CACHE CLEANUP AND LATCHING - A low priority queue can be configured to list low priority removal candidates to be removed from a cache, with the low priority removal candidates being sorted in an order of priority for removal. A high priority queue can be configured to list high priority removal candidates to be removed from the cache. In response to receiving a request for one or more candidates for removal from the cache, one or more high priority removal candidates from the high priority queue can be returned if the high priority queue lists any high priority removal candidates. If no more high priority removal candidates remain in the high priority queue, then one or more low priority removal candidates from the low priority queue can be returned in the order of priority for removal. Write-only latches can also be used during write operations in a cache lookup data structure. | 11-18-2010 |
20100318744 | DIFFERENTIAL CACHING MECHANISM BASED ON MEDIA I/O SPEED - A method for allocating space in a cache based on media I/O speed is disclosed herein. In certain embodiments, such a method may include storing, in a read cache, cache entries associated with faster-responding storage devices and cache entries associated with slower-responding storage devices. The method may further include implementing an eviction policy in the read cache. This eviction policy may include demoting, from the read cache, the cache entries of faster-responding storage devices faster than the cache entries of slower-responding storage devices, all other variables being equal. In certain embodiments, the eviction policy may further include demoting, from the read cache, cache entries having a lower read-hit ratio faster than cache entries having a higher read-hit ratio, all other variables being equal. A corresponding computer program product and apparatus are also disclosed and claimed herein. | 12-16-2010 |
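One plausible reading of the eviction policy in 20100318744 is a sort key that puts fast-media, low-hit-ratio entries first in line for demotion. The field names and ordering below are assumptions for illustration, not the patented policy:

```python
# Hedged sketch of media-speed-aware eviction ordering: entries backed
# by fast devices are cheap to re-read, so they are demoted first;
# among comparable devices, the lower read-hit ratio goes first.
def eviction_order(entries):
    """entries: list of dicts with 'device_latency_ms' (lower = faster
    backing device) and 'hit_ratio'. Returns demotion order."""
    return sorted(entries,
                  key=lambda e: (e["device_latency_ms"], e["hit_ratio"]))

cache = [
    {"name": "ssd-block", "device_latency_ms": 0.1, "hit_ratio": 0.9},
    {"name": "hdd-block", "device_latency_ms": 8.0, "hit_ratio": 0.2},
    {"name": "ssd-cold",  "device_latency_ms": 0.1, "hit_ratio": 0.1},
]
print([e["name"] for e in eviction_order(cache)])
# ssd-cold and ssd-block leave before hdd-block: the HDD data is
# costlier to fetch again.
```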
20100318745 | Dynamic Content Caching and Retrieval - This disclosure provides techniques for dynamic content caching and retrieval. For example, a computing device includes cache memory dedicated to temporarily caching data of one or more applications of the computing device. The computing device also includes storage memory to store data in response to requests by the applications. The storage memory may also temporarily cache data. Further, the computing device includes system software to represent to the applications of the computing device that the portions of the storage memory utilized to cache content are available to store data of the applications. In addition, the computing device includes application programming interfaces to provide content to a requesting application from a cache of the computing device and/or from a remote content source. | 12-16-2010 |
20100325365 | SECTORED CACHE REPLACEMENT ALGORITHM FOR REDUCING MEMORY WRITEBACKS - An improved sectored cache replacement algorithm is implemented via a method and computer program product. The method and computer program product select a cache sector among a plurality of cache sectors for replacement in a computer system. The method may comprise selecting a cache sector to be replaced that is not the most recently used and that has the least amount of modified data. In the case in which there is a tie among cache sectors, the sector to be replaced may be the sector among such cache sectors with the least amount of valid data. In the case in which there is still a tie among cache sectors, the sector to be replaced may be randomly selected among such cache sectors. Unlike conventional sectored cache replacement algorithms, the improved algorithm implemented by the method and computer program product accounts for both hit rate and bus utilization. | 12-23-2010 |
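The tie-breaking chain in 20100325365 (skip the MRU sector, prefer least modified data, then least valid data, then random) translates directly into code. A sketch under assumed per-sector modified/valid counters:

```python
# Hedged sketch of the sectored replacement rule: never the MRU sector,
# least modified data first (fewest writebacks), ties broken on least
# valid data, remaining ties broken randomly.
import random

def choose_sector(sectors, mru_index):
    """sectors: list of dicts with 'modified' and 'valid' sub-block
    counts. Returns the index of the sector to replace."""
    candidates = [i for i in range(len(sectors)) if i != mru_index]
    least_mod = min(sectors[i]["modified"] for i in candidates)
    candidates = [i for i in candidates
                  if sectors[i]["modified"] == least_mod]
    least_valid = min(sectors[i]["valid"] for i in candidates)
    candidates = [i for i in candidates
                  if sectors[i]["valid"] == least_valid]
    return random.choice(candidates)

sectors = [{"modified": 3, "valid": 4},
           {"modified": 0, "valid": 2},
           {"modified": 0, "valid": 1},
           {"modified": 1, "valid": 4}]
print(choose_sector(sectors, mru_index=0))  # 2: unmodified, least valid
```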
20110022805 | Wait-Free Parallel Data Cache - A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking-up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least-recently-used queue. | 01-27-2011 |
20110072218 | PREFETCH PROMOTION MECHANISM TO REDUCE CACHE POLLUTION - A processor is disclosed. The processor includes an execution core, a cache memory, and a prefetcher coupled to the cache memory. The prefetcher is configured to fetch a first cache line from a lower level memory and to load the cache line into the cache. The cache is further configured to designate the cache line as a most recently used (MRU) cache line responsive to the execution core asserting N demand requests for the cache line, wherein N is an integer greater than 1. The cache is configured to inhibit the cache line from being promoted to the MRU position if it receives fewer than N demand requests. | 03-24-2011 |
20110087845 | BURST-BASED CACHE DEAD BLOCK PREDICTION - The present disclosure generally relates to cache memory systems and/or techniques to identify dead cache blocks in cache memory systems. Example systems may include a cache memory that is accessible by a cache client. The cache memory may include a plurality of storage locations for a first cache block, with a most recently used position location in the cache memory. A cache controller may be configured to predict whether the first cache block stored in the cache memory is identified as a dead cache block based on a cache burst of the first cache block. The cache burst may comprise a first access of the first cache block by a cache client and any subsequent contiguous accesses of the first cache block following the first access by the cache client while the first cache block is in a most recently used position of the cache set. | 04-14-2011 |
20110125972 | INFORMATION RECORDING DEVICE AND INFORMATION RECORDING METHOD - In one embodiment, there is provided an information recording device that includes: a plurality of cache buffers on which writing is performed in response to an external write request; and a controller configured to determine, according to an LRU algorithm, which of the cache buffers writing should be performed on. If a range of the write request does not overlap with any of cache ranges of the cache buffers and if an effective range of one of the cache buffers includes the end of its cache range, the controller preferentially uses the one of the cache buffers whose effective range includes the end of its cache range, instead of a cache buffer candidate that is determined according to the LRU algorithm. | 05-26-2011 |
20110153953 | SYSTEMS AND METHODS FOR MANAGING LARGE CACHE SERVICES IN A MULTI-CORE SYSTEM - A multi-core system that includes a 64-bit cache storage and a 32-bit memory storage that stores a 32-bit cache object directory. One or more cache engines execute on cores of the multi-core system to retrieve objects from the 64-bit cache, create cache directory objects, insert the created cache directory object into the cache object directory, and search for cache directory objects in the cache object directory. When an object is stored in the 64-bit cache, a cache engine can create a cache directory object that corresponds to the cached object and can insert the created cache directory object into an instance of a cache object directory. A second cache engine can receive a request to access the cached object and can identify a cache directory object in the instance of the cache object directory, using a hash key calculated based on one or more attributes of the cached object. | 06-23-2011 |
20110161598 | DUAL TIMEOUT CACHING - Embodiments of the present invention provide a method, system and computer program product for dual timer fragment caching. In an embodiment of the invention, a dual timer fragment caching method can include establishing both a soft timeout and also a hard timeout for each fragment in a fragment cache. The method further can include managing the fragment cache by evicting fragments in the fragment cache subsequent to a lapsing of a corresponding hard timeout. The management of the fragment cache also can include responding to multiple requests by multiple requestors for a stale fragment in the fragment cache with a lapsed corresponding soft timeout by returning the stale fragment from the fragment cache to some of the requestors, by retrieving and returning a new form of the stale fragment to others of the requestors, and by replacing the stale fragment in the fragment cache with the new form of the stale fragment with a reset soft timeout and hard timeout. | 06-30-2011 |
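The soft/hard timeout split in 20110161598 works as follows: past the soft timeout, one requester regenerates the fragment while the cache keeps answering with the stale copy; past the hard timeout, the fragment is evicted outright. This synchronous Python sketch compresses the multi-requester behavior into one code path; all names are illustrative:

```python
# Hedged sketch of dual-timeout fragment caching.
import time

class DualTimeoutCache:
    def __init__(self, soft_ttl, hard_ttl):
        self.soft_ttl, self.hard_ttl = soft_ttl, hard_ttl
        self.store = {}          # key -> (value, stored_at)
        self.refreshing = set()  # keys with a refresh in flight

    def put(self, key, value):
        self.store[key] = (value, time.time())   # resets both timeouts
        self.refreshing.discard(key)

    def get(self, key, regenerate):
        entry = self.store.get(key)
        if entry is None:                         # cold miss
            self.put(key, regenerate())
            return self.store[key][0]
        value, stored_at = entry
        age = time.time() - stored_at
        if age >= self.hard_ttl:                  # hard timeout: evict
            del self.store[key]
            self.put(key, regenerate())
            return self.store[key][0]
        if age >= self.soft_ttl and key not in self.refreshing:
            self.refreshing.add(key)              # this caller refreshes
            self.put(key, regenerate())
            return self.store[key][0]
        return value                              # others get stale copy

c = DualTimeoutCache(soft_ttl=1.0, hard_ttl=5.0)
print(c.get("fragment", lambda: "rendered HTML"))
```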
20110219193 | PROCESSOR AND MEMORY CONTROL METHOD - A processor and a memory management method are provided. The processor includes a processor core, a cache which transceives data to/from the processor core via a single port, and stores the data accessed by the processor core, and a Scratch Pad Memory (SPM) which transceives the data to/from the processor core via at least one of a plurality of multi ports. | 09-08-2011 |
20110238920 | BOUNDING BOX PREFETCHER WITH REDUCED WARM-UP PENALTY ON MEMORY BLOCK CROSSINGS - A microprocessor includes a cache memory and a data prefetcher. The data prefetcher detects a pattern of memory accesses within a first memory block and prefetches into the cache memory cache lines from the first memory block based on the pattern. The data prefetcher also observes a new memory access request to a second memory block. The data prefetcher also determines that the first memory block is virtually adjacent to the second memory block and that the pattern, when continued from the first memory block to the second memory block, predicts an access to a cache line implicated by the new request within the second memory block. The data prefetcher also responsively prefetches into the cache memory cache lines from the second memory block based on the pattern. | 09-29-2011 |
20110289277 | INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHODS AND PROGRAMS - The present invention obtains, with high precision, the effect of additional installation or removal of cache memory in a storage system, that is, the resulting change in the cache hit rate and in the performance of the storage system. To achieve this, when executing normal cache control in the operational environment of the storage system, the cache hit rate that would result from a changed cache memory capacity is also obtained. Furthermore, with reference to the obtained cache hit rate, the peak performance of the storage system is obtained. Furthermore, with reference to the target performance, the cache memory and the number of disks and other resources that are additionally required are obtained. | 11-24-2011 |
20120011324 | SYSTEM AND METHOD FOR MANAGING LARGE FILESYSTEM-BASED CACHES - Embodiments disclosed herein utilize statistical approximations to manage large filesystem-based caches based on imperfect information. When removing entries from a large cache, which may have a million or more entries, the cache manager does not need to find the absolutely oldest entry that has been accessed the least recently. Instead, it suffices to find an entry that is older than most. In embodiments disclosed herein, statistical sampling of the cache is performed to produce models of different properties of the cache, including the number of entries, distribution of access times, distribution of entry sizes, etc. The models are then used to guide decisions that involve those properties. The size of the samples can be adjusted to balance the cost of acquiring the samples against the confidence level of the models produced by the samples. To achieve randomness, entries are stored using prefixes of addresses generated via a message-digest function. | 01-12-2012 |
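The statistical approach in 20120011324 finds "an entry older than most" rather than the global LRU entry. A sketch of one way to do that, assuming last-access timestamps and an illustrative sample size and percentile:

```python
# Hedged sketch of sampling-based eviction for a very large cache:
# model the age distribution from a sample and evict anything older
# than roughly the 90th percentile, instead of hunting for the single
# globally oldest entry. Sample size and percentile are tuning knobs.
import random

def age_threshold(access_times, sample_size=100, percentile=0.9):
    """access_times: dict key -> last-access time. Returns a cutoff
    such that roughly the oldest (1 - percentile) of entries fall at
    or below it."""
    sample = random.sample(list(access_times.values()),
                           min(sample_size, len(access_times)))
    sample.sort()                          # oldest (smallest) first
    return sample[int((1.0 - percentile) * len(sample))]

def evict_candidates(access_times, cutoff):
    """Entries older than most, per the sampled model."""
    return [k for k, t in access_times.items() if t <= cutoff]

ages = {f"e{i}": random.uniform(0, 1000) for i in range(100000)}
cutoff = age_threshold(ages)
print(len(evict_candidates(ages, cutoff)))  # roughly 10% of entries
```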
20120017050 | LOCAL CACHE PROVIDING FAST CHANNEL CHANGE - Methods, systems, and apparatuses facilitate the processing of requests for media content, which can originate from a request by a user or device to change a channel. The media content for a subset of channels can be locally cached and fetched for quick retrieval. | 01-19-2012 |
20120072670 | METHOD FOR COUPLING SUB-LUN LOAD MEASURING METADATA SIZE TO STORAGE TIER UTILIZATION IN DYNAMIC STORAGE TIERING - A method for metadata management in a storage system configured for supporting sub-LUN tiering. The method may comprise providing a metadata queue of a specific size; determining whether the metadata for a particular sub-LUN is cached in the metadata queue; updating the metadata for the particular sub-LUN when the metadata for the particular sub-LUN is cached in the metadata queue; inserting the metadata for the particular sub-LUN to the metadata queue when the metadata queue is not full and the metadata is not cached; replacing an entry in the metadata queue with the metadata for the particular sub-LUN when the metadata queue is full and the metadata is not cached; and identifying at least one frequently accessed sub-LUN for moving to a higher performing tier in the storage system, the at least one frequently accessed sub-LUN being identified based on the metadata cached in the metadata queue. | 03-22-2012 |
20120072671 | PREFETCH STREAM FILTER WITH FIFO ALLOCATION AND STREAM DIRECTION PREDICTION - A prefetch filter receives a memory read request having an associated address for accessing data that is stored in a line of memory. An address window is determined that has an address range that encompasses an address space that is twice as large as the line of memory. In response to a determination of which half of the address window includes the requested line of memory, a prefetch direction is set to a first direction or to an opposite direction. The prefetch filter can include an array of slots for storing a portion of a next predicted access and determine a memory stream in response to a hit on the array by a subsequent memory request. The prefetch filter FIFO counter cycles through the slots of the array before wrapping around to a first slot of the array for storing a next predicted address portion. | 03-22-2012 |
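The window-based direction prediction in 20120072671 can be sketched as follows, assuming a 64-byte line and an arbitrary mapping of window halves to directions (the abstract leaves either assignment open):

```python
# Hedged sketch of the direction heuristic: build an address window
# twice the line size around the request and predict the stream
# direction from which half the requested line falls in.
LINE_SIZE = 64   # bytes per cache line (assumed)

def predict_direction(addr):
    """Window = two lines; here a request in the lower half suggests a
    descending stream and the upper half an ascending one (the mapping
    could just as well be reversed)."""
    window_base = addr & ~(2 * LINE_SIZE - 1)   # align to 128 bytes
    in_upper_half = (addr - window_base) >= LINE_SIZE
    return +1 if in_upper_half else -1          # +1 up, -1 down

print(predict_direction(0x1000))  # -1: lower half of its 128B window
print(predict_direction(0x1040))  # +1: upper half
```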
20120084514 | LOCKING A CACHE LINE FOR WRITE OPERATIONS ON A BUS - Provided are a computer program product, system, and method for locking a cache line for burst write operations on a bus. A cache line is allocated in a cache for a target address. A lock is set for the cache line, wherein setting the lock prevents the data in the cache line from being cast out. Data is written to the cache line. All the data in the cache line is flushed to the target address over a bus in response to completing writing to the cache line. | 04-05-2012 |
20120084515 | CACHE MEMORY CONTROLLER AND METHOD FOR REPLACING A CACHE BLOCK - The present disclosure relates to a cache memory controller for controlling a set-associative cache memory, in which two or more blocks are arranged in the same set, the cache memory controller including a content modification status monitoring unit for monitoring whether some of the blocks arranged in the same set of the cache memory have been modified in contents, and a cache block replacing unit for replacing a block, which has not been modified in contents, if some of the blocks arranged in the same set have been modified in contents. | 04-05-2012 |
20120089784 | Lock Amortization In A Data Container - An apparatus and a method for providing amortized lock access in a data container are described. Each access from each thread of a process in a memory to each object of a data container in the memory is recorded in a queue of the data container. A queue manager determines whether the recorded number of accesses in the queue has reached a predetermined threshold. The queue manager executes a lock algorithm and an eviction algorithm on all objects in the data container when the recorded number of accesses in the queue has reached the predetermined threshold. The lock algorithm is configured to lock objects in the data container while the eviction algorithm is performed on the data container. The eviction algorithm is configured to evict one or more objects from the data container pursuant to the eviction algorithm. | 04-12-2012 |
20120096226 | TWO LEVEL REPLACEMENT SCHEME OPTIMIZES FOR PERFORMANCE, POWER, AND AREA - A two-level replacement scheme is provided for selecting an entry in a cache memory to replace when a cache miss takes place and the memory is full. The scheme divides the tags associated with each memory location of the cache into two or more groups, each group relating to a subset of memory locations of the cache. The scheme uses a first algorithm to select one of the groups and passes the tags for the group through a second algorithm. The second algorithm produces a local index which, when combined with a group index, produces a replacement index that identifies a memory location in the cache to replace. | 04-19-2012 |
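The two-level scheme in 20120096226 composes a group index from a first algorithm with a local index from a second. Which algorithms fill those roles is not fixed by the abstract; the sketch below assumes round-robin for group selection and true LRU within a group:

```python
# Hedged sketch of two-level replacement: level 1 picks a group of
# ways cheaply, level 2 picks a victim within it, and group index plus
# local index form the replacement index.
class TwoLevelReplacement:
    def __init__(self, num_groups=4, ways_per_group=4):
        self.num_groups = num_groups
        self.ways_per_group = ways_per_group
        self.next_group = 0                       # level 1: round-robin
        self.ages = [[0] * ways_per_group for _ in range(num_groups)]
        self.clock = 0

    def touch(self, way):
        """Record an access for level-2 LRU bookkeeping."""
        g, local = divmod(way, self.ways_per_group)
        self.clock += 1
        self.ages[g][local] = self.clock

    def victim(self):
        g = self.next_group                       # level 1: pick group
        self.next_group = (self.next_group + 1) % self.num_groups
        local = min(range(self.ways_per_group),
                    key=lambda i: self.ages[g][i])  # level 2: LRU
        return g * self.ways_per_group + local      # replacement index

r = TwoLevelReplacement()
r.touch(0); r.touch(1)
print(r.victim())   # a way from group 0 that was not recently touched
```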
20120117328 | Managing a Storage Cache Utilizing Externally Assigned Cache Priority Tags - A method for caching data in a storage medium implementing tiered data structures may include storing, in a first storage medium, a first portion of critical data at the instruction of a storage control module. The first portion of critical data may be separated into data having different priority levels based upon at least one data utilization characteristic associated with a file system implemented by the storage control module. The method may also include storing, in a second storage medium, a second portion of data at the instruction of the storage control module. The second storage medium may have at least one performance, reliability, or security characteristic different from the first storage medium. | 05-10-2012 |
20120117329 | COMBINATION BASED LRU CACHING - Combination based LRU caching employs a mapping mechanism in an LRU cache separate from a set of LRU caches for storing the values used in the combinations. The mapping mechanism is used to track the valid combinations of the values in the LRU caches storing the values resulting in any given value being stored at most once. Through the addition of a byte pointer significantly more combinations may be tracked in the same amount of cache memory with full LRU semantics on both the values and combinations. | 05-10-2012 |
20120124295 | METHODS AND STRUCTURE FOR DETERMINING CACHE SIZE IN A STORAGE SYSTEM - Methods and structure for automated determination and reconfiguration of the size of a cache memory in a storage system. Features and aspects hereof generate historical information regarding frequency of hits on cache lines in the cache memory. The history maintained is then analyzed to determine a desired cache memory size. The historical information regarding cache memory usage may be communicated to a user who may then direct the storage system to reconfigure its cache memory to a desired cache memory size. In other embodiments, the storage system may automatically determine the desired cache memory size and reconfigure its cache memory. The method may be performed automatically periodically, and/or in response to a user's request, and/or in response to detecting thrashing caused by least recently used (LRU) cache replacement algorithms in the storage system. | 05-17-2012 |
20120124296 | METHOD AND APPARATUS FOR REACQUIRING LINES IN A CACHE - A method and apparatus for controlling re-acquiring lines of memory in a cache is provided. The method comprises storing at least one atomic instruction in a queue in response to the atomic instruction being retired, and identifying a target memory location associated with load and store portions of the atomic instruction. A line of memory associated with the target memory location is acquired and stored in a cache. Subsequently, if the line of acquired memory is evicted, then it is re-acquired in response to the atomic instruction becoming the oldest instruction stored in the queue. The apparatus comprises a queue and a cache. The queue is adapted for storing at least one atomic instruction in response to the atomic instruction being retired. A target memory location is associated with load and store portions of the atomic instruction. The cache is adapted for acquiring a line of memory associated with the target memory location, storing the line of acquired memory in a cache, evicting the acquired line of memory from the cache in response to detecting a conflict regarding the acquired line of memory, and re-acquiring the line of memory in response to the atomic instruction becoming the oldest instruction stored in the queue. | 05-17-2012 |
20120215986 | Wait-Free Parallel Data Cache - A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking-up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least-recently-used queue. | 08-23-2012 |
20120246412 | CACHE SYSTEM AND PROCESSING APPARATUS - According to an embodiment, in a cache system, the sequence storage stores sequence data in association with each piece of data to be stored in the volatile cache memory in accordance with the number of pieces of data stored in the nonvolatile cache memory that have been unused for a longer period of time than the data stored in the volatile cache memory or the number of pieces of data stored in the nonvolatile cache memory that have been unused for a shorter period of time than the data stored in the volatile cache memory. The controller causes the first piece of data to be stored in the nonvolatile cache memory in a case where it can be determined that the first piece of data has been unused for a shorter period of time than any piece of the data stored in the nonvolatile cache memory. | 09-27-2012 |
20120254549 | Non-Volatile Memory System Allowing Reverse Eviction of Data Updates to Non-Volatile Binary Cache - A non-volatile memory system includes a memory section having a non-volatile cache portion storing data in a binary format, a primary user data storage section that stores user data in multi-state format, and an update memory area where the memory system stores data updating user data previously stored in the primary user data. The memory system allows a maximum number of blocks for use in the update memory area. When the memory system receives updated data corresponding to user data already written into the primary user data storage section, it determines whether a block of memory is available in the update memory area. In response to determining that a block of memory is not available in the update memory area, the system determines a block of the update memory to remove from the update memory; copies the data content of the determined update block into the cache portion of the memory section; and subsequently writes the updated data into the update memory. | 10-04-2012 |
20120272010 | STABLE ADAPTIVE REPLACEMENT CACHE PROCESSING - A process for caching data in a cache memory includes the following. Upon detecting that a first page is in a first or second list, the first page is moved to a most recently used (MRU) position in the second list. Upon detecting that the first page is in a first history list, a first target size is updated to a second target size for the first and second lists, the first page is moved from the first history list to the MRU position in the second list, and the first page is fetched to the cache memory. Upon detecting that the first page is in a second history list, the second target size is updated to a third target size for the first and second lists, and the first page is moved from the second history list to the MRU position in the second list. | 10-25-2012 |
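The hit handling in 20120272010 mirrors ARC-style adaptation: a hit in a history list nudges a target size and pulls the page back into the resident lists. The sketch below covers only the paths the abstract names (no eviction or list trimming), with illustrative names:

```python
# Hedged sketch of adaptive-list hit handling in the spirit of ARC:
# two resident lists (first, second), two history lists, and a target
# size for the first list that grows or shrinks on history hits.
from collections import OrderedDict

class AdaptiveLists:
    def __init__(self, capacity):
        self.capacity = capacity
        self.target = capacity // 2          # target size of first list
        self.first, self.second = OrderedDict(), OrderedDict()
        self.hist1, self.hist2 = OrderedDict(), OrderedDict()

    def access(self, page):
        if page in self.first or page in self.second:
            self.first.pop(page, None)       # resident hit: move to
            self.second.pop(page, None)      # MRU of the second list
            self.second[page] = True
        elif page in self.hist1:
            self.target = min(self.capacity, self.target + 1)  # grow
            del self.hist1[page]
            self.second[page] = True         # fetch back into the cache
        elif page in self.hist2:
            self.target = max(0, self.target - 1)              # shrink
            del self.hist2[page]
            self.second[page] = True
        else:
            self.first[page] = True          # cold miss enters list one

a = AdaptiveLists(capacity=8)
a.access("p1"); a.access("p1")               # second touch promotes p1
print(list(a.second))                        # ['p1']
```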
20120290795 | DATA CACHING IN CONSOLIDATED NETWORK REPOSITORY - System(s) and method(s) are provided for caching data in a consolidated network repository of information available to mobile and non-mobile networks, and network management systems. Data can be cached in response to request(s) for a data element or request(s) for an update to a data element and in accordance with a cache retention protocol that establishes a versioning protocol and a set of timers that determine a period to elapse prior to removal of a version of the cached data element. Updates to a cached data element can be effected if an integrity assessment determines that recordation of an updated version of the data element preserves operational integrity of one or more network components or services. The assessment is based on integrity logic that establishes a set of rules that evaluate operational integrity of a requested update to a data element. Retention protocol and integrity logic are configurable. | 11-15-2012 |
20120303904 | MANAGING UNMODIFIED TRACKS MAINTAINED IN BOTH A FIRST CACHE AND A SECOND CACHE - Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache. | 11-29-2012 |
20120303905 | METHOD AND APPARATUS FOR IMPLEMENTING CACHE - In embodiments of the present invention, a file access request sent by an application to a hard disk is obtained, file information of the accessed file is acquired according to the request, the file accessed by the application is fragmented to obtain at least one file fragment, a condition for copying the file fragment from the hard disk to the cache is set, and the file fragment is copied to the cache when the copying condition is met in a storage unit. Compared with a technical solution in the prior art where the file is copied to the cache, utilization efficiency of the cache is effectively improved. | 11-29-2012 |
20120324171 | Apparatus and Method to Copy Data - An apparatus and method for copying data are disclosed. A data track to be replicated using a peer-to-peer remote copy (PPRC) operation is identified. The data track is encoded in a non-transitory computer readable medium disposed in a first data storage system. At a first time, a determination of whether the data track is stored in a data cache is made. At a second time, the data track is replicated to a non-transitory computer readable medium disposed in a second data storage system. The second time is later than the first time. If the data track was stored in the data cache at the first time, a cache manager is instructed to not demote the data track from the data cache. If the data track was not stored in the data cache at the first time, the cache manager is instructed that the data track may be demoted. | 12-20-2012 |
20120324172 | Cache Replacement Using Active Cache Line Counters - An apparatus for performing data caching comprises at least one cache memory including multiple cache lines arranged into multiple segments, each segment having a subset of the cache lines associated therewith. The apparatus further includes a first plurality of counters, each of the counters being operative to track a number of active cache lines associated with a corresponding one of the segments. At least one controller included in the apparatus is operative to receive information relating to the number of active cache lines associated with a corresponding segment from the first plurality of counters and to implement a cache segment replacement policy for determining which of the segments to replace as a function of at least the information relating to the number of active cache lines associated with a corresponding segment. | 12-20-2012 |
20130007373 | REGION BASED CACHE REPLACEMENT POLICY UTILIZING USAGE INFORMATION - A method, apparatus, and system for replacing at least one cache region selected from a plurality of cache regions, wherein each of the regions is composed of a plurality of blocks is disclosed. The method includes applying a first algorithm to the plurality of cache regions to limit the number of potential candidate regions to a preset value, wherein the first algorithm assesses the ability of a region to be replaced based on properties of the plurality of blocks associated with that region; and designating at least one of the limited potential candidate regions as a victim based on region level information associated with each of the limited potential candidate regions. | 01-03-2013 |
20130013866 | SPATIAL LOCALITY MONITOR - A method includes updating a first tag access indicator of a storage structure. The tag access indicator indicates a number of accesses by a first thread executing on a processor to a memory resource for a portion of memory associated with a memory tag. The updating is in response to an access to the memory resource for a memory request associated with the first thread to the portion of memory associated with the memory tag. The method may include updating a first sum indicator of the storage structure indicating a sum of numbers of accesses to the memory resource being associated with a first access indicator of the storage structure for the first thread, the updating being in response to the access to the memory resource. | 01-10-2013 |
20130024624 | PREFETCHING TRACKS USING MULTIPLE CACHES - Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request. | 01-24-2013 |
20130024625 | PREFETCHING TRACKS USING MULTIPLE CACHES - Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request. | 01-24-2013 |
20130073809 | DYNAMICALLY ALTERING TIME TO LIVE VALUES IN A DATA CACHE - A TTL value for a data object stored in-memory in a data grid is dynamically adjusted. A stale data tolerance policy is set. Low toleration for staleness would mean that eviction is certain, no matter the cost, and high toleration would mean that the TTL value would be set based on total cost. Metrics to report a cost to re-create and re-store the data object are calculated, and the TTL value is adjusted based on calculated metrics. Further factors, such as cleanup time to evict data from a storage site, may be considered in the total cost. | 03-21-2013 |
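One way to read the cost-based TTL adjustment in 20130073809 is a TTL scaled by the total re-creation cost, gated by a staleness-tolerance knob. The formula and names below are illustrative assumptions, not the patented metric:

```python
# Hedged sketch of cost-aware TTL adjustment: the tolerance policy
# scales the TTL by how expensive the object is to re-create,
# re-store, and clean up.
def adjusted_ttl(base_ttl_s, recreate_cost_s, restore_cost_s,
                 cleanup_cost_s, staleness_tolerance):
    """staleness_tolerance in [0, 1]: 0 = evict on schedule regardless
    of cost, 1 = stretch the TTL in full proportion to total cost."""
    total_cost = recreate_cost_s + restore_cost_s + cleanup_cost_s
    return base_ttl_s * (1.0 + staleness_tolerance * total_cost)

print(adjusted_ttl(60, recreate_cost_s=5, restore_cost_s=1,
                   cleanup_cost_s=0.5, staleness_tolerance=0.0))  # 60.0
print(adjusted_ttl(60, recreate_cost_s=5, restore_cost_s=1,
                   cleanup_cost_s=0.5, staleness_tolerance=1.0))  # 450.0
```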
20130111146 | SELECTIVE POPULATION OF SECONDARY CACHE EMPLOYING HEAT METRICS | 05-02-2013 |
20130173863 | Memory Management Among Levels Of Cache In A Memory Hierarchy - Methods, apparatus, and product for memory management among levels of cache in a memory hierarchy in a computer with a processor operatively coupled through two or more levels of cache to a main random access memory, caches closer to the processor in the hierarchy characterized as higher in the hierarchy, including: identifying a line in a first cache that is preferably retained in the first cache, the first cache backed up by at least one cache lower in the memory hierarchy, the lower cache implementing an LRU-type cache line replacement policy; and updating LRU information for the lower cache to indicate that the line has been recently accessed. | 07-04-2013 |
20130179640 | INSTRUCTION CACHE POWER REDUCTION - In one embodiment, a method for controlling an instruction cache including a least-recently-used bits array, a tag array, and a data array, includes looking up, in the least-recently-used bits array, least-recently-used bits for each of a plurality of cacheline sets in the instruction cache, determining a most-recently-used way in a designated cacheline set of the plurality of cacheline sets based on the least-recently-used bits for the designated cacheline set, looking up, in the tag array, tags for one or more ways in the designated cacheline set, looking up, in the data array, data stored in the most-recently-used way in the designated cacheline set, and if there is a cache hit in the most-recently-used way, retrieving the data stored in the most-recently-used way from the data array. | 07-11-2013 |
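The power saving in 20130179640 comes from reading only the MRU way's data on the common case. A functional sketch, with simplified structures and a ways-read count to make the saving visible; the fallback full-set probe is an assumption about miss handling:

```python
# Hedged sketch of MRU-way-first lookup: consult the LRU bits, read
# only the MRU way's data, and fall back to probing the remaining
# ways when the MRU way does not hold the tag.
def lookup(tag, cacheline_set):
    """cacheline_set: {'ways': list of (tag, data), 'mru': MRU index}.
    Returns (data, ways_read) so the power saving is visible."""
    ways, mru = cacheline_set["ways"], cacheline_set["mru"]
    if ways[mru][0] == tag:          # hit in MRU way: one data read
        return ways[mru][1], 1
    for i, (t, d) in enumerate(ways):
        if i != mru and t == tag:    # fallback: probe remaining ways
            return d, 2
    return None, 2                   # true miss

s = {"ways": [(0xA, "a"), (0xB, "b"), (0xC, "c"), (0xD, "d")], "mru": 2}
print(lookup(0xC, s))   # ('c', 1): served from the MRU way alone
print(lookup(0xB, s))   # ('b', 2): needed the wider probe
```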
20130179641 | MEMORY SYSTEM INCLUDING A SPIRAL CACHE - An integrated memory system with a spiral cache responds to requests for values at a first external interface coupled to a particular storage location in the cache in a time period determined by the proximity of the requested values to the particular storage location. The cache supports multiple outstanding in-flight requests directed to the same address using an issue table that tracks multiple outstanding requests and control logic that applies the multiple requests to the same address in the order received by the cache memory. The cache also includes a backing store request table that tracks push-back write operations issued from the cache memory when the cache memory is full and a new value is provided from the external interface, and the control logic to prevent multiple copies of the same value from being loaded into the cache or a copy being loaded before a pending push-back has been completed. | 07-11-2013 |
20130185513 | CACHE MANAGEMENT OF TRACK REMOVAL IN A CACHE FOR STORAGE - In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application. | 07-18-2013 |
20130185514 | CACHE MANAGEMENT OF TRACK REMOVAL IN A CACHE FOR STORAGE - In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application. | 07-18-2013 |
20130191599 | CACHE SET REPLACEMENT ORDER BASED ON TEMPORAL SET RECORDING - A technique is provided for cache management of a cache. The processing circuit determines a miss count and a hit position field during a previous execution of an instruction requesting that a data element be stored in a cache. The miss count and the hit position field are stored for a data element corresponding to an instruction that requests storage of the data element. The processing circuit places the data element in a hierarchical order based on the miss count and/or the hit position field. The hit position field includes a hierarchical position related to the data element in the cache. | 07-25-2013 |
20130191600 | COMBINED CACHE INJECT AND LOCK OPERATION - A circuit arrangement and method utilize cache injection logic to perform a cache inject and lock operation to inject a cache line in a cache memory and automatically lock the cache line in the cache memory in parallel with communication of the cache line to a main memory. The cache injection logic may additionally limit the maximum number of locked cache lines that may be stored in the cache memory, e.g., by aborting a cache inject and lock operation, injecting the cache line without locking, or unlocking and/or evicting another cache line in the cache memory. | 07-25-2013 |
20130219125 | CACHE EMPLOYING MULTIPLE PAGE REPLACEMENT ALGORITHMS - The present invention extends to methods, systems, and computer program products for implementing a cache using multiple page replacement algorithms. An exemplary cache can include two logical portions where the first portion implements the least recently used (LRU) algorithm and the second portion implements the least recently used two (LRU2) algorithm to perform page replacement within the respective portion. By implementing multiple algorithms, a more efficient cache can be implemented where the pages most likely to be accessed again are retained in the cache. Multiple page replacement algorithms can be used in any cache including an operating system cache for caching pages accessed via buffered I/O, as well as a cache for caching pages accessed via unbuffered I/O such as accesses to virtual disks made by virtual machines. | 08-22-2013 |
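A minimal sketch of the split-cache idea in the entry above, assuming hypothetical class names and a simple tick counter: one portion evicts by classic LRU, the other by LRU-2 (evict the page whose second-most-recent access is oldest). This illustrates the general technique, not the patent's implementation.

    from collections import OrderedDict

    class LRUPortion:
        """Classic LRU: evict the page whose last access is oldest."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.pages = OrderedDict()          # order tracks recency

        def access(self, page, value):
            if page in self.pages:
                self.pages.move_to_end(page)    # mark most recently used
            self.pages[page] = value
            if len(self.pages) > self.capacity:
                self.pages.popitem(last=False)  # drop least recently used

    class LRU2Portion:
        """LRU-2: evict the page whose second-most-recent access is oldest."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.pages = {}                     # page -> value
            self.hist = {}                      # page -> (penultimate, last) ticks
            self.tick = 0

        def access(self, page, value):
            self.tick += 1
            prev = self.hist.get(page, (float('-inf'), float('-inf')))
            self.hist[page] = (prev[1], self.tick)
            self.pages[page] = value
            if len(self.pages) > self.capacity:
                # Pages seen only once have penultimate = -inf and go first.
                victim = min(self.pages, key=lambda p: self.hist[p][0])
                del self.pages[victim]

LRU-2 retains pages with at least two recent accesses, so the pages most likely to be accessed again survive churn that would flush a plain LRU.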
20130219126 | Wait-Free Parallel Data Cache - A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking-up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least-recently-used queue. | 08-22-2013 |
20130262777 | DATA CACHE BLOCK DEALLOCATE REQUESTS - A data processing system includes a processor core supported by upper and lower level caches. In response to executing a deallocate instruction in the processor core, a deallocation request is sent from the processor core to the lower level cache, the deallocation request specifying a target address associated with a target cache line. In response to receipt of the deallocation request at the lower level cache, a determination is made if the target address hits in the lower level cache. In response to determining that the target address hits in the lower level cache, the target cache line is retained in a data array of the lower level cache and a replacement order field in a directory of the lower level cache is updated such that the target cache line is more likely to be evicted from the lower level cache in response to a subsequent cache miss. | 10-03-2013 |
20130262778 | DATA CACHE BLOCK DEALLOCATE REQUESTS IN A MULTI-LEVEL CACHE HIERARCHY - In response to executing a deallocate instruction, a deallocation request specifying a target address of a target cache line is sent from a processor core to a lower level cache. In response, a determination is made if the target address hits in the lower level cache. If so, the target cache line is retained in a data array of the lower level cache, and a replacement order field of the lower level cache is updated such that the target cache line is more likely to be evicted in response to a subsequent cache miss in a congruence class including the target cache line. In response to the subsequent cache miss, the target cache line is cast out to the lower level cache with an indication that the target cache line was a target of a previous deallocation request of the processor core. | 10-03-2013 |
20130297885 | ENHANCING DATA CACHING PERFORMANCE - For a cache in which a plurality of frequently accessed data segments are temporarily stored, reference count information of the plurality of data segments, in conjunction with least recently used (LRU) information, is used to determine a length of time to retain the plurality of data segments in the cache according to a predetermined weight, where notwithstanding the LRU information, those of the plurality of data segments having higher reference counts are retained longer than those having lower reference counts. | 11-07-2013 |
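A toy sketch of weighting retention by reference count, under the assumption (not stated in the abstract) that the predetermined weight can be modeled as a linear bonus added to the LRU recency score; all names are hypothetical.

    import itertools

    class WeightedLRUCache:
        def __init__(self, capacity, weight=10.0):
            self.capacity = capacity
            self.weight = weight                # retention bonus per reference
            self.clock = itertools.count()
            self.entries = {}                   # key -> [last_tick, ref_count, value]

        def access(self, key, value=None):
            tick = next(self.clock)
            if key in self.entries:
                entry = self.entries[key]
                entry[0], entry[1] = tick, entry[1] + 1
                return entry[2]
            if len(self.entries) >= self.capacity:
                # Evict the lowest combined score: recency plus a
                # reference-count bonus, so hot segments outlive LRU order.
                victim = min(self.entries, key=lambda k:
                             self.entries[k][0] + self.weight * self.entries[k][1])
                del self.entries[victim]
            self.entries[key] = [tick, 1, value]
            return value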
20130297886 | ENHANCING DATA CACHING PERFORMANCE - For a cache in which a plurality of frequently accessed data segments are temporarily stored, reference count information of the plurality of data segments, in conjunction with least recently used (LRU) information, is used to determine a length of time to retain the plurality of data segments in the cache according to a predetermined weight, where notwithstanding the LRU information, those of the plurality of data segments having higher reference counts are retained longer than those having lower reference counts. | 11-07-2013 |
20130311724 | CACHE SYSTEM WITH BIASED CACHE LINE REPLACEMENT POLICY AND METHOD THEREFOR - A cache system includes a plurality of first caches at a first level of a cache hierarchy and a second cache at a second level of the cache hierarchy, lower than the first level, that is coupled to each of the plurality of first caches. The second cache enforces a cache line replacement policy in which the second cache selects a cache line for replacement based in part on whether the cache line is present in any of the plurality of first caches and in part on another factor. | 11-21-2013 |
20130339624 | PROCESSOR, INFORMATION PROCESSING DEVICE, AND CONTROL METHOD FOR PROCESSOR - A processor is connected to a main storage device and includes a cache memory unit, a tag memory unit, a main storage control unit, a cache control unit, a main storage access monitoring unit, a cache access monitoring unit, and a swap control unit. The cache memory unit includes a plurality of cache lines. The tag memory unit includes a plurality of tags. The main storage control unit accesses the main storage device. The cache control unit accesses the cache memory unit. The main storage access monitoring unit monitors a first access frequency. The cache access monitoring unit monitors a second access frequency. The swap control unit allows the cache control unit to retain data in the main storage device based on the first access frequency, the second access frequency, and state information retained in a tag. | 12-19-2013 |
20130346701 | REPLACEMENT METHOD AND APPARATUS FOR CACHE - Embodiments of the present invention provide a data replacement method and apparatus for a Cache, and the method includes: obtaining a working state of the Cache and determining whether the burst traffic of data input into the Cache is excessively heavy; and if the burst traffic is excessively heavy, determining a data replacement period that is adaptive to the burst traffic, so that the Cache replaces data stored in the Cache with the input data according to the data replacement period. In the embodiments of the present invention, determining a data replacement period that is adaptive to the current burst traffic delays replacement of data stored in the Cache, which postpones the onset of computer performance reduction and reduces both its duration and its degree. | 12-26-2013 |
20130346702 | PROCESSOR AND CONTROL METHOD THEREOF - A processor includes a data cache, a memory directory that holds directory information, and a directory cache. When directory information from the memory directory is registered, the directory cache holds that directory information along with dirty information, which indicates whether the held directory information is the same as that held in the memory directory, and local information, which indicates that the directory information of the memory directory is not held in a data cache of a different processor. When the directory information and the local information of the directory cache indicate that the directory information of the memory directory is not held in the data cache of the different processor, the processor sets the dirty information of the directory cache as though the directory information of the memory directory were the same as directory information held in a data cache of a different processor. | 12-26-2013 |
20140019689 | METHODS OF CACHE PRELOADING ON A PARTITION OR A CONTEXT SWITCH - A scheme referred to as a “Region-based cache restoration prefetcher” (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher to reduce the “cold” cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory, and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown. | 01-16-2014 |
20140047190 | Location and Relocation of Data Within a Cache - In one embodiment, a computer system includes a cache having one or more memories and a metadata service. The metadata service is able to receive requests for data stored in the cache from a first client and from a second client. The metadata service is further able to determine whether the performance of the cache would be improved by relocating the data stored in the cache. The metadata service is further operable to relocate the data stored in the cache when such relocation would improve the performance of the cache. | 02-13-2014 |
20140047191 | SYSTEM AND METHOD OF CACHING INFORMATION - A system and method is provided wherein, in one aspect, a currently-requested item of information is stored in a cache based on whether it has been previously requested and, if so, the time of the previous request. If the item has not been previously requested, it may not be stored in the cache. If the subject item has been previously requested, it may or may not be cached based on a comparison of durations, namely (1) the duration of time between the current request and the previous request for the subject item and (2) for each other item in the cache, the duration of time between the current request and the previous request for the other item. If the duration associated with the subject item is less than the duration of another item in the cache, the subject item may be stored in the cache. | 02-13-2014 |
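A sketch of the admission rule described above, assuming monotonic timestamps and hypothetical names: an item is cached only on a repeat request, and only if its inter-request gap beats the widest gap recorded for a resident item.

    import time

    class ReRequestAdmissionCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.last_seen = {}   # every key -> time of its previous request
            self.cache = {}       # resident key -> (value, inter-request gap)

        def request(self, key, value, now=None):
            now = time.monotonic() if now is None else now
            prev = self.last_seen.get(key)
            self.last_seen[key] = now
            if key in self.cache:
                cached, _ = self.cache[key]
                self.cache[key] = (cached, now - prev)   # refresh its gap
                return cached
            if prev is None:
                return value                  # first request: never cached
            gap = now - prev
            if len(self.cache) < self.capacity:
                self.cache[key] = (value, gap)            # room available
            else:
                # Admit only if hotter (smaller gap) than the coldest resident.
                coldest = max(self.cache, key=lambda k: self.cache[k][1])
                if gap < self.cache[coldest][1]:
                    del self.cache[coldest]
                    self.cache[key] = (value, gap)
            return value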
20140052925 | SYSTEM AND METHOD FOR WRITE-LIFE EXTENSION OF STORAGE RESOURCES - An information handling system includes a processor and a storage resource communicatively coupled to the processor. The processor is configured to determine if available overprovisioned storage of the storage resource is less than a threshold overprovisioned storage capacity, establish a new stated capacity for the storage resource in response to a determination that the available overprovisioned storage of the storage resource is less than the threshold overprovisioned storage capacity, and communicate to the processor an indication of the new stated capacity. | 02-20-2014 |
20140052926 | EFFICIENT MANAGEMENT OF COMPUTER MEMORY USING MEMORY PAGE ASSOCIATIONS AND MEMORY COMPRESSION - A system for managing memory operations. The system includes a processor executing instructions that cause the processor to read a first memory page from a storage device responsive to a request for the first memory page and store the first memory page to system memory. Based on a pre-established set of association rules, one or more associated memory pages are identified that are related to the first memory page. The associated memory pages are read from the storage device and compressed to generate corresponding compressed associated memory pages. The compressed associated memory pages are also stored to the system memory to enable faster access to the associated memory pages during processing of the first memory page. The compressed associated memory pages are individually decompressed in response to the particular page being required for use during processing. | 02-20-2014 |
20140075125 | SYSTEM CACHE WITH CACHE HINT CONTROL - Methods and apparatuses for utilizing a cache hint mechanism in which a requesting agent can provide hints as to how data corresponding to a request should be cached in a system cache within a memory controller. The way the system cache responds to received requests is determined by the cache hint provided by the originating requesting agent. When a request is accompanied by a de-allocate cache hint, the system cache causes a cache line hit by the request to be de-allocated. For a request with a do not allocate cache hint, the system cache does not allocate a cache line if the request misses in the system cache, and the system cache maintains a given cache line in its current state if the request hits the given cache line. | 03-13-2014 |
20140082296 | DEFERRED RE-MRU OPERATIONS TO REDUCE LOCK CONTENTION - Data operations, requiring a lock, are batched into a set of operations to be performed on a per-core basis. A global lock for the set of operations is periodically acquired, the set of operations is performed, and the global lock is freed so as to avoid excessive duty cycling of lock and unlock operations in the computing storage environment. | 03-20-2014 |
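A rough sketch of per-core batching of re-MRU work, with the global list lock taken once per batch rather than once per hit; class and parameter names are hypothetical, and a production version would use lock-free per-core queues.

    import threading
    from collections import deque

    class BatchedMRUUpdater:
        def __init__(self, lru_list, batch_limit=64):
            self.lru_list = lru_list            # shared list, MRU end on the right
            self.global_lock = threading.Lock()
            self.pending = {}                   # core_id -> deque of hit tracks
            self.batch_limit = batch_limit

        def record_hit(self, core_id, track):
            queue = self.pending.setdefault(core_id, deque())
            queue.append(track)                 # no global lock taken on a hit
            if len(queue) >= self.batch_limit:
                self.flush(core_id)

        def flush(self, core_id):
            queue = self.pending.get(core_id)
            if not queue:
                return
            with self.global_lock:              # one lock round-trip per batch
                while queue:
                    track = queue.popleft()
                    if track in self.lru_list:
                        self.lru_list.remove(track)
                        self.lru_list.append(track)   # move back to MRU end

Batching trades a slightly stale LRU order for far fewer lock/unlock cycles, which is the duty-cycling problem the abstract targets.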
20140082297 | CACHE COHERENCE DIRECTORY IN MULTI-PROCESSOR ARCHITECTURES - Technologies are generally described for a cache coherence directory in multi-processor architectures. In an example, a directory in a die may receive a request for a particular block. The directory may determine a block aging threshold relating to a likelihood that data blocks, including the particular data block, are stored in one or more caches in the die. The directory may further analyze a memory to identify a particular cache indicated as storing the particular data block and identify a number of cache misses for the particular cache. The directory may identify a time when an event occurred for the particular data block and determine whether to send the request for the particular data block to the particular cache based on the aging threshold, the time of the event, and the number of cache misses. | 03-20-2014 |
20140095802 | Caching Large Objects In A Computer System With Mixed Data Warehousing And Online Transaction Processing Workload - Techniques are provided for managing cached data objects in a mixed workload environment. In an embodiment, a database system receives a request to access a target data object. The database system determines whether the request to access the target data object is associated with a first type of workload or a second type of workload. In response to determining that the request is associated with the first type of workload, the target data object replaces a least recently used data object in a cache. In response to determining that the request is associated with the second type of workload, the target data object is cached based on an associated access-level value. | 04-03-2014 |
20140101387 | OPPORTUNISTIC CACHE REPLACEMENT POLICY - A cache management system employs a replacement policy in a manner that manages concurrent accesses to cache. The cache management system comprises a cache, a replacement policy storage for storing replacement statuses of cache lines of the cache, and an update module. The update module, comprising access filtering and concurrent update handling, determines how updates to the replacement policy storage are handled. In a multi-threaded compute environment, a concurrent access to the shared cache causes a selective update to the replacement policy storage. | 04-10-2014 |
20140108737 | ZERO CYCLE CLOCK INVALIDATE OPERATION - A method to eliminate the delay of a block invalidate operation in a multi-CPU environment by overlapping the block invalidate operation with normal CPU accesses, thus making the delay transparent. A range check is performed on each CPU access while a block invalidate operation is in progress, and an access that maps to within the address range of the block invalidate operation will be treated as a cache miss to ensure that the requesting CPU will receive valid data. | 04-17-2014 |
20140108738 | APPARATUS AND METHOD FOR DETECTING LARGE FLOW - An apparatus and method for detecting a large flow are provided. The method includes: storing flow information corresponding to the received flow in a cache entry; determining whether the flow corresponding to the flow information stored in an entry that will be deleted from the cache (to make room for the new flow information) may be a large flow; restoring the entry to be deleted in the cache according to a result of that determination; inspecting a packet count of the entry in which the flow information is stored; and determining that the flow corresponding to the flow information stored in the corresponding entry is a large flow if the packet count is greater than or equal to a preset threshold value. | 04-17-2014 |
20140115260 | SYSTEM AND METHOD FOR PRIORITIZING DATA IN A CACHE - Implementations described and claimed herein provide a system and methods for prioritizing data in a cache. In one implementation, a priority level, such as critical, high, and normal, is assigned to cached data. The priority level dictates how long the data is cached and consequently, the order in which the data is evicted from the cache memory. Data assigned a priority level of critical will be resident in cache memory unless heavy memory pressure causes the system to reclaim memory and all data assigned a priority state of high or normal has been evicted. High priority data is cached longer than normal priority data, with normal priority data being evicted first. Accordingly, important data assigned a priority level of critical, such as a deduplication table, is kept resident in cache memory at the expense of other data, regardless of the frequency or recency of use of the data. | 04-24-2014 |
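One way to realize the critical/high/normal ordering, sketched with hypothetical names: a recency-ordered dict per priority level, evicting LRU-first from the lowest non-empty level.

    from collections import OrderedDict

    PRIORITY = {"normal": 0, "high": 1, "critical": 2}

    class PriorityCache:
        """Evict normal-priority data first, then high; critical data stays
        resident unless nothing of lower priority remains."""
        def __init__(self, capacity):
            self.capacity = capacity
            # One recency-ordered dict per priority level (LRU within a level).
            self.levels = {p: OrderedDict() for p in PRIORITY}

        def put(self, key, value, priority="normal"):
            self.levels[priority][key] = value
            self.levels[priority].move_to_end(key)   # most recent at the end
            while sum(len(d) for d in self.levels.values()) > self.capacity:
                # Evict from the lowest non-empty priority level, LRU first.
                for level in sorted(PRIORITY, key=PRIORITY.get):
                    if self.levels[level]:
                        self.levels[level].popitem(last=False)
                        break

For example, a deduplication table stored with priority "critical" stays resident while normal-priority data churns beneath it, regardless of how recently or frequently it is used.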
20140115261 | APPARATUS, SYSTEM AND METHOD FOR MANAGING A LEVEL-TWO CACHE OF A STORAGE APPLIANCE - Aspects of the present disclosure disclose systems and methods for managing a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. In particular, any data that is scheduled to be evicted or otherwise removed from a level-one cache is stored in the level-two cache with corresponding metadata in a manner that is quickly retrievable. The data contained within the level-two cache is managed using a cache list that manages and/or maintains data chunk entries added to the level-two cache based on a temporal access of the data chunk. | 04-24-2014 |
20140115262 | CACHE CONTROL DEVICE AND CACHE CONTROL METHOD - A cache control device includes an area determination unit that determines an area of a cache memory which is allocated to each instruction flow on the basis of an allocation ratio of an execution time per unit time, which is allocated to each of a plurality of the instruction flows by a CPU. The area determination unit specifies the area allocated to the specified instruction flow in response to an access request from a memory access unit, and accesses the specified area in the cache memory. | 04-24-2014 |
20140129779 | CACHE REPLACEMENT POLICY FOR DATA WITH STRONG TEMPORAL LOCALITY - Various cache replacement policies are described whose goals are to identify items for eviction from the cache that are not accessed often and to identify items stored in the cache that are regularly accessed that should be maintained longer in the cache. In particular, the cache replacement policies are useful for workloads that have a strong temporal locality, that is, items that are accessed very frequently for a period of time and then quickly decay in terms of further accesses. In one embodiment, a variation on the traditional least recently used caching algorithm uses a reuse period or reuse distance for an accessed item to determine whether the item should be promoted in the cache queue. In one embodiment, a variation on the traditional two queue caching algorithm evicts items from the cache from both an active queue and an inactive queue. | 05-08-2014 |
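A sketch of the reuse-period variation on LRU named above, with a hypothetical tick-based threshold: a hit is promoted to MRU only when it recurs within the reuse threshold, so items that burst and then decay age out quickly.

    from collections import OrderedDict
    import itertools

    class ReusePeriodLRU:
        def __init__(self, capacity, reuse_threshold):
            self.capacity = capacity
            self.reuse_threshold = reuse_threshold
            self.clock = itertools.count()
            self.cache = OrderedDict()   # key -> (value, last_access_tick)

        def access(self, key, value=None):
            tick = next(self.clock)
            if key in self.cache:
                val, last = self.cache[key]
                self.cache[key] = (val, tick)
                if tick - last <= self.reuse_threshold:
                    self.cache.move_to_end(key)   # short reuse period: promote
                return val
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)    # evict least recent head
            self.cache[key] = (value, tick)
            return value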
20140143501 | DATABASE SEARCH FACILITY - A database cache manager for controlling a composition of a plurality of cache entries in a data cache is described. Each cache entry is a result of a query carried out on a database of data records, the cache manager being arranged to remove cache entries from the cache based on a cost-of-removal factor which comprises a time cost, the time cost being calculated from the amount of time taken to obtain the query result to which that cache entry is related. | 05-22-2014 |
20140156944 | MEMORY MANAGEMENT APPARATUS, METHOD, AND SYSTEM - The present invention discloses a memory management apparatus, method, and system. An OS-based memory management apparatus associated with main memory includes a memory allocation controller configured to control a first memory region within the main memory, which is reserved in the OS for an input/output device, such that the first memory region is used as a buffer cache when the input/output device is not active. The memory allocation controller controls the first memory region such that the first memory region is used as an eviction-based cache. | 06-05-2014 |
20140164711 | Configuring a Cache Management Mechanism Based on Future Accesses in a Cache - The described embodiments include a cache controller that configures a cache management mechanism. In the described embodiments, the cache controller is configured to monitor at least one structure associated with a cache to determine at least one cache block that may be accessed during a future access in the cache. Based on the determination of the at least one cache block that may be accessed during a future access in the cache, the cache controller configures the cache management mechanism. | 06-12-2014 |
20140189247 | APPARATUS AND METHOD FOR IMPLEMENTING A SCRATCHPAD MEMORY - An apparatus and method for implementing a scratchpad memory within a cache using priority hints. For example, a method according to one embodiment comprises: providing a priority hint for a scratchpad memory implemented using a portion of a cache; determining a page replacement priority based on the priority hint; storing the page replacement priority in a page table entry (PTE) associated with the page; and using the page replacement priority to determine whether to evict one or more cache lines associated with the scratchpad memory from the cache. | 07-03-2014 |
20140189248 | EFFICIENT ONLINE CONSTRUCTION OF MISS RATE CURVES - Miss rate curves are constructed in a resource-efficient manner so that they can be constructed and memory management decisions can be made while the workloads are running. The resource-efficient technique includes the steps of selecting a subset of memory pages for the workload, maintaining a least recently used (LRU) data structure for the selected memory pages, detecting accesses to the selected memory pages and updating the LRU data structure in response to the detected accesses, and generating data for constructing a miss-rate curve for the workload using the LRU data structure. After a memory page is accessed, the memory page may be left untraced for a period of time, after which the memory page is retraced. | 07-03-2014 |
20140201458 | REDUCING CACHE MEMORY REQUIREMENTS FOR RECORDING STATISTICS FROM TESTING WITH A MULTIPLICITY OF FLOWS - A method reduces cache memory requirements for testing a multiplicity of flows. The method includes receiving data corresponding to a frame in a particular flow among the multiplicity of flows. In response to the frame received, the method updates a set of cached flow counters in cache memory for the particular flow. The method updates one or more regular operation counters and one or more conditional counters among the set of cached flow counters, including a last serviced counter. The method updates, responsive to any error conditions, one or more error condition counters among the set of cached flow counters. The method evaluates whether to transfer values from the cached flow counters to system accumulators in system memory using at least a value in the last serviced counter for the particular flow. Responsive to the evaluating, the method transfers the values from the cached flow counters to the system accumulators. | 07-17-2014 |
20140208037 | EXPIRING VIRTUAL CONTENT FROM A CACHE IN A VIRTUAL UNIVERSE - Approaches for expiring cached virtual content in a virtual universe are provided. In one approach, there is an expiration tool, including an identification component configured to identify virtual content associated with an avatar in the virtual universe, an analysis component configured to analyze a behavior of the avatar in a region of the virtual universe, the behavior indicating a likely future location of the avatar, and an expiration component configured to expire cached virtual content associated with the avatar based on the behavior of the avatar in the region of the virtual universe, wherein the cached virtual content associated with the avatar in the future location is maintained in the cache longer than cached virtual content associated with the avatar in another region of the virtual universe. | 07-24-2014 |
20140208038 | SECTORED CACHE REPLACEMENT ALGORITHM FOR REDUCING MEMORY WRITEBACKS - A sectored cache replacement algorithm is implemented via a method and computer program product. The method and computer program product select a cache sector among a plurality of cache sectors for replacement in a computer system. The method may comprise selecting a cache sector to be replaced that is not the most recently used and that has the least amount of modified data. In the case in which there is a tie among cache sectors, the sector to be replaced may be the sector among such cache sectors with the least amount of valid data. In the case in which there is still a tie among cache sectors, the sector to be replaced may be randomly selected among such cache sectors. Unlike conventional sectored cache replacement algorithms, the algorithm implemented by the method and computer program product accounts for both hit rate and bus utilization. | 07-24-2014 |
20140215161 | BALANCED P-LRU TREE FOR A "MULTIPLE OF 3" NUMBER OF WAYS CACHE - In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing a balanced P-LRU tree for a “multiple of 3” number of ways cache. For example, in one embodiment, such means may include an integrated circuit having a cache and a plurality of ways. In such an embodiment the plurality of ways include a quantity that is a multiple of three and not a power of two, and further in which the plurality of ways are organized into a plurality of pairs. In such an embodiment, means further include a single bit for each of the plurality of pairs, in which each single bit is to operate as an intermediate level decision node representing the associated pair of ways and a root level decision node having exactly two individual bits to point to one of the single bits to operate as the intermediate level decision nodes representing an associated pair of ways. In this exemplary embodiment, the total number of bits is N−1, wherein N is the total number of ways in the plurality of ways. Alternative structures are also presented for full LRU implementation, a “multiple of 5” number of cache ways, and variations of the “multiple of 3” number of cache ways. | 07-31-2014 |
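A sketch of the 5-bit state for a 6-way set (N−1 bits for N = 6): three per-pair bits plus a 2-bit root naming the next victim pair. The root-update rule here (rotate away from the touched pair) is an assumption for illustration, not necessarily the patent's exact policy.

    class PLRU6Way:
        def __init__(self):
            self.pair_bit = [0, 0, 0]  # per pair: which way inside it is older
            self.root = 0              # 2-bit node: which pair to victimize (0..2)

        def touch(self, way):          # way in 0..5
            pair = way // 2
            self.pair_bit[pair] = 1 - (way % 2)   # point at the untouched way
            if self.root == pair:                 # steer victimization away
                self.root = (pair + 1) % 3        # from the hot pair

        def victim(self):
            pair = self.root
            return pair * 2 + self.pair_bit[pair]

Total state is 3 pair bits plus the 2-bit root, i.e. 5 bits, matching the N−1 bound the abstract gives for N = 6 ways.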
20140223106 | METHOD TO THROTTLE RATE OF DATA CACHING FOR IMPROVED I/O PERFORMANCE - A cache device for the caching of data, and specifically for the identification of stale data or a thrashing event within the cache device, is described. A cache device for the prioritization of cached data during a thrashing event, as well as of stale cached data, is also described. Methods associated with the use of the caching device for the caching of data and for the identification of data in a thrashing event or the identification of stale cached data are also described. | 08-07-2014 |
20140223107 | Cache Replacement Method and System - A system, computer readable medium and method for managing objects in a cache. The method includes receiving a request for a desired object that is not stored in the cache; determining, based on an admission policy, whether one or more segments of a least popular existing object need to be removed from the cache for admitting one or more segments of the desired object into the cache; removing, when there is no space in the cache for the desired object, the one or more segments of the least popular existing object from the cache based on a replacement policy, wherein the replacement policy includes a caching priority function for determining that the least popular existing object is the least popular object of all objects stored by the cache; and admitting at least one segment of the desired object into the cache. | 08-07-2014 |
20140237193 | CACHE WINDOW MANAGEMENT - A method of managing a plurality of least recently used (LRU) queues having entries that correspond to cached data includes ordering a first plurality of entries in a first queue according to a first recency of use of cached data. The first queue corresponds to a first priority. A second plurality of entries in a second queue are ordered according to a second recency of use of cached data. The second queue corresponds to a second priority. A first entry is selected in the first queue based on the order of the first plurality of entries in the first queue. A recency property associated with the first entry is compared with a recency property associated with a second entry in the second queue. Based on a result of this comparison, the first entry and the second entry may be swapped. | 08-21-2014 |
20140244937 | Read Ahead Tiered Local and Cloud Storage System and Method Thereof - A high tier storage area stores a stub file and a lower tier cloud storage area stores the file corresponding to the stub file. When a client apparatus requests segments of the file from the high tier storage area, reference is made to the stub file to determine a predicted non-sequential pattern of requests to the segments by the client apparatus. The high tier storage area follows the predicted non-sequential pattern of requests to retrieve the segments of the file from the cloud prior to the client apparatus actually requesting the segments. As such, the file may be efficiently provided to the client apparatus while also efficiently storing the file on the lower tier cloud storage area. | 08-28-2014 |
20140258639 | CLIENT SPATIAL LOCALITY THROUGH THE USE OF VIRTUAL REQUEST TRACKERS - A memory request optimizer includes a memory tracker for receiving a read request from client devices and for determining whether the request address matches any of the virtual request trackers. If the request address does not match any virtual request tracker, an allocation logic allocates a next available virtual request tracker to track the request address. When the request address matches a virtual request tracker, a prefetch logic increments a current tracker match count for the virtual request tracker and determines whether a linear history of tracker match counts indicates a prefetch of a next request data is appropriate based on one or more predetermined criteria. If the linear history indicates the prefetch is appropriate, the prefetch logic obtains the next request data at an address equal to the request address plus a preconfigured request offset from the memory sub-system and stores the next request data in a cache. | 09-11-2014 |
20140281265 | WRITE ADMITTANCE POLICY FOR A MEMORY CACHE - A method includes monitoring a number of read access requests to an address for data stored on a backing store. The method also includes comparing the number of read access requests to a read access threshold. The read access threshold includes a threshold number of read access requests for the address. The method also includes caching data corresponding to a write access request to the address in response to determining that the number of read access requests satisfies the read access threshold. | 09-18-2014 |
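The admittance test itself reduces to one comparison against the read-count threshold; a sketch with a hypothetical backing-store stub:

    from collections import defaultdict

    def backing_store_write(address, data):
        """Hypothetical stand-in for the real backing-store write path."""
        pass

    class WriteAdmittancePolicy:
        def __init__(self, cache, read_threshold=2):
            self.cache = cache                   # any dict-like cache
            self.read_threshold = read_threshold
            self.read_counts = defaultdict(int)  # reads observed per address

        def on_read(self, address):
            self.read_counts[address] += 1

        def on_write(self, address, data):
            backing_store_write(address, data)
            # Admit the write into the cache only for read-hot addresses.
            if self.read_counts[address] >= self.read_threshold:
                self.cache[address] = data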
20140289477 | LIGHTWEIGHT PRIMARY CACHE REPLACEMENT SCHEME USING ASSOCIATED CACHE - One aspect provides an apparatus including: at least one processor; and a computer readable storage medium having computer readable program code embodied therewith and executable by the at least one processor, the computer readable program code including: computer readable program code configured to, responsive to a request for data and a miss in both a first cache and a second cache, retrieve the data from memory, the first cache storing at least a subset of data stored in the second cache; computer readable program code configured to infer from information available from the first cache a replacement entry in the second cache; and computer readable program code configured to, responsive to inferring from information available from the first cache a replacement entry in the second cache, replace an entry in the second cache with the data from memory. Other aspects are described and claimed. | 09-25-2014 |
20140297964 | STORAGE SYSTEM, STORAGE CONTROLLER, AND METHOD FOR MANAGING MAPPING BETWEEN LOCAL ADDRESS AND PHYSICAL ADDRESS - According to one embodiment, a mapping manager of a storage controller changes a first chunk from a second state to a third state if an access condition to the first chunk is a first condition that needs high speed access and the first chunk is in the second state. The second state is a state in which a first logical address of the first chunk is mapped to a first physical address in a second storage device slower and having a larger capacity than a first storage device. In the third state, the first logical address is mapped to a second physical address in the first storage device and also mapped to the first physical address in the second storage device. | 10-02-2014 |
20140304479 | GROUPING TRACKS FOR DESTAGING - Tracks are selected for destaging from a least recently used (LRU) list and the selected tracks are moved to a destaging wait list. The selected tracks are grouped and destaged from the destaging wait list. | 10-09-2014 |
20140317355 | CACHE ALLOCATION SCHEME OPTIMIZED FOR BROWSING APPLICATIONS - Methods and systems for cache allocation schemes optimized for browsing applications. A memory controller includes a memory cache for reducing the number of requests that access off-chip memory. When an idle screen use case is detected, the frame buffer is allocated to the memory cache using a sequential allocation mode. Pixels are allocated to indexes of a given way in a sequential fashion, and then each way is accessed in a sequential fashion. When a given way is being accessed, the other ways of the memory cache are put into retention mode to reduce the leakage power. | 10-23-2014 |
20140325160 | CACHING CIRCUIT WITH PREDETERMINED HASH TABLE ARRANGEMENT - Disclosed herein are an apparatus, an integrated circuit, and a method to cache objects. At least one hash table of a circuit comprises a predetermined arrangement that maximizes cache memory space and minimizes the number of cache memory transactions. The circuit handles requests by a remote device to obtain or cache an object. | 10-30-2014 |
20140325161 | COLLABORATIVE CACHING - A method is provided for collaborative caching between a server cache and a client cache. | 10-30-2014 |
20140344525 | METHOD AND APPARATUS FOR MANAGING CACHE MEMORY IN COMMUNICATION SYSTEM - In the present invention, a base station in a communication system determines whether a first content, which is requested by a mobile terminal, is saved in a cache memory; assigns a predetermined priority ranking to the first content and saves it in the cache memory when the first content is not already saved there; and updates the priority ranking of the first content on the basis of a predicted popularity of the first content, wherein the predicted popularity is decided on the basis of change in the number of views of content that corresponds to the category of the first content. | 11-20-2014 |
20140359228 | CACHE ALLOCATION IN A COMPUTERIZED SYSTEM - A computerized system comprises a solid state memory and a controller adapted to use the solid state memory as a cache for the computerized system. The controller is adapted to add or to remove a chunk of data from the cache based on a detected frequency of occurrence of the chunk of data in the computerized system. | 12-04-2014 |
20150019823 | METHOD AND APPARATUS RELATED TO CACHE MEMORY - A cache includes a cache array and a cache controller. The cache array has a plurality of entries. The cache controller is coupled to the cache array. The cache controller evicts entries from the cache array according to a cache replacement policy. The cache controller evicts a first cache line from the cache array by generating a writeback request for modified data from the first cache line, and subsequently generates a writeback request for modified data from a second cache line if the second cache line is about to satisfy the cache replacement policy and stores data from the same locality as the first cache line. | 01-15-2015 |
20150039836 | METHODS AND APPARATUS RELATED TO DATA PROCESSORS AND CACHES INCORPORATED IN DATA PROCESSORS - A cache includes a cache array and a cache controller. The cache array has multiple entries. The cache controller is coupled to the cache array, stores new entries in the cache array in response to accesses by a data processor, and evicts entries from the cache array according to a cache replacement policy. The cache controller includes a frequent writes predictor for storing frequency information indicating a write-back frequency for the entries. The cache controller selects a candidate entry for eviction based on both recency information and the frequency information. | 02-05-2015 |
20150039837 | SYSTEM AND METHOD FOR TIERED CACHING AND STORAGE ALLOCATION - Method for data placement in a tiered caching system and/or tiered storage system includes: determining a first period of time between each access to a first data, in a predetermined time window; averaging the first periods of time between each access to obtain an average first period of time; determining a second period of time between each access to a second data, in said predetermined time window; averaging the second periods of time between each access to obtain an average second period of time; comparing the average first period of time and the average second period of time; placing the first data in a fast-access storage medium, when the average first period of time is less than the average second period of time; and placing the second data in the fast-access storage medium, when the average second period of time is less than the average first period of time. | 02-05-2015 |
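The placement decision above reduces to comparing two averages of inter-access gaps; a sketch under the assumption that access times are plain numeric timestamps within the window:

    def average_gap(access_times):
        """Mean period between consecutive accesses in the window."""
        gaps = [b - a for a, b in zip(access_times, access_times[1:])]
        return sum(gaps) / len(gaps) if gaps else float('inf')

    def place_on_fast_tier(first_times, second_times):
        """Return which data earns the fast-access storage medium."""
        if average_gap(first_times) < average_gap(second_times):
            return "first"
        return "second"

    # Example: the first data is accessed every 2 time units, the second
    # every 5, so the first data is placed on the fast tier.
    assert place_on_fast_tier([0, 2, 4, 6], [0, 5, 10]) == "first"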
20150052313 | PROTECTING THE FOOTPRINT OF MEMORY TRANSACTIONS FROM VICTIMIZATION - A processing unit includes a processor core and a cache memory. Entries in the cache memory are grouped in multiple congruence classes. The cache memory includes tracking logic that tracks a transaction footprint including cache line(s) accessed by transactional memory access request(s) of a memory transaction. The cache memory, responsive to receiving a memory access request that specifies a target cache line having a target address that maps to a congruence class, forms a working set of ways in the congruence class containing cache line(s) within the transaction footprint and updates a replacement order of the cache lines in the congruence class. Based on membership of the at least one cache line in the working set, the update promotes at least one cache line that is not the target cache line to a replacement order position in which the at least one cache line is less likely to be replaced. | 02-19-2015 |
20150052314 | CACHE MEMORY CONTROL PROGRAM, PROCESSOR INCORPORATING CACHE MEMORY, AND CACHE MEMORY CONTROL METHOD - A cache memory control procedure includes: cache area allocating, which, in response to an acquisition request and according to an effective cache usage degree that is based on a memory access frequency and the difference between the cache hit rate when a dedicated cache area is allocated and the cache hit rate when a shared cache area in the cache memory is allocated, allocates the dedicated cache area for a higher effective cache usage degree and the shared cache area for a lower effective cache usage degree; and releasing the allocated dedicated cache area in response to a release request which is issued during execution of a process by the processor and requests the release of the allocated dedicated cache area. | 02-19-2015 |
20150058577 | COMPRESSED BLOCK MAP OF DENSELY-POPULATED DATA STRUCTURES - Embodiments of the disclosure provide techniques for creating a compressed mapping structure in a system of resources. For example, a distributed resources system may use delta encoding to store, in memory, numerous entries of dense data structures in the system. In a compressed block of such entries, the distributed resources system encodes the key of each entry as the delta from the key of the previous entry. The content of each entry is encoded similarly. The distributed resources system suppresses the leading zero bits of each resulting field. | 02-26-2015 |
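A minimal sketch of the delta-plus-zero-suppression encoding for sorted keys (entry contents would be encoded the same way); the byte-level format here is an assumption for illustration:

    def delta_encode(keys):
        """Store each key as the delta from the previous key; dense keys
        yield small deltas that need few bytes once leading zeros go."""
        out, prev = [], 0
        for k in sorted(keys):
            delta = k - prev
            nbytes = max(1, (delta.bit_length() + 7) // 8)  # suppress zeros
            out.append(delta.to_bytes(nbytes, "big"))
            prev = k
        return out

    def delta_decode(blobs):
        keys, prev = [], 0
        for b in blobs:
            prev += int.from_bytes(b, "big")
            keys.append(prev)
        return keys

    # Round trip over a dense run of keys: deltas of 1 fit in single bytes.
    keys = [1000, 1001, 1002, 1010]
    assert delta_decode(delta_encode(keys)) == keys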
20150067266 | EARLY WRITE-BACK OF MODIFIED DATA IN A CACHE MEMORY - A level of cache memory receives modified data from a higher level of cache memory. A set of cache lines with an index associated with the modified data is identified. The modified data is stored in the set in a cache line with an eviction priority that is at least as high as an eviction priority, before the modified data is stored, of an unmodified cache line with a highest eviction priority among unmodified cache lines in the set. | 03-05-2015 |
20150081981 | GENERATING PREDICTIVE CACHE STATISTICS FOR VARIOUS CACHE SIZES - Technology is disclosed for generating predictive cache statistics for various cache sizes. In some embodiments, a storage controller includes a cache tracking mechanism for concurrently generating the predictive cache statistics for various cache sizes for a cache system. The cache tracking mechanism can track simulated cache blocks of a cache system using segmented cache metadata while performing an exemplary workload including various read and write requests (client-initiated I/O operations) received from client systems (or clients). The segmented cache metadata corresponds to one or more of the various cache sizes for the cache system. | 03-19-2015 |
20150081982 | SHIELDING A MEMORY DEVICE - A method of shielding a memory device is provided. | 03-19-2015 |
20150089148 | MEMORY MANAGEMENT UNIT - A data processing apparatus is provided comprising a plurality of master devices configured to issue memory access requests including virtual addresses. A memory management unit is configured to receive memory access requests and to translate a virtual address included in a memory access request from a requesting master device into a physical address indicating a storage location in memory. The memory management unit has an internal storage unit having a plurality of entries wherein indications of corresponding virtual address portions and physical address portions are stored. The memory management unit is configured to select an entry of the internal storage unit in dependence on the virtual address and an identifier of the requesting master device. Conflict between the master devices in their usage of the internal storage unit is thus avoided. | 03-26-2015 |
20150095586 | STORING NON-TEMPORAL CACHE DATA - Embodiments herein provide for using one or more cache memories to facilitate non-temporal transactions. A request to store data into a cache associated with a processor is received. In response to receiving the request, a determination is made as to whether the data to be stored is non-temporal data, that is, data that is not accessed within a predetermined period of time. In response to determining that the data is non-temporal, a predetermined location of the cache is selected, storing of non-temporal data being restricted to that predetermined location, and the non-temporal data is stored into the predetermined location. | 04-02-2015 |
20150095587 | REMOVING CACHED DATA - Embodiments of the present invention provide a method and apparatus for removing cached data. The method comprises determining the activeness of a plurality of divided lists and ranking the plurality of divided lists according to the determined activeness. The method further comprises removing a predetermined amount of cached data from the plurality of divided lists according to the ranking result when the used capacity in the cache area reaches a predetermined threshold. In this way, the activeness of each divided list serves as an overall measure of the heat of access to the cached data it contains; upon removal, the cached data with lower heat of access across the whole system is removed and the cached data with higher heat of access is retained, so as to improve the read/write rate of the system. | 04-02-2015 |
20150113225 | MANAGEMENT OF FILE CACHE - A method and computer program product for managing a file cache with a filesystem cache manager is disclosed. The method may include installing the filesystem cache manager for the file cache by a mount command. The filesystem cache manager may include a specified time interval and a first cache elimination instruction. The method may further include starting a first timer upon the installation of the filesystem cache manager. The method may further include running the first cache elimination instruction when the first timer reaches the specified time interval. | 04-23-2015 |
20150113226 | MANAGEMENT OF FILE CACHE - A method and computer program product for managing a file cache with a filesystem cache manager is disclosed. The method may include installing the filesystem cache manager for the file cache by a mount command. The filesystem cache manager may include a specified time interval and a first cache elimination instruction. The method may further include starting a first timer upon the installation of the filesystem cache manager. The method may further include running the first cache elimination instruction when the first timer reaches the specified time interval. | 04-23-2015 |
20150113227 | METHOD, SYSTEM AND SERVER OF REMOVING A DISTRIBUTED CACHING OBJECT - The present disclosure discloses a method, a system and a server of removing a distributed caching object. In one embodiment, the method receives a removal request, where the removal request includes an identifier of an object. The method may further apply consistent Hashing to the identifier of the object to obtain a Hash result value of the identifier, locate a corresponding cache server based on the Hash result value and designate the corresponding cache server as the present cache server. In some embodiments, the method determines whether the present cache server is in an active status and has an active period greater than an expiration period associated with the object. Additionally, in response to determining that the present cache server is in an active status and has an active period greater than the expiration period associated with the object, the method removes the object from the present cache server. By comparing an active period of a located cache server with an expiration period associated with an object, the exemplary embodiments precisely locate the cache server that includes the object to be removed and perform a removal operation, thus saving the other cache servers from wasting resources on removal operations and hence improving the overall performance of the distributed cache system. | 04-23-2015 |
20150134914 | DESTAGE GROUPING FOR SEQUENTIAL FAST WRITE TRACKS - A number of sequential fast write (SFW) tracks is metered by providing an adjustable threshold for performing a destage scan that moves the SFW tracks from an SFW least recently used (LRU) list to a destaging wait list (DWL). Priorities are set for the destaging of the SFW tracks from the DWL. | 05-14-2015 |
20150134915 | EFFICIENT CACHING SYSTEM - Cluster data is generated based on a history of storage operations. The cluster data may include an address range and an access history. The access history may comprise a bit pattern that represents a history of storage operations associated with a cluster. A prefix or counter may identify the number of storage operations identified in the bit pattern. The bit pattern and/or address range may be updated to reflect new storage operations associated with the cluster. The bit pattern then may determine when to cache data in a cache memory. The bit pattern tracks a large number of storage operations in a relatively small amount of memory enabling quick effective caching decisions. | 05-14-2015 |
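A sketch of the per-cluster bit-pattern history described above, with hypothetical width and hotness parameters: each storage operation shifts one bit in, and the caching decision reads the pattern's population count.

    class ClusterHistory:
        def __init__(self, lo, hi, width=16, hot_bits=4):
            self.lo, self.hi = lo, hi            # cluster address range
            self.width = width                   # history length in bits
            self.hot_bits = hot_bits             # set bits needed to cache
            self.pattern = 0                     # 1 bit per storage operation
            self.count = 0                       # operations recorded so far

        def record(self, address):
            hit = self.lo <= address < self.hi
            self.pattern = ((self.pattern << 1) | int(hit)) & ((1 << self.width) - 1)
            self.count = min(self.count + 1, self.width)   # counter for the prefix

        def should_cache(self):
            # Cache once enough of the recent operations touched this cluster.
            return bin(self.pattern).count("1") >= self.hot_bits

Sixteen operations of history cost two bytes per cluster, which is why the abstract can claim quick caching decisions in a small amount of memory.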
20150143056 | DYNAMIC WRITE PRIORITY BASED ON VIRTUAL WRITE QUEUE HIGH WATER MARK - A set associative cache is managed by a memory controller which places writeback instructions for modified (dirty) cache lines into a virtual write queue, determines when the number of the sets containing a modified cache line is greater than a high water mark, and elevates a priority of the writeback instructions over read operations. The controller can return the priority to normal when the number of modified sets is less than a low water mark. In an embodiment wherein the system memory device includes rank groups, the congruence classes can be mapped based on the rank groups. The number of writes pending in a rank group exceeding a different threshold can additionally be a requirement to trigger elevation of writeback priority. A dirty vector can be used to provide an indication that corresponding sets contain a modified cache line, particularly in least-recently used segments of the corresponding sets. | 05-21-2015 |
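The high/low water marks behave as simple hysteresis on the count of dirty sets; a sketch with hypothetical names:

    class WritebackScheduler:
        """Elevate writeback priority over reads when the number of sets
        holding a modified line crosses the high water mark; restore
        normal priority once it falls below the low water mark."""
        def __init__(self, high_water, low_water):
            self.high, self.low = high_water, low_water
            self.dirty_sets = set()
            self.writes_first = False

        def mark_dirty(self, set_index):
            self.dirty_sets.add(set_index)
            if len(self.dirty_sets) > self.high:
                self.writes_first = True       # writebacks preempt reads

        def mark_clean(self, set_index):
            self.dirty_sets.discard(set_index)
            if len(self.dirty_sets) < self.low:
                self.writes_first = False      # back to read priority

Using two thresholds instead of one prevents the priority from flapping when the dirty-set count hovers around a single cutoff.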
20150149729 | CACHE MIGRATION - Exemplary methods, apparatuses, and systems determine that a cache is to be migrated from a first storage device to a second storage device. The cache includes cache entries organized in a first list of cache entries and a second list of cache entries. Only a portion of all cache entries from the first and second lists is selected for migration to the second storage device. The selected cache entries and metadata for cache entries from the first or second list that were not selected are migrated from the first storage device to the second storage device. | 05-28-2015 |
20150149730 | CACHE MIGRATION - Exemplary methods, apparatuses, and systems determine that a cache is to be migrated from a first storage device to a second storage device. Each cache entry within the cache includes a first indicator to indicate whether or not the cache entry has long-term utility. Only a portion of all cache entries are selected to be migrated and the portion is selected from cache entries with the first indicator set to indicate long-term utility. The selected cache entries and metadata for cache entries that were not selected are migrated from the first storage device to the second storage device. | 05-28-2015 |
20150149731 | I/O CONTROLLER AND METHOD FOR OPERATING AN I/O CONTROLLER - An I/O controller, coupled to a processing unit and to a memory, includes an I/O link interface configured to receive data packets having virtual addresses; an address translation unit having an address translator to translate received virtual addresses into real addresses by translation control entries and a cache allocated to the address translator to cache a number of the translation control entries; an I/O packet processing unit for checking the data packets received at the I/O link interface and for forwarding the checked data packets to the address translation unit; and a prefetcher to forward address translation prefetch information from a data packet received to the address translation unit; the address translator configured to fetch the translation control entry for the data packet by the address translation prefetch information from the allocated cache or, if the translation control entry is not available in the allocated cache, from the memory. | 05-28-2015 |
20150309944 | METHODS FOR CACHE LINE EVICTION - A method and apparatus for evicting cache lines from a cache memory includes receiving a request from one of a plurality of processors. The cache memory is configured to store a plurality of cache lines, and a given cache line includes an identifier indicating the processor that performed the most recent access of the given cache line. The method further includes selecting a cache line for eviction from a group of least recently used cache lines, where each cache line of the group occupies a priority position less than a predetermined value, and then evicting the selected cache line. | 10-29-2015 |
20150324295 | Temporal Tracking of Cache Data - A data storage system with a cache organizes cache windows into lists based on the number of cache lines accessed during input/output operations. The lists are maintained in temporal queues with cache windows transferred from prior temporal queues to a current temporal queue. Cache windows are removed from the oldest temporal queue and least accessed cache window list whenever cached data needs to be removed for new hot data. | 11-12-2015 |
20150324300 | System and Methods for Efficient I/O Processing Using Multi-Level Out-Of-Band Hinting - A storage subsystem can achieve more efficient I/O processing by enabling users to specify and pass out-of-band I/O hints comprising an object to be hinted, a hint type, and caching strategies associated with that hint type. A hinted object may be either a virtual device or a file. In addition to priority-cache, hint types may include never-cache, sticky-cache, and volatile-cache. Hints may be passed via command-line or graphical user interfaces. | 11-12-2015 |
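A small sketch of how the hint types might dispatch to caching strategies; the enum values come from the abstract, but the cache hooks (exclude, pin, and so on) are assumed names, not a documented API:

    from enum import Enum

    class HintType(Enum):
        PRIORITY_CACHE = "priority-cache"
        NEVER_CACHE = "never-cache"
        STICKY_CACHE = "sticky-cache"
        VOLATILE_CACHE = "volatile-cache"

    def apply_hint(cache, hinted_object, hint_type):
        # hinted_object is a virtual device or a file, per the abstract.
        if hint_type is HintType.NEVER_CACHE:
            cache.exclude(hinted_object)           # bypass caching entirely
        elif hint_type is HintType.STICKY_CACHE:
            cache.pin(hinted_object)               # keep resident once cached
        elif hint_type is HintType.VOLATILE_CACHE:
            cache.mark_evict_first(hinted_object)  # evict ahead of other data
        else:
            cache.prioritize(hinted_object)        # priority-cache default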
20150331802 | METHODS OF CACHE PRELOADING ON A PARTITION OR A CONTEXT SWITCH - A scheme referred to as a “Region-based cache restoration prefetcher” (RECAP) is employed for cache preloading on a partition or a context switch. The RECAP exploits spatial locality to provide a bandwidth-efficient prefetcher to reduce the “cold” cache effect caused by multiprogrammed virtualization. The RECAP groups cache blocks into coarse-grain regions of memory, and predicts which regions contain useful blocks that should be prefetched the next time the current virtual machine executes. Based on these predictions, and using a simple compression technique that also exploits spatial locality, the RECAP provides a robust prefetcher that improves performance without excessive bandwidth overhead or slowdown. | 11-19-2015 |
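A minimal sketch of the region-tracking core of such a prefetcher; the 64 KiB region size and the prefetch callback signature are illustrative:

    REGION_SHIFT = 16   # illustrative: 64 KiB regions

    class RegionPrefetcher:
        def __init__(self):
            self.regions_per_vm = {}    # vm_id -> regions predicted useful

        def record_access(self, vm_id, block_addr):
            regions = self.regions_per_vm.setdefault(vm_id, set())
            regions.add(block_addr >> REGION_SHIFT)

        def on_switch_in(self, vm_id, prefetch):
            # Warm the cache with the regions this VM touched last time it ran.
            for region in self.regions_per_vm.get(vm_id, ()):
                prefetch(region << REGION_SHIFT, 1 << REGION_SHIFT)

Tracking whole regions rather than individual blocks is what keeps both the stored state and the prefetch traffic compact.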
20150347316 | PROCESS FOR MANAGING THE STORAGE OF A LIST OF N ITEMS IN A MEMORY CACHE OF C ITEMS OF A CACHE SYSTEM - Process for managing the storage of a list (L) of N items (I[i]) in a memory cache (M) holding C items (I[i]) of said list, said N items being ordered in said list according to a rank i which depends on their last request time by a user, C, N and i being strictly positive integers. Upon the reception of a request for an item (I[i]), the process provides for calculating a popularity probability f(i) for said requested item, f being an acceleration function, and for deciding whether to move said requested item to a higher rank i according to said popularity probability. | 12-03-2015 |
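A minimal sketch of the probabilistic promotion; the abstract does not give f, so the 1/i form below is purely illustrative, and "higher rank" is read as one step toward the front of the list:

    import random

    def on_request(items, i, f=lambda rank: 1.0 / rank):
        # f is the acceleration function mapping rank to promotion probability.
        if i > 0 and random.random() < f(i):
            items[i - 1], items[i] = items[i], items[i - 1]   # promote one rank
            return i - 1
        return i

Because f can fall off quickly with rank, items deep in the list are rarely moved, which bounds the reordering work done per request.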
20160026579 | Storage Controller and Method for Managing Metadata Operations in a Cache - A cache controller having a cache supported by a non-volatile memory element manages metadata operations by defining a mathematical relationship between a cache line in a data store exposed to a host system and a location identifier associated with an instance of the cache line in the non-volatile memory. The cache controller maintains most recently used bit maps identifying data in the cache, as well as a data characteristic bit map identifying data that has changed since it was added to the cache. The cache controller maintains a second most recently used bit map to replace the current one at an appropriate time, and a fresh bitmap tracks the most recently used bit map. The cache controller uses a collision bitmap, an imposter index, and a quotient to modify cache lines stored in the non-volatile memory element. | 01-28-2016 |
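The abstract's collision bitmap, imposter index, and quotient suggest a quotient-filter-style mapping; the split below is an assumption modeled on that structure, not taken from the patent:

    def locate(cache_line_addr, n_slots, remainder_bits=8):
        # Derive a slot (quotient) and a stored tag (remainder) from one hash.
        h = hash(cache_line_addr)
        quotient = (h >> remainder_bits) % n_slots       # candidate NVM slot
        remainder = h & ((1 << remainder_bits) - 1)      # stored tag; an entry
        return quotient, remainder                       # with the same slot but
                                                         # a wrong tag is an imposter

A collision bitmap would then mark slots occupied by an entry whose quotient differs, so lookups can tell a true hit from an imposter.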
20160041925 | BALANCED CACHE FOR RECENTLY FREQUENTLY USED DATA - The present disclosure presents a method and system for efficiently maintaining an object cache at a maximum size, measured by number of entries, whilst providing a means of automatically removing cache entries when the cache attempts to grow beyond that maximum size. The method for choosing which entries should be removed balances least recently used and least frequently used policies. A flush operation is invoked only when the cache size grows beyond the maximum size, and removes a fixed percentage of entries in one pass. | 02-11-2016 |
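A minimal sketch of a one-pass flush balancing recency and frequency; the scoring blend and the 10% figure are illustrative choices, not the patent's:

    import time

    def flush_if_full(cache, max_size, flush_fraction=0.10):
        # cache: key -> entry dicts with 'key', 'hits', and 'last_used' fields.
        if len(cache) <= max_size:
            return
        now = time.monotonic()
        def score(entry):
            # Lower score = colder: rarely hit and not hit recently.
            return entry['hits'] / (1.0 + (now - entry['last_used']))
        victims = sorted(cache.values(), key=score)
        for entry in victims[:max(1, int(len(cache) * flush_fraction))]:
            del cache[entry['key']]

Removing a fixed percentage per pass amortizes the cost of the flush instead of evicting one entry at a time on every insertion.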
20160055003 | BRANCH PREDICTION USING LEAST-RECENTLY-USED (LRU)-CLASS LINKED LIST BRANCH PREDICTORS, AND RELATED CIRCUITS, METHODS, AND COMPUTER-READABLE MEDIA - Branch prediction using Least-Recently-Used (LRU)-class linked list branch predictors, and related circuits, methods, and computer-readable media are disclosed. In one aspect, a branch predictor circuit comprises branch direction prediction logic and a linked list comprising a plurality of predictor entries, each comprising a link address register. The branch predictor circuit also comprises a LRU indicator indicative of a relative age of each of the predictor entries. The branch predictor circuit is configured to detect a first branch instruction in an instruction stream, and determine whether the first branch instruction is predicted to be taken. Responsive to determining that the first branch instruction is predicted to be taken, the branch predictor circuit allocates a least-recently-used entry of the plurality of predictor entries of the linked list based on the LRU indicator, and stores a sequential address for the first branch instruction in the link address register of the least-recently-used predictor entry. | 02-25-2016 |
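A minimal sketch of the allocation step on a predicted-taken branch; the fixed 4-byte instruction size and list-based LRU order are illustrative simplifications:

    class LinkedListPredictor:
        def __init__(self, n_entries):
            self.link_addr = [None] * n_entries       # link address registers
            self.lru_order = list(range(n_entries))   # front = least recently used

        def on_predicted_taken(self, branch_pc, instr_size=4):
            entry = self.lru_order.pop(0)             # claim the LRU entry
            self.link_addr[entry] = branch_pc + instr_size  # sequential address
            self.lru_order.append(entry)              # entry is now most recent
            return entry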
20160055099 | Least Recently Used Mechanism for Cache Line Eviction from a Cache Memory - A mechanism for evicting a cache line from a cache memory first selects for eviction the least recently used cache line of a group of invalid cache lines. If all cache lines are valid, it selects for eviction the least recently used cache line of a group of cache lines, none of which is also stored within a higher level cache memory such as the L1 cache. Lastly, if all cache lines are valid and there are no non-inclusive cache lines, it selects for eviction the least recently used cache line stored in the cache memory. | 02-25-2016 |
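A minimal sketch of the three-tier selection, assuming the list is ordered least recently used first and each line carries illustrative 'valid' and 'in_higher_level' flags:

    def choose_victim(lines):
        invalid = [l for l in lines if not l['valid']]
        if invalid:
            return invalid[0]                  # 1) LRU invalid line
        non_inclusive = [l for l in lines if not l['in_higher_level']]
        if non_inclusive:
            return non_inclusive[0]            # 2) LRU line absent from e.g. L1
        return lines[0]                        # 3) plain LRU fallback

Preferring non-inclusive victims avoids evicting a line that a higher level cache would then be forced to invalidate as well.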
20160062791 | THREAD-BASED CACHE CONTENT SAVING FOR TASK SWITCHING - Embodiments relate to thread-based cache content saving for task switching in a computer processor. An aspect includes determining a cache entry in a cache of the computer processor that is owned by a first thread, wherein the determination is made based on a hardware thread identifier (ID) of the first thread matching a hardware thread ID in the cache entry. Another aspect includes determining whether the determined cache entry is eligible for prefetching. Yet another aspect includes, based on determining that the determined cache entry is eligible for prefetching, setting a marker in the cache entry to active. | 03-03-2016 |
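A minimal sketch of the scan, assuming entries expose a hardware thread ID field and an eligibility hook; the names are illustrative:

    def mark_thread_entries(cache_entries, hw_thread_id, is_eligible):
        # Match the outgoing thread's hardware thread ID against each entry,
        # and set the marker on entries eligible for prefetching after the switch.
        for entry in cache_entries:
            if entry['hw_thread_id'] == hw_thread_id and is_eligible(entry):
                entry['marker'] = True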
20160085691 | REGULATING MEMORY ACTIVATION RATES - A technique includes monitoring activation rates of a plurality of memory locations associated with a plurality of memory addresses and regulating the activation rates. The regulating includes selectively updating a cache with the memory addresses based on the activation rates. | 03-24-2016 |
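A minimal sketch of the regulation loop, assuming per-address activation rates sampled over some monitoring window; the threshold and the set-based cache are illustrative:

    def regulate(activation_rates, cached_addresses, threshold):
        # Pull frequently activated addresses into the cache so later accesses
        # hit the cache instead of re-activating the underlying memory row.
        for addr, rate in activation_rates.items():
            if rate > threshold:
                cached_addresses.add(addr)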
20160140053 | RE-MRU OF METADATA TRACKS TO REDUCE LOCK CONTENTION - For reducing lock contention on a Modified Least Recently Used (MLRU) list for metadata tracks: upon conclusion of an access of a metadata track, if the metadata track is located in a predefined lower percentile of the MLRU list and has been accessed (counting the current access) a predetermined number of times, the metadata track is removed from its current position in the MLRU list and moved to the Most Recently Used (MRU) end of the MLRU list. | 05-19-2016 |
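A minimal sketch of the two-part condition; the 50% cutoff and access threshold are illustrative, and index 0 stands in for the MRU end:

    def on_access_end(mlru_list, track, lower_fraction=0.5, min_accesses=2):
        pos = mlru_list.index(track)
        in_lower = pos >= len(mlru_list) * (1 - lower_fraction)
        # Re-MRU only when the track is deep in the list AND has been accessed
        # enough times (counting this access).
        if in_lower and track['access_count'] >= min_accesses:
            mlru_list.remove(track)
            mlru_list.insert(0, track)    # move to the MRU end

Skipping the move for tracks already near the MRU end, or touched only once, is what cuts traffic on the list lock.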
20160378355 | METHOD AND SYSTEM FOR IMPLEMENTING PERFORMANCE TIER DE-DUPLICATION IN A VIRTUALIZATION ENVIRONMENT - The present application provides an improved approach for managing performance tier de-duplication in a virtualization environment. A content cache is implemented on high performance tiers of storage in order to maintain a working set for the user virtual machines accessing the system, and associates fingerprints with data stored therein. During write requests from the user virtual machines, fingerprints are calculated for the data to be written. However, no de-duplication is performed during the write. During read requests, fingerprints corresponding to the data to be read are retrieved and matched with the fingerprints associated with the data in the content cache. Thus, while multiple pieces of data having the same fingerprints may be written to the lower performance tiers of storage, only one of those pieces of data having that fingerprint will be stored in the content cache for fulfilling read requests. | 12-29-2016 |
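A minimal sketch of the asymmetric write and read paths; SHA-256 stands in for whatever fingerprint the system uses, and the backing-fetch callback is illustrative:

    import hashlib

    class ContentCache:
        def __init__(self):
            self.by_fingerprint = {}    # one cached copy per fingerprint

        @staticmethod
        def fingerprint(data):
            return hashlib.sha256(data).hexdigest()

        def on_write(self, data):
            # No de-duplication on the write path: just record the fingerprint.
            return self.fingerprint(data)

        def on_read(self, fp, fetch_from_backing):
            # All reads of duplicate data resolve to one copy in the fast tier,
            # even if duplicates exist on the lower performance tiers.
            if fp not in self.by_fingerprint:
                self.by_fingerprint[fp] = fetch_from_backing()
            return self.by_fingerprint[fp]

Deferring de-duplication to the read path keeps writes cheap while still ensuring the scarce high-performance tier holds at most one copy per fingerprint.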