Entries |
Document | Title | Date |
20080209130 | Translation Data Prefetch in an IOMMU - In an embodiment, a system memory stores a set of input/output (I/O) translation tables. One or more I/O devices initiate direct memory access (DMA) requests including virtual addresses. An I/O memory management unit (IOMMU) is coupled to the I/O devices and the system memory, wherein the IOMMU is configured to translate the virtual addresses in the DMA requests to physical addresses to access the system memory according to an I/O translation mechanism implemented by the IOMMU. The IOMMU comprises one or more caches, and is configured to read translation data from the I/O translation tables responsive to a prefetch command that specifies a first virtual address. The reads are responsive to the first virtual address and the I/O translation mechanism, and the IOMMU is configured to store data in the caches responsive to the read translation data. | 08-28-2008 |
20080209131 | STRUCTURES, SYSTEMS AND ARRANGEMENTS FOR CACHE MANAGEMENT - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design is provided. The design structure generally includes a processing system. The processing system generally includes a processor, a cache coupled to the processor to provide at least one line of binary storage to the processor module, an eviction management module coupled to the processor to monitor lines of code interacting with the cache and to count storage related occurrences of the lines of code with respect to the cache, the lines of code having an identifier, and a cache directory to store the count and the identifier, wherein if the processor requests cache capacity, the cache directory provides eviction related data for a line of code stored in the cache to the processor. | 08-28-2008 |
20080244185 | Reduction of cache flush time using a dirty line limiter - The invention relates to a method for reducing cache flush time of a cache in a computer system. The method includes populating at least one of a plurality of directory entries of a dirty line directory based on modification of the cache to form at least one populated directory entry, and de-populating a pre-determined number of the plurality of directory entries according to a dirty line limiter protocol causing a write-back from the cache to a main memory, where the dirty line limiter protocol is based on a number of the at least one populated directory entry exceeding a pre-defined limit. | 10-02-2008 |
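The dirty line limiter above reduces to a simple invariant: a bounded directory of dirty lines, where exceeding the pre-defined limit forces write-backs. A minimal Python sketch of that invariant, in which the FIFO de-population order and all names are illustrative assumptions rather than the patented protocol:

    from collections import OrderedDict

    class DirtyLineLimiter:
        """Bounds the number of dirty cache lines tracked in a directory."""
        def __init__(self, limit, write_back):
            self.limit = limit              # pre-defined limit on populated entries
            self.write_back = write_back    # callback that destages a line to memory
            self.directory = OrderedDict()  # dirty line directory, in fill order

        def on_cache_modify(self, line_addr, data):
            # Populate (or refresh) the directory entry for a modified line.
            self.directory[line_addr] = data
            self.directory.move_to_end(line_addr)
            # De-populate entries once the limit is exceeded, causing a
            # write-back from the cache to main memory.
            while len(self.directory) > self.limit:
                addr, old = self.directory.popitem(last=False)
                self.write_back(addr, old)

    limiter = DirtyLineLimiter(limit=2, write_back=lambda a, d: print("destage", hex(a)))
    for addr in (0x100, 0x140, 0x180, 0x1c0):
        limiter.on_cache_modify(addr, b"...")

Because the dirty population never exceeds the limit, the worst-case flush time is bounded by the limit rather than by the cache size.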
20080244186 | WRITE FILTER CACHE METHOD AND APPARATUS FOR PROTECTING THE MICROPROCESSOR CORE FROM SOFT ERRORS - A write filter cache system for protecting a microprocessor core from soft errors and a method thereof are provided. In one aspect, data coming from a processor core to be written in primary cache memory, for instance, an L1 cache memory system, is buffered in a write filter cache placed between the primary cache memory and the processor core. The data from the write filter is moved to the main cache memory only if it is verified that the main thread's data is soft error free, for instance, by comparing the main thread's data with that of its redundant thread. The main cache memory only keeps clean data associated with accepted checkpoints. | 10-02-2008 |
20080301374 | STRUCTURE FOR DYNAMIC LIVELOCK RESOLUTION WITH VARIABLE DELAY MEMORY ACCESS QUEUE - A design structure for resolving the occurrence of livelock at the interface between the processor core and memory subsystem controller. Livelock is resolved by introducing a livelock detection mechanism (which includes livelock detection utility or logic) within the processor to detect a livelock condition and dynamically change the duration of the delay stage(s) in order to alter the “harmonic” fixed-cycle loop behavior. The livelock detection logic (LDL) counts the number of flushes a particular instruction takes or the number of times an instruction re-issues without completing. The LDL then compares that number to a preset threshold number. Based on the result of the comparison, the LDL triggers the implementation of one of two different livelock resolution processes. These processes include dynamically configuring the delay queue within the processor into one of two different configurations and changing the sequence and timing of handling memory access instructions, based on the specific configuration of the delay queue. | 12-04-2008 |
20080307164 | Method And System For Memory Block Flushing - A method and system for flushing physical memory blocks in a memory device is disclosed. The method includes detecting a quantity of available memory, background flushing partially obsolete memory blocks if the quantity decreases to a background activation threshold, disabling the background flushing if the quantity increases to a background deactivation threshold, foreground flushing the partially obsolete memory blocks if the quantity decreases to a foreground activation threshold, and disabling the foreground flushing if the quantity increases to a foreground deactivation threshold. The thresholds may be adaptively defined. The background flushing may occur when the host interface is idle. The foreground flushing may interleave writing operations with flushing operations while a write command is unfinished. The system includes a memory for receiving data with a host write command, and a controller for detecting a quantity of available memory and enabling and disabling background and foreground flushing depending on adaptive thresholds. | 12-11-2008 |
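The background/foreground scheme above is a pair of hysteresis loops over the quantity of available memory. A sketch of that control logic, assuming percentage-of-free-blocks thresholds (the abstract says the actual thresholds may be adaptively defined):

    class FlushController:
        """Hysteresis control for background and foreground flushing."""
        def __init__(self, bg_on=40, bg_off=60, fg_on=15, fg_off=30):
            # Foreground (urgent) thresholds sit below background ones.
            self.bg_on, self.bg_off = bg_on, bg_off
            self.fg_on, self.fg_off = fg_on, fg_off
            self.background = self.foreground = False

        def update(self, free_pct):
            # Enable at the activation threshold; disable only after free
            # space recovers to the (higher) deactivation threshold.
            if free_pct <= self.fg_on:
                self.foreground = True      # interleave flushes with writes
            elif free_pct >= self.fg_off:
                self.foreground = False
            if free_pct <= self.bg_on:
                self.background = True      # flush while host interface idles
            elif free_pct >= self.bg_off:
                self.background = False
            return self.background, self.foreground

The gap between each activation and deactivation threshold keeps the controller from oscillating when free space hovers near a single cut-off.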
20080307165 | INFORMATION PROCESSOR, METHOD FOR CONTROLLING CACHE FLUSH, AND INFORMATION PROCESSING CONTROLLER - An information processor, a method for controlling cache flush, and an information processing controller are provided that increase the data processing speed by efficiently performing cache flushing on a cache memory. A CPU includes a load/store unit and a flush control unit. The CPU controls data stored in a cache through a cache controller. When detecting an “.f” signal, the flush control unit waits until a single cache line is accessed. When determining that a single cache line has been accessed, the flush control unit issues a cache flush instruction to the cache controller. | 12-11-2008 |
20090019228 | Data Cache Invalidate with Data Dependent Expiration Using a Step Value - According to embodiments of the invention, a step value and a step-interval cache coherency protocol may be used to update and invalidate data stored within cache memory. A step value may be an integer value and may be stored within a cache directory entry associated with data in the memory cache. Upon reception of a cache read request, along with the normal address comparison to determine if the data is located within the cache, a current step value may be compared with the stored step value to determine if the data is current. If the step values match, the data may be current and a cache hit may occur. However, if the step values do not match, the requested data may be provided from another source. Furthermore, an application may update the current step value to invalidate old data stored within the cache and associated with a different step value. | 01-15-2009 |
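The step-value test above adds one integer comparison to the normal tag check. A small sketch, assuming a software-visible directory (a real implementation would keep the stored step in the hardware cache directory entry):

    class SteppedCache:
        """Cache whose entries expire when the global step value advances."""
        def __init__(self):
            self.current_step = 0
            self.entries = {}  # address -> (stored_step, data)

        def fill(self, addr, data):
            self.entries[addr] = (self.current_step, data)

        def read(self, addr):
            hit = self.entries.get(addr)
            if hit is not None and hit[0] == self.current_step:
                return hit[1]   # address and step value both match: hit
            return None         # stale or absent: fetch from another source

        def advance_step(self):
            # Application-driven bulk invalidation: every entry filled under
            # an older step value now misses without being touched.
            self.current_step += 1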
20090031083 | Storage control unit with memory cache protection via recorded log - A “Logging” method and apparatus is provided to protect control unit cached data not yet written to backing storage disk drives. This recording mechanism copies “WRITE DATA” to a log at a target location, logically or physically external to the storage controllers, that is equally common to all members of the set of distributed storage control units managing a common storage pool. Upon the failure of one of the members of the set of control units, the “Log” information is available to ensure that pending “write” data is written to the proper location on the disk drives upon a recovery action. One of the surviving members of the set assumes control of the storage managed by the failing unit by utilizing the recorded information to ensure that data not written to backing storage (disks) up to the point of failure is then written to the disk backing storage. The surviving member of the set recovering the failing control unit storage (disk set) ownership will thereby “flush” (WRITE) the journaled WRITE DATA to the backing storage disk drives before allowing normal operations to proceed. | 01-29-2009 |
20090049249 | Transparent cache system and method - A transparent caching system and a method for transparent caching are provided. The system includes a cache for storing objects, a processor for executing instructions of the cache, and clone handlers that provide a copy of a cached object. A cache key, corresponding uniquely to the cached object, is configured to identify and look up the cached object. A pluggable expiration handler is configured to authorize the transparent caching system to clean up the cached object, and a cache object helper determines whether information in the cached object is still valid. If a cache hit is received to retrieve the cached object corresponding to the cache key, a copy of the cached object is provided. To determine if the cached object is to be cleaned up, the expiration handler takes into account at least one of a cache hit count, a time since a last cache hit, and an available memory. | 02-19-2009 |
20090089508 | METHOD FOR REDUCING NUMBER OF WRITES IN A CACHE MEMORY - Disclosed is a method for reducing the number of writes in a write-back non-volatile cache memory. The method comprises: writing a plurality of data in the cache memory, wherein cache line metadata for each of the plurality of data is marked as dirty; determining a set of data of the plurality of the data in the cache memory to be flushed to a hard disk, wherein the hard disk is operatively coupled to the cache memory; flushing the set of data of the plurality of data to the hard disk from the cache memory; and writing a clean-marker to the cache memory specifying which of the plurality of the data has been flushed to the disk. | 04-02-2009 |
20090100230 | System and method to protect data stored in a storage system - In an example of an embodiment of the invention, a system for recording data generated by a client server and transmitted to a storage system is provided. The system comprises a storage system and a processor located remotely from the storage system and linked to the storage system via a network. The processor determines that a selected data processing operation is to be performed with respect to data stored in the storage system, and determines that a record of at least some of the data stored in the storage system is required prior to performing the selected data processing operation. The processor also generates a command comprising a request to generate a record of the at least some of the stored data, the command being generated in accordance with a well-known standard, and transmits the command to the storage system to generate the record, via the network in accordance with Internet Protocol (IP). Examples of other systems and methods are also disclosed. | 04-16-2009 |
20090113135 | MECHANISM FOR DATA CACHE REPLACEMENT BASED ON REGION POLICIES - A system and method for cache replacement includes: augmenting each cache block in a cache region with a region hint indicating a temporal priority of the cache block; receiving an indication that a cache miss has occurred; and selecting for eviction the cache block comprising the region hint indicating a low temporal priority. | 04-30-2009 |
20090113136 | CACHING FOR STRUCTURAL INTEGRITY SCHEMES - A method for data integrity protection includes storing items of data in a plurality of data blocks in a storage medium. Respective block signatures are stored in an integrity structure in the storage medium. A block signature of the given data block is computed in response to a first request to read a first data item from a given data block, and the computed signature is verified against a stored signature read from the integrity structure. The verified block signature is saved in a secure cache. The block signature is recomputed upon receiving a second request to read a second data item, subsequent to the first request, and is verified against the verified block signature in the secure cache. The data item is output from the storage medium in response to verifying the recomputed block signature. | 04-30-2009 |
20090157974 | System And Method For Clearing Data From A Cache - A system and method for clearing data from a cache is disclosed. The method may include the steps of receiving data at a cache of a self-caching storage device, determining a cost-effectiveness of flushing a logical block from the cache and, if the current available capacity of the cache is greater than a minimum capacity parameter, only flushing the logical block if a predetermined criteria is met, regardless of whether the storage device is idle. The system may include a cache storage, a main storage and a controller configured to only flush a logical block from the cache if a determined cost effectiveness meets a predetermined criteria when the current available capacity of the cache is greater than a minimum capacity parameter. | 06-18-2009 |
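The decision rule above has two inputs: capacity pressure and a per-block cost-effectiveness test. A sketch of the gate, where the benefit/cost model is a placeholder assumption (the abstract leaves the predetermined criteria unspecified):

    def should_flush(block, free_capacity, min_capacity, threshold):
        """Decide whether to flush one logical block from the cache."""
        if free_capacity <= min_capacity:
            return True   # at or below minimum capacity: flush unconditionally
        # Above the minimum, flush only if it is cost-effective to do so,
        # regardless of whether the storage device is idle.
        benefit = block["reclaimable_space"] / max(1, block["copy_cost"])
        return benefit >= threshold

    print(should_flush({"reclaimable_space": 64, "copy_cost": 8},
                       free_capacity=900, min_capacity=100, threshold=4))  # True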
20090172292 | ACCELERATING SOFTWARE LOOKUPS BY USING BUFFERED OR EPHEMERAL STORES - A method and apparatus for accelerating lookups in an address based table is herein described. When an address and value pair is added to an address based table, the value is privately stored in the address to allow for quick and efficient local access to the value. In response to the private store, a cache line holding the value is transitioned to a private state, to ensure the value is not made globally visible. Upon eviction of the privately held cache line, the information is not written-back to ensure locality of the value. In one embodiment, the address based table includes a transactional write buffer to hold addresses, which correspond to tentatively updated values during a transaction. Accesses to the tentative values during the transaction may be accelerated through use of annotation bits and private stores as discussed herein. Upon commit of the transaction, the values are copied to the location to make the updates globally visible. | 07-02-2009 |
20090198902 | MEMORY MAPPING TECHNIQUES - Memory mapping techniques for non-volatile memory are disclosed where logical sectors are mapped into physical pages using data structures in volatile and non-volatile memory. In some implementations, a first lookup table in non-volatile memory maps logical sectors directly into physical pages. A second lookup table in volatile memory holds the physical address of the first lookup table in non-volatile memory. In some implementations, a cache in volatile memory holds the physical addresses of the most recently written logical sectors. Also disclosed is a block TOC describing block content which can be used for garbage collection and restore operations. | 08-06-2009 |
20090210629 | METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR SELECTIVELY PURGING CACHE ENTRIES - A method, system and computer program product for selectively purging entries in a cache of a computer system. The method includes determining a starting storage address and a length of the storage address range to be purged, determining preset values for a congruence class and a compartment of a cache directory, accessing the cache directory based on the preset value of the congruence class, and selecting an entry in the cache directory based on the preset value of the compartment, determining validity of the entry accessed by examining an ownership tag of the entry, comparing a line address of the entry with the starting storage address and a sum of the starting storage address and the length of the storage address range, and selectively purging the entry based on the comparison result. | 08-20-2009 |
20090222627 | METHOD AND APPARATUS FOR HIGH SPEED CACHE FLUSHING IN A NON-VOLATILE MEMORY - An invention is provided for performing a cache flush in a non-volatile memory. The invention includes maintaining a plurality of free memory blocks within the non-volatile memory. When a flush cache command is issued, a flush cache map is examined to obtain a memory address of a memory block in the plurality of free memory blocks within the non-volatile memory. The flush cache map includes a plurality of entries, each entry indicating a memory block of the plurality of free memory blocks. Then, a cache block is written to a memory block at the obtained memory address within the non-volatile memory. In this manner, when a flush cache command is received, the flush cache map allows cache blocks to be written to free memory blocks in the non-volatile memory without requiring a non-volatile memory search for free blocks or requiring erasing of memory blocks storing old data. | 09-03-2009 |
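The point of the flush cache map above is that the flush path never searches or erases: it only consumes pre-erased free blocks from a list. A minimal sketch under that assumption:

    class FlushCacheMap:
        """Maps flush-time writes onto a pool of pre-erased free blocks."""
        def __init__(self, free_block_addrs):
            self.free = list(free_block_addrs)  # maintained in the background

        def flush(self, cache_blocks, write_block):
            for data in cache_blocks:
                if not self.free:
                    raise RuntimeError("free-block pool exhausted")
                addr = self.free.pop(0)   # next entry in the flush cache map
                write_block(addr, data)   # direct write: no search, no erase

Replenishing the pool (erasing blocks that hold old data) happens outside the flush path, which is what makes the flush itself high speed.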
20090222628 | MEMORY SYSTEM - A controller determines whether data stored in a first storing area should be flushed to a second storing area or a third storing area. When flushing of data in a track unit from at least one of the first storing area and the second storing area to the third storing area is determined, the controller collects data included in the flushed data in the track unit from at least one of the first storing area and the second storing area including the storing area from which the flushing of the data is determined, merges the flushed data and the collected data, and writes the merged data in the third storing area. | 09-03-2009 |
20090248987 | Memory System and Data Storing Method Thereof - A memory system includes a memory device having a cache area and a main area, and a memory controller configured to control the memory device, wherein the memory controller is configured to dump file data into the cache area in response to a flush cache command. | 10-01-2009 |
20090300290 | Memory Metadata Used to Handle Memory Errors Without Process Termination - Embodiments of the invention provide an interrupt handler configured to distinguish between critical and non-critical unrecoverable memory errors, yielding different actions for each. Doing so may allow a system to recover from certain memory errors without having to terminate a running process. In addition, when an operating system critical task experiences an unrecoverable error, such a task may be acting on behalf of a non-critical process (e.g., when swapping out a virtual memory page). When this occurs, an interrupt handler may respond to a memory error with the same response that would result had the process itself performed the memory operation. Further, firmware may be configured to perform diagnostics to identify potential memory errors and alert the operating system before a memory region state change occurs, such that the memory error would become critical. | 12-03-2009 |
20090307432 | Memory management arrangements - A method of operating a virtual memory is disclosed. The method can include detecting the existence of a central memory loan pool, determining a segment of memory that is loanable, and transmitting an indicator that the segment is available for loaning to the memory loan pool. The operating system contributing memory can monitor its actual memory capacity and reclaim the loaned segment if the amount of memory available to the loaning operating system (OS) falls below a predetermined value. Other embodiments are also disclosed. | 12-10-2009 |
20100011168 | METHOD AND APPARATUS FOR CACHE FLUSH CONTROL AND WRITE RE-ORDERING IN A DATA STORAGE SYSTEM - Methods and apparatus for cache flush control and write re-ordering in a data storage system are provided. A cache flush control method includes flushing information stored in a cache memory to a first storage apparatus of a plurality of storage apparatuses included in a data storage system when a cache flush condition is generated, and performing a write command in a second storage apparatus of the plurality of storage apparatuses which has a write speed lower than the first storage apparatus, according to information stored in the first storage apparatus processed with the cache flush. | 01-14-2010 |
20100023700 | Dynamically Maintaining Coherency Within Live Ranges of Direct Buffers - A mechanism for reducing coherency problems in a data processing system is provided. Source code that is to be compiled is received and analyzed to identify at least one of a plurality of loops that contain a memory reference. A determination is made as to whether the memory reference is an access to a global memory that should be handled by a direct buffer. Responsive to an indication that the memory reference is an access to the global memory that should be handled by the direct buffer, the memory reference is marked for direct buffer transformation. The direct buffer transformation is then applied to the memory reference. | 01-28-2010 |
20100115205 | SYSTEM, METHOD, AND COMPUTER-READABLE MEDIUM FOR SPOOL CACHE MANAGEMENT - A system, method, and computer-readable medium that facilitate efficient use of cache memory in a massively parallel processing system are provided. A residency time of a data block to be stored in cache memory or a disk drive is estimated. A metric is calculated for the data block as a function of the residency time. The metric may further be calculated as a function of the data block size. One or more data blocks stored in cache memory are evaluated by comparing a respective metric of the one or more data blocks with the metric of the data block to be stored. A determination is then made to either store the data block on the disk drive or flush the one or more data blocks from the cache memory and store the data block in the cache memory. In this manner, the cache memory may be more efficiently utilized by storing smaller data blocks with lesser residency times and flushing larger data blocks with significant residency times from the cache memory. The disclosed cache management mechanisms are effective for many workloads and are adaptable to various database usage scenarios without requiring detailed studies of the particular data demographics and workload. | 05-06-2010 |
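The spool-cache decision above compares a metric computed from residency time and block size. A sketch using their product as the metric; the actual function is not specified in the abstract:

    def metric(residency_time_s, size_bytes):
        # Large blocks resident for a long time are the best flush victims.
        return residency_time_s * size_bytes

    def place(new_block, cached_blocks):
        """Return (blocks_to_flush, cache_new) for an incoming data block.
        Blocks are (residency_time_s, size_bytes) tuples."""
        if not cached_blocks:
            return [], True
        worst = max(cached_blocks, key=lambda b: metric(*b))
        if metric(*new_block) < metric(*worst):
            return [worst], True    # flush the larger/longer-resident block
        return [], False            # store the new block on disk instead

    print(place((2.0, 4096), [(30.0, 1 << 20), (1.0, 512)]))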
20100174869 | MAPPING ADDRESS TABLE MAINTENANCE IN A MEMORY DEVICE - A method and system maintains an address table for mapping logical groups to physical addresses in a memory device. The method includes receiving a request to set an entry in the address table and selecting and flushing entries in an address table cache depending on the existence of the entry in the cache and whether the cache meets a flushing threshold criteria. The flushed entries include less than the maximum capacity of the address table cache. The flushing threshold criteria includes whether the address table cache is full or if a page exceeds a threshold of changed entries. The address table and/or the address table cache may be stored in a non-volatile memory and/or a random access memory. Improved performance may result using this method and system due to the reduced number of write operations and time needed to partially flush the address table cache to the address table. | 07-08-2010 |
20100180084 | CACHE-COHERENCY PROTOCOL WITH HELD STATE - A new “held” (“H”) cache-coherency state is introduced for directory-based multiprocessor systems. Using the held state enables embodiments of the present invention to track sharers that have a shared copy of a cache line after a directory runs out of space for holding information that identifies processors that have received shared copies of the cache line (e.g., pointers to sharers of the cache line). In these embodiments, when a directory entry is full, the system provides subsequent shared copies of the cache line to sharers in the held state and tracks the identity of the held-copy owners in a data field in the entry for the cache line in a home node. | 07-15-2010 |
20100185819 | INTELLIGENT CACHE INJECTION - A first cache simultaneously broadcasts, in a single message, a request for a cache line and a request to accept a future related evicted cache line to multiple other caches. Each of the multiple other caches evaluates its occupancy to derive an occupancy value that reflects its ability to accept the future related evicted cache line. In response to receiving a requested cache line, the first cache evicts the related evicted cache line to the cache with the highest occupancy value. | 07-22-2010 |
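The injection decision above needs only the bids gathered from the single broadcast. A one-function sketch (how each peer cache scores its own occupancy is assumed, not defined here):

    def choose_victim_target(peer_bids):
        """peer_bids: {cache_id: occupancy_value} from the broadcast replies.
        Higher value = better able to accept the future evicted line."""
        return max(peer_bids, key=peer_bids.get)

    print(choose_victim_target({"L2_a": 3, "L2_b": 7, "L2_c": 5}))  # L2_b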
20100185820 | PROCESSOR POWER MANAGEMENT AND METHOD - A data processing device is disclosed that includes multiple processing cores, where each core is associated with a corresponding cache. When a processing core is placed into a first sleep mode, the data processing device initiates a first phase. If any cache probes are received at the processing core during the first phase, the cache probes are serviced. At the end of the first phase, the cache corresponding to the processing core is flushed, and subsequent cache probes are not serviced at the cache. Because it does not service the subsequent cache probes, the processing core can therefore enter another sleep mode, allowing the data processing device to conserve additional power. | 07-22-2010 |
20100191917 | Administering Registered Virtual Addresses In A Hybrid Computing Environment Including Maintaining A Watch List Of Currently Registered Virtual Addresses By An Operating System - Administering registered virtual addresses in a hybrid computing environment that includes a host computer and an accelerator, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions, the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where administering registered virtual addresses includes maintaining, by an operating system, a watch list of ranges of currently registered virtual addresses; upon a change in physical to virtual address mappings of a particular range of virtual addresses falling within the ranges included in the watch list, notifying the system level message passing module by the operating system of the change; and updating, by the system level message passing module, a cache of ranges of currently registered virtual addresses to reflect the change in physical to virtual address mappings. | 07-29-2010 |
20100228921 | CACHE HIT MANAGEMENT - A system and method for cache hit management. | 09-09-2010 |
20100228922 | METHOD AND SYSTEM TO PERFORM BACKGROUND EVICTIONS OF CACHE MEMORY LINES - A method and system to perform background evictions of cache memory lines are provided. In one embodiment of the invention, when a processor of a system determines that the occupancy rate of its bus interface is between a low and a high threshold, the processor performs evictions of cache memory lines that are dirty. In another embodiment of the invention, the processor performs evictions of the dirty cache memory lines when a timer between each periodic clock interrupt of an operating system has expired. By performing background evictions of dirty cache memory lines, the number of dirty cache memory lines required to be evicted before the processor changes its state from a high power state to a low power state is reduced. | 09-09-2010 |
20100235584 | Lateral Castout (LCO) Of Victim Cache Line In Data-Invalid State - A victim cache line having a data-invalid coherence state is selected for castout from a first lower level cache of a first processing unit. The first processing unit issues on an interconnect fabric a lateral castout (LCO) command identifying the victim cache line to be castout from the first lower level cache, indicating the data-invalid coherence state, and indicating that a lower level cache is an intended destination of the victim cache line. In response to a coherence response to the LCO command indicating success of the LCO command, the victim cache line is removed from the first lower level cache and held in a second lower level cache of a second processing unit in the data-invalid coherence state. | 09-16-2010 |
20100312971 | Dynamic Operating Point Modification in an Integrated Circuit - In one embodiment, an integrated circuit includes a processor, an internal memory, and a memory controller coupled to an external memory. The integrated circuit may support two or more modes of operation, with different operating points. To switch from one operating point to another, code executed by the processor may copy switch code from the external memory into the internal memory, and may jump to the switch code. Executing out of the internal memory, the switch code may communicate with the memory controller to cause the external memory to enter into self-refresh mode. The operating point may be altered, and the switch code may reinitialize the memory controller after the integrated circuit has stabilized at the new operating point. After the memory controller's physical interface circuit has relocked, the external memory may exit self-refresh mode. | 12-09-2010 |
20100325363 | HIERARCHICAL OBJECT CACHING BASED ON OBJECT VERSION - A method and system for hierarchical caching of objects of an ERP system is provided. A caching system comprises a server cache component that is executed by a server and a client cache component that is executed by each client of the server. The server cache component maintains a server cache at the server, and the client cache component maintains a client cache at each client. The client cache components also cache the objects in local client caches. Upon opening an object, the client cache component checks its local client cache to determine whether the object is cached. If so, then the client cache component need not retrieve the object from the server. Thus, the caching system is hierarchical in that each server and client maintains its own cache. | 12-23-2010 |
20100325364 | CACHE CONTROLLER, METHOD FOR CONTROLLING THE CACHE CONTROLLER, AND COMPUTING SYSTEM COMPRISING THE SAME - A cache controller, a method for controlling the cache controller, and a computing system comprising the same are provided. The computer system comprises a processor and a cache controller. The cache controller is electrically connected to the processor and comprises a first port, a second port, and at least one cache. The first port is configured to receive an address of a content, wherein a type of the content is one of instruction and data. The second port is configured to receive an information bit corresponding to the content, wherein the information bit indicates the type of the content. The at least one cache comprises at least one cache line. Each of the cache lines comprises a content field and a corresponding information field. The content and the information bit are stored in the content field of one of the cache lines and the corresponding information field, respectively, according to the information bit and the address. Thereby, instructions and data are separated in a unified cache. | 12-23-2010 |
20110029737 | EFFICIENTLY SYNCHRONIZING WITH SEPARATED DISK CACHES - In a method of synchronizing with a separated disk cache, the separated cache is configured to transfer cache data to a staging area of a storage device. An atomic commit operation is utilized to instruct the storage device to atomically commit the cache data to a mapping scheme of the storage device. | 02-03-2011 |
20110035553 | METHOD AND SYSTEM FOR CACHE MANAGEMENT - Systems and methods for managing cached content are disclosed. More particularly, embodiments disclosed herein may allow cached content to be updated (e.g. regenerated or replaced) in response to a notification. Specifically, embodiments disclosed herein may process a notification pertaining to content stored in a cache. Processing the notification may include locating cached content associated with the notification. After the cached content which corresponds to the notification is found, an appropriate action may be taken. For example, the cached content may be flushed from the cache or a request may be regenerated. As a result of the action, new content is generated. This new content is then used to replace or update the cached content. | 02-10-2011 |
20110035554 | Memory Management Methods and Systems - A method and an apparatus for determining a usage level of a memory device to notify a running application to perform memory reduction operations selected based on the memory usage level are described. An application calls APIs (Application Programming Interfaces) integrated with the application code in the system to perform memory reduction operations. A memory usage level is determined according to a memory usage status received from the kernel of a system. A running application is associated with application priorities ranking multiple running applications statically or dynamically. Selecting memory reduction operations and notifying a running application are based on application priorities. Alternatively, a running application may determine a mode of operation to directly reduce memory usage in response to a notification for reducing memory usage without using API calls to other software. | 02-10-2011 |
20110082983 | CPU INSTRUCTION AND DATA CACHE CORRUPTION PREVENTION SYSTEM - Various exemplary embodiments relate to a cache corruption prevention system and a related method. A cache memory may contain contents that are susceptible to corruption. A cache controller, with the use of a threshold timer, may employ various operations to flush modified cache contents into a main memory and invalidate cache contents so that they are overwritten. Some operations include periodically flushing and invalidating the whole cache memory, periodically flushing and invalidating modified contents, and periodically flushing and invalidating contents based on the time saved in the cache memory. By overwriting cache contents that might otherwise be constantly stored in the cache memory, the system minimizes the probability of cache contents becoming corrupt. The periodic updating of the main memory may also increase the probability of successfully recovering from potential cache parity errors while still maintaining high performance associated with using a cache memory. | 04-07-2011 |
20110113202 | CACHE FLUSH BASED ON IDLE PREDICTION AND PROBE ACTIVITY LEVEL - A processing node tracks the probe activity level associated with its cache. The processing node and/or processing system further predicts an idle duration. If the probe activity level increases above a threshold probe activity level, and the predicted idle duration is above an idle duration threshold, the processing node flushes its cache to prevent probes to the cache. If the probe activity level is above the threshold probe activity level but the predicted idle duration is too short, the performance state of the processing node is increased above its current performance state to provide enhanced performance capability in responding to the probe requests. | 05-12-2011 |
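The node above chooses between flushing and boosting based on two signals. A sketch of that decision, with illustrative thresholds and the common convention that a lower P-state number means higher performance:

    def on_idle_entry(probe_rate, predicted_idle_ms, pstate,
                      probe_threshold=1000, idle_threshold_ms=50):
        if probe_rate > probe_threshold:
            if predicted_idle_ms > idle_threshold_ms:
                # Long idle expected: flush so probes need not wake the cache.
                return "flush_cache", pstate
            # Idle too short to amortize a flush: answer probes faster instead.
            return "keep_cache", max(0, pstate - 1)
        return "keep_cache", pstate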
20110153951 | GLOBAL INSTRUCTIONS FOR SPIRAL CACHE MANAGEMENT - A pipelined cache memory and a method of operation support global operations within the cache. The cache may be a spiral cache, with a move-to-front M2F network for moving values from a backing store to a front-most tile coupled to a processor or lower-order level of a memory hierarchy and a spiral push-back network for pushing out modified values to the backing-store. The cache controller manages application of global commands by propagating individual commands to the tiles. The global commands may provide zeroing, flushing and reconciling of the given tiles. Commands for interrupting and resuming interrupted global commands may be implemented, to reduce halting or slowing of processing while other global operations are in process. A line detector within each tile supports reconcile and flush operations, and a line patcher in the controller provides for initializing address ranges with no processor intervention. | 06-23-2011 |
20110153952 | SYSTEM, METHOD, AND APPARATUS FOR A CACHE FLUSH OF A RANGE OF PAGES AND TLB INVALIDATION OF A RANGE OF ENTRIES - Systems, methods, and apparatus for performing the flushing of a plurality of cache lines and/or the invalidation of a plurality of translation look-aside buffer (TLB) entries are described. In one such method for flushing a plurality of cache lines of a processor, a single instruction is received that includes a first field indicating that the plurality of cache lines of the processor are to be flushed, and in response to the single instruction, the plurality of cache lines of the processor are flushed. | 06-23-2011 |
20110173395 | TEMPERATURE-AWARE BUFFERED CACHING FOR SOLID STATE STORAGE - A system and method for managing a cache includes monitoring a temperature of regions on a secondary storage based on a cumulative cost to access pages from each region of the secondary storage. Pages of similar temperature are grouped into logical blocks. Data is written to a cache in a logical block granularity by overwriting cooler blocks with hotter blocks. | 07-14-2011 |
20110179228 | METHOD OF STORING LOGICAL DATA OBJECTS AND SYSTEM THEREOF - Various embodiments for storing a logical object are provided. In one such embodiment, by way of example only, incoming data corresponding to a logical data object is divided into a plurality of independent streams, and each data chunk of a plurality of obtained data chunks is associated with a corresponding stream among the plurality of independent streams. At least one of the obtained data chunks and derivatives thereof is sequentially accommodated in accordance with the order in which the obtained chunks are received, while keeping the association with the corresponding streams. A global index is generated as a single meta-data stream accommodated in the logical data object and comprising information common to the plurality of independent streams and related to mapping between data in the logical data object and the obtained data chunks. | 07-21-2011 |
20110208917 | DATA PROCESSING CIRCUIT WITH CACHE AND INTERFACE FOR A DETACHABLE DEVICE - A processor ( | 08-25-2011 |
20110219192 | PERFORMING A DATA WRITE ON A STORAGE DEVICE - A method of performing a data write on a storage device comprises instructing a device driver for the device to perform a write to the storage device, registering the device driver as a transaction participant with a transaction co-ordinator, executing a flashcopy of the storage device, performing the write on the storage device, and performing a two-phase commit between device driver and transaction co-ordinator. Preferably, the method comprises receiving an instruction to perform a rollback, and reversing the data write according to the flashcopy. In a further refinement, a method of scheduling a flashcopy of a storage device comprises receiving an instruction to perform a flashcopy, ascertaining the current transaction in relation to the device, registering the device driver for the device as a transaction participant in the current transaction with a transaction co-ordinator, receiving a transaction complete indication from the co-ordinator, and executing the flashcopy for the device. | 09-08-2011 |
20110225370 | NON-VOLATILE STORAGE DEVICE, ACCESS DEVICE, AND NON-VOLATILE STORAGE SYSTEM - When multiple pieces of content data are being recorded continuously to a nonvolatile storage device having page cache function, a preparation time before starting next content data recording is reduced. When a cache releasing section of a nonvolatile storage device ( | 09-15-2011 |
20110252201 | SMART FLUSHING OF DATA TO BACKUP STORAGE - A storage system, including: (a) a primary storage entity utilized for storing a data-set of the storage system; (b) a secondary storage entity utilized for backing-up the data within the primary storage entity; (c) a flushing management module adapted to identify within the primary storage entity two groups of dirty data blocks, each group is comprised of dirty data blocks which are arranged within the secondary storage entity in a successive sequence, and to further identify within the primary storage entity a further group of backed-up data blocks which are arranged within the secondary storage entity in a successive sequence intermediately in-between the two identified groups of dirty data blocks; and (d) said flushing management module is adapted to combine the group of backed-up data blocks together with the two identified groups of dirty data blocks to form a successive extended flush sequence and to destage it to the secondary storage entity. | 10-13-2011 |
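The extended flush above trades a little redundant write traffic for one long sequential destage. A sketch for two dirty runs separated by a gap of backed-up blocks on the backup device:

    def extend_flush(run_a, run_b, is_backed_up):
        """run_a, run_b: (start, end) block ranges, end exclusive, run_a first."""
        (s1, e1), (s2, e2) = run_a, run_b
        if all(is_backed_up(b) for b in range(e1, s2)):
            # Gap holds only already backed-up blocks: rewrite them too and
            # destage one successive extended flush sequence.
            return [(s1, e2)]
        return [(s1, e1), (s2, e2)]

    print(extend_flush((10, 14), (18, 22), lambda b: True))   # [(10, 22)]
    print(extend_flush((10, 14), (18, 22), lambda b: False))  # [(10, 14), (18, 22)]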
20110283066 | Information Processing Apparatus and Driver - According to one embodiment, an information processing apparatus includes a memory that includes a buffer area, a first storage, a second storage, and a driver. The buffer area is reserved in order to transfer data between the driver and a host system that requests data writing and data reading. The driver is configured to write data into the second storage and read data from the second storage in units of predetermined blocks using the first storage as a cache for the second storage. The driver is further configured to reserve a cache area in the memory, between the buffer area and the first storage, and between the buffer area and the second storage. The driver is further configured to manage the cache area in units of the predetermined blocks. | 11-17-2011 |
20110307667 | MEMORY SYSTEM - A memory system according to an embodiment of the present invention comprises: a first management table that manages addresses concerning the data written in a first storing area; and a second management table that manages, in an address unit of a second management unit, information indicating temporal order of the data stored in the first storing area and manages, for each of addresses in a second management unit, number-of-valid-data information indicating a number of data in the first management unit included in the addresses in the second management unit. | 12-15-2011 |
20110320732 | USER-CONTROLLED TARGETED CACHE PURGE - User-controlled targeted cache purging includes receiving a request to perform an operation to purge data from a cache, the request including an index identifier identifying an index associated with the cache. The index specifies a portion of the cache to be purged. The user-controlled targeted cache purging also includes purging the data from the cache, and providing notification of successful completion of the operation. | 12-29-2011 |
20110320733 | CACHE MANAGEMENT AND ACCELERATION OF STORAGE MEDIA - Examples of described systems utilize a cache media in one or more computing devices that may accelerate access to other storage media. A solid state drive may be used as the local cache media. In some embodiments, the solid state drive may be used as a log structured cache, may employ multi-level metadata management, may use read and write gating. | 12-29-2011 |
20110320734 | SYSTEM AND METHOD FOR SUPPORTING MUTABLE OBJECT HANDLING - A computer-implemented method and system can support mutable object handling. The system comprises a cache space that is capable of storing one or more mutable cache objects, and one or more cached object graphs. Each said mutable cache object is reachable via one or more retrieval paths in the one or more cached object graph. The system further comprises a mutable-handling decorator that maintains an internal instance map that transparently translates between the one or more cached object graphs and the one or more mutable cache objects stored in the cache space. | 12-29-2011 |
20120047332 | Combining Write Buffer with Dynamically Adjustable Flush Metrics - In an embodiment, a combining write buffer is configured to maintain one or more flush metrics to determine when to transmit write operations from buffer entries. The combining write buffer may be configured to dynamically modify the flush metrics in response to activity in the write buffer, modifying the conditions under which write operations are transmitted from the write buffer to the next lower level of memory. For example, in one implementation, the flush metrics may include categorizing write buffer entries as “collapsed.” A collapsed write buffer entry, and the collapsed write operations therein, may include at least one write operation that has overwritten data that was written by a previous write operation in the buffer entry. In another implementation, the combining write buffer may maintain the threshold of buffer fullness as a flush metric and may adjust it over time based on the actual buffer fullness. | 02-23-2012 |
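A "collapsed" entry, as defined above, is one that has already absorbed an overwrite, which suggests further writes may combine into it. A sketch of tracking that flush metric per buffer entry:

    class WriteBufferEntry:
        def __init__(self, addr):
            self.addr = addr
            self.written = set()    # byte offsets written in this entry
            self.collapsed = False  # True once any offset is overwritten

        def write(self, offsets):
            if self.written & set(offsets):
                self.collapsed = True   # new write overwrote buffered data
            self.written |= set(offsets)

    def flush_order(entries):
        # Hold collapsed entries longer; flush non-collapsed entries first,
        # since they show no evidence of further write-combining benefit.
        return sorted(entries, key=lambda e: e.collapsed)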
20120072669 | COMPUTER-READABLE, NON-TRANSITORY MEDIUM STORING MEMORY ACCESS CONTROL PROGRAM, MEMORY ACCESS CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS - A method of causing an information processing apparatus to execute a process, the method including: performing a management procedure to accept addresses of respective page tables generated for each of the operation modes from an operating system that manages the virtual address space and to associate the addresses with the operating system to be recorded in page table correspondence information storage; executing a control procedure to set a second access right indicating a value lower than the first access right in accordance with the operation mode of the operating system; and performing a processing procedure to cause the memory management device to execute a flush of a translation look-aside buffer, and to set the second access right indicating a value for validating the first access right, wherein the memory management device performs a control on the memory access while the second access right is prioritized over the first access right. | 03-22-2012 |
20120124294 | APPARATUS, SYSTEM, AND METHOD FOR DESTAGING CACHED DATA - An apparatus, system, and method are disclosed for satisfying storage requests while destaging cached data. A monitor module samples a destage rate for a nonvolatile solid-state cache, a total cache write rate for the cache, and a dirtied data rate. The dirtied data rate comprises a rate at which write operations increase an amount of dirty data in the cache. A target module determines a target cache write rate for the cache based on the destage rate, the total cache write rate, and the dirtied data rate to target a destage write ratio. The destage write ratio comprises a predetermined ratio between the dirtied data rate and the destage rate. A rate enforcement module enforces the target cache write rate such that the total cache write rate satisfies the target cache write rate. | 05-17-2012 |
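One plausible reading of the rate arithmetic above (the patented formula is not given in the abstract): assume the fraction of cache writes that dirty new data is stable, and scale the total write rate so the dirtied-data rate lands at the target ratio times the destage rate:

    def target_cache_write_rate(destage_rate, total_write_rate,
                                dirtied_rate, destage_write_ratio):
        """All rates in the same units, e.g. MB/s, sampled by the monitor."""
        if total_write_rate == 0 or dirtied_rate == 0:
            return total_write_rate          # nothing is dirtying the cache
        dirty_fraction = dirtied_rate / total_write_rate
        # Enforce dirtied_rate == destage_write_ratio * destage_rate.
        return (destage_write_ratio * destage_rate) / dirty_fraction

    # Half of writes dirty new data; destaging runs at 100 MB/s; a 1:1 ratio
    # caps the total cache write rate at 200 MB/s.
    print(target_cache_write_rate(100.0, 400.0, 200.0, 1.0))  # 200.0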
20120159079 | MEMORY MODEL FOR HARDWARE ATTRIBUTES WITHIN A TRANSACTIONAL MEMORY SYSTEM - A method and apparatus for providing a memory model for hardware attributes to support transactional execution is herein described. Upon encountering a load of a hardware attribute, such as a test monitor operation to load a read monitor, write monitor, or buffering attribute, a fault is issued in response to a loss field indicating the hardware attribute has been lost. Furthermore, dependency actions, such as blocking and forwarding, are provided for the attribute access operations based on address dependency and access type dependency. As a result, different scenarios for attribute loss and testing thereof are allowed and restricted in a memory model. | 06-21-2012 |
20120173822 | System and method for storing data off site - A system and method for efficiently storing data both on-site and off-site in a cloud storage system. Data read and write requests are received by a cloud data storage system. The cloud storage system has at least three data storage layers. A first high-speed layer, a second efficient storage layer, and a third off-site storage layer. The first high-speed layer stores data in raw data blocks. The second efficient storage layer divides data blocks from the first layer into data slices and eliminates duplicate data slices. The third layer stores data slices at an off-site location. | 07-05-2012 |
20120198175 | APPARATUS, SYSTEM, AND METHOD FOR MANAGING EVICTION OF DATA - An apparatus, system, and method are disclosed for managing eviction of data. A grooming cost module determines a grooming cost for a selected region of a nonvolatile solid-state cache. The grooming cost includes a cost of evicting the selected region of the nonvolatile solid-state cache relative to other regions. A grooming candidate set module adds the selected region to a grooming candidate set in response to the grooming cost satisfying a grooming cost threshold. A low cost module selects a low cost region within the grooming candidate set. A groomer module recovers storage capacity of the low cost region. | 08-02-2012 |
20120226871 | MULTIPLE-CLASS PRIORITY-BASED REPLACEMENT POLICY FOR CACHE MEMORY - This invention is a method and system for replacing an entry in a cache memory (replacement policy). The cache is divided into a high-priority class and a low-priority class. Upon a request for information such as data, an instruction, or an address translation, the processor searches the cache. If there is a cache miss, the processor locates the information elsewhere, typically in memory. The found information replaces an existing entry in the cache. The entry selected for replacement (eviction) is chosen from within the low-priority class using a FIFO algorithm. Upon a cache hit, the processor performs a read, write, or execute using or upon the information. If the performed instruction was a “write”, the information is placed into the high-priority class. If the high-priority class is full, an entry within the high-priority class is selected for removal based on a FIFO algorithm, and re-classified into the low-priority class. | 09-06-2012 |
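The two-class policy above only ever evicts from the low-priority class, and a write is what promotes an entry. A compact sketch with FIFO order in both classes:

    from collections import deque

    class TwoClassCache:
        def __init__(self, high_capacity):
            self.low, self.high = deque(), deque()   # FIFO within each class
            self.high_capacity = high_capacity

        def insert(self, key):
            self.low.append(key)        # new entries start in the low class

        def evict(self):
            return self.low.popleft()   # victim chosen only from low class

        def on_write(self, key):
            # A write promotes the entry into the high-priority class.
            if key in self.low:
                self.low.remove(key)
            if key not in self.high:
                self.high.append(key)
            if len(self.high) > self.high_capacity:
                # Full high class: re-classify its oldest entry as low.
                self.low.append(self.high.popleft())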
20120233409 | MANAGING SHARED MEMORY USED BY COMPUTE NODES - A technology can be provided for managing shared memory used by a plurality of compute nodes. An example system can include a shared globally addressable memory to enable access to shared data by the plurality of compute nodes. A memory interface can process memory requests sent to the shared globally addressable memory from the plurality of processors. A memory write module can be included for the memory interface to allocate memory locations in the shared globally addressable memory and write read-only data to the globally addressable memory from a writing compute node. In addition, a read module for the memory interface can map read-only data in the globally addressable shared memory as read-only for subsequent accesses by the plurality of compute nodes. | 09-13-2012 |
20120254548 | ALLOCATING CACHE FOR USE AS A DEDICATED LOCAL STORAGE - A method and apparatus dynamically allocates and deallocates a portion of a cache for use as a dedicated local storage. Cache lines may be dynamically allocated and deallocated for inclusion in the dedicated local storage. Cache entries that are included in the dedicated local storage may not be evicted or invalidated. Additionally, coherence is not maintained between the cache entries that are included in the dedicated local storage and the backing memory. A load instruction may be configured to allocate, e.g., lock, a portion of the data cache for inclusion in the dedicated local storage and load data into the dedicated local storage. A load instruction may be configured to read data from the dedicated local storage and to deallocate, e.g., unlock, a portion of the data cache that was included in the dedicated local storage. | 10-04-2012 |
20120278559 | PERFORMING A DATA WRITE ON A STORAGE DEVICE - A method of performing a data write on a storage device comprises instructing a device driver for the device to perform a write to the storage device, registering the device driver as a transaction participant with a transaction co-ordinator, executing a flashcopy of the storage device, performing the write on the storage device, and performing a two-phase commit between device driver and transaction co-ordinator. Preferably, the method comprises receiving an instruction to perform a rollback, and reversing the data write according to the flashcopy. In a further refinement, a method of scheduling a flashcopy of a storage device comprises receiving an instruction to perform a flashcopy, ascertaining the current transaction in relation to the device, registering the device driver for the device as a transaction participant in the current transaction with a transaction co-ordinator, receiving a transaction complete indication from the co-ordinator, and executing the flashcopy for the device. | 11-01-2012 |
20120303903 | System, Method, and Computer Program Product for Modeling Changes to Large Scale Datasets - A system, method, and computer program product for modeling are provided, in which the user appears to have a body of information in a data structure that can be manipulated independently of an underlying database. In an embodiment of the invention, the data structure is an entity cache. | 11-29-2012 |
20120324170 | Read-Copy Update Implementation For Non-Cache-Coherent Systems - A technique for implementing read-copy update in a shared-memory computing system having two or more processors operatively coupled to a shared memory and to associated incoherent caches that cache copies of data stored in the memory. According to example embodiments disclosed herein, cacheline information for data that has been rendered obsolete due to a data update being performed by one of the processors is recorded. The recorded cacheline information is communicated to one or more of the other processors. The one or more other processors use the communicated cacheline information to flush the obsolete data from all incoherent caches that may be caching such data. | 12-20-2012 |
20120331233 | False Sharing Detection Logic for Performance Monitoring - A mechanism is provided for detecting false sharing misses. Responsive to performing either an eviction or an invalidation of a cache line in a cache memory of the data processing system, a determination is made as to whether there is an entry associated with the cache line in a false sharing detection table. Responsive to the entry associated with the cache line existing in the false sharing detection table, a determination is made as to whether an overlap field associated with the entry is set. Responsive to the overlap field failing to be set, identification is made that a false sharing coherence miss has occurred. A first signal is then sent to a performance monitoring unit indicating the false sharing coherence miss. | 12-27-2012 |
20120331234 | CACHE MEMORY AND CACHE MEMORY CONTROL UNIT - Data transfer between processors is efficiently performed in a multiprocessor including a shared cache memory. Each entry in a tag storage section | 12-27-2012 |
20130024623 | METHOD AND APPARATUS FOR HIGH SPEED CACHE FLUSHING IN A NON-VOLATILE MEMORY - An invention is provided for performing a cache flush in a non-volatile memory. The invention includes maintaining a plurality of free memory blocks within the non-volatile memory. When a flush cache command is issued, a flush cache map is examined to obtain a memory address of a memory block in the plurality of free memory blocks within the non-volatile memory. The flush cache map includes a plurality of entries, each entry indicating a memory block of the plurality of free memory blocks. Then, a cache block is written to a memory block at the obtained memory address within the non-volatile memory. In this manner, when a flush cache command is received, the flush cache map allows cache blocks to be written to free memory blocks in the non-volatile memory without requiring a non-volatile memory search for free blocks or requiring erasing of memory blocks storing old data. | 01-24-2013 |
20130036271 | DYNAMIC INDEX SELECTION IN A HARDWARE CACHE - Systems and methods are disclosed for improving the performance of cache memory in a computer system by dynamically selecting an index for caching main memory while an application is running. A disclosed example of a memory system includes a cache including a data array, a primary tag array, and at least one secondary tag array. A currently selected index is used to index data bits to the data array and tag bits to the primary tag array. The performance of at least one candidate index is evaluated by indexing tag bits to the secondary tag array, without caching any data using the candidate index while the candidate index is under evaluation. If the candidate index has a better hit rate than the currently selected index, the memory system switches to using the candidate index to cache data. | 02-07-2013 |
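The key trick above is that the candidate index needs only a shadow tag array, never a data array. A direct-mapped sketch in which the two hash functions are illustrative stand-ins:

    class IndexEvaluator:
        def __init__(self, n_sets, current_hash, candidate_hash):
            self.n_sets = n_sets
            self.hashes = {"current": current_hash, "candidate": candidate_hash}
            self.tags = {"current": [None] * n_sets,    # backs the data array
                         "candidate": [None] * n_sets}  # tags only, no data
            self.hits = {"current": 0, "candidate": 0}

        def access(self, addr):
            for name, h in self.hashes.items():
                idx = h(addr) % self.n_sets
                if self.tags[name][idx] == addr:
                    self.hits[name] += 1
                else:
                    self.tags[name][idx] = addr   # fill on miss

        def should_switch(self):
            # Switch the live index only if the candidate out-hits it.
            return self.hits["candidate"] > self.hits["current"]

    ev = IndexEvaluator(256, lambda a: a >> 6, lambda a: (a >> 6) ^ (a >> 14))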
20130086329 | ALLOCATING CACHE FOR USE AS A DEDICATED LOCAL STORAGE - A method and apparatus dynamically allocates and deallocates a portion of a cache for use as a dedicated local storage. Cache lines may be dynamically allocated and deallocated for inclusion in the dedicated local storage. Cache entries that are included in the dedicated local storage may not be evicted or invalidated. Additionally, coherence is not maintained between the cache entries that are included in the dedicated local storage and the backing memory. A load instruction may be configured to allocate, e.g., lock, a portion of the data cache for inclusion in the dedicated local storage and load data into the dedicated local storage. A load instruction may be configured to read data from the dedicated local storage and to deallocate, e.g., unlock, a portion of the data cache that was included in the dedicated local storage. | 04-04-2013 |
20130103906 | Combining Write Buffer with Dynamically Adjustable Flush Metrics - In an embodiment, a combining write buffer is configured to maintain one or more flush metrics to determine when to transmit write operations from buffer entries. The combining write buffer may be configured to dynamically modify the flush metrics in response to activity in the write buffer, modifying the conditions under which write operations are transmitted from the write buffer to the next lower level of memory. For example, in one implementation, the flush metrics may include categorizing write buffer entries as “collapsed.” A collapsed write buffer entry, and the collapsed write operations therein, may include at least one write operation that has overwritten data that was written by a previous write operation in the buffer entry. In another implementation, the combining write buffer may maintain the threshold of buffer fullness as a flush metric and may adjust it over time based on the actual buffer fullness. | 04-25-2013 |
20130111144 | EFFICIENT MEMORY MANAGEMENT IN SOFTWARE CACHES | 05-02-2013 |
20130111145 | MAPPING OF VALID AND DIRTY FLAGS IN A CACHING SYSTEM | 05-02-2013 |
20130151786 | LOGICAL BUFFER POOL EXTENSION - A method for logical buffer pool extension identifies a page in a memory for eviction, and analyzes characteristics of the page to form a differentiated page. The characteristics of the page include descriptors that include a workload type, a page weight, a page type, frequency of access and timing of most recent access. The method also identifies a target location for the differentiated page from a set of locations including a fastcache storage and a hard disk storage to form an identified target location. The method further selects an eviction operation from a set of eviction operations using the characteristics of the differentiated page and the identified target location. The differentiated page is written to the identified target location using the selected eviction operation, where the differentiated page is written only to the fastcache storage. | 06-13-2013 |
20130205094 | EFFICIENT TRACK DESTAGE IN SECONDARY STORAGE - To destage tracks in secondary storage more effectively, the temporal bits that are employed with sequential bits for controlling the timing of destaging a track in primary storage are transferred, together with the sequential bits, from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. | 08-08-2013 |
20130227221 | CACHE ACCESS ANALYZER - A performance monitor records performance information for tagged instructions being executed at an instruction pipeline. For instructions resulting in a load or store operation, a cache access analyzer can decompose the address associated with the operation to determine which cache line, if any, of a cache is accessed by the operation, and which portion of the cache line is requested by the operation. The cache access analyzer records the cache line portion in a data record, and, in response to a change in instruction being executed, stores the data record for subsequent analysis. | 08-29-2013 |
20130246711 | System and Method for Implementing a Hierarchical Data Storage System - A system and method for efficiently storing data both on-site and off-site in a cloud storage system. Data read and write requests are received by a cloud data storage system. The cloud storage system has at least three data storage layers. A first high-speed layer, a second efficient storage layer, and a third off-site storage layer. The first high-speed layer stores data in raw data blocks. The second efficient storage layer divides data blocks from the first layer into data slices and eliminates duplicate data slices. The third layer stores data slices at an off-site location. | 09-19-2013 |
20130262775 | Cache Management for Memory Operations - Embodiments of the present invention provide for the execution of threads and/or workitems on multiple processors of a heterogeneous computing system such that they can share data correctly and efficiently. Disclosed method, system, and article of manufacture embodiments include, responsive to an instruction from a sequence of instructions of a work-item, determining an ordering of visibility to other work-items of one or more other data items in relation to a particular data item, and performing at least one cache operation upon at least one of the particular data item or the other data items present in any one or more cache memories in accordance with the determined ordering. The semantics of the instruction includes a memory operation upon the particular data item. | 10-03-2013 |
20130262776 | Managing Coherent Memory Between an Accelerated Processing Device and a Central Processing Unit - Existing multiprocessor computing systems often have insufficient memory coherency and, consequently, are unable to efficiently utilize separate memory systems. Specifically, a CPU cannot effectively write to a block of memory and then have a GPU access that memory unless there is explicit synchronization. In addition, because the GPU is forced to statically split memory locations between itself and the CPU, existing multiprocessor computing systems are unable to efficiently utilize the separate memory systems. Embodiments described herein overcome these deficiencies by receiving a notification within the GPU that the CPU has finished processing data that is stored in coherent memory, and invalidating data in the CPU caches that the GPU has finished processing from the coherent memory. Embodiments described herein also include dynamically partitioning a GPU memory into coherent memory and local memory through use of a probe filter. | 10-03-2013 |
20130297884 | ENHANCING DATA PROCESSING PERFORMANCE BY CACHE MANAGEMENT OF FINGERPRINT INDEX - Various embodiments for improving hash index key lookup caching performance in a computing environment are provided. In one embodiment, for a cached fingerprint map having a plurality of entries corresponding to a plurality of data fingerprints, reference count information is used to determine a length of time to retain the plurality of entries in cache. Those of the plurality of entries having higher reference counts are retained longer than those having lower reference counts. | 11-07-2013 |
20130326149 | Write Cache Management Method and Apparatus - A method for destaging data from a memory of a storage controller to a striped volume is provided. The method includes determining if a stripe should be destaged from a write cache of the storage controller to the striped volume, destaging a partial stripe if a full stripe write percentage is less than a full stripe write affinity value, and destaging a full stripe if the full stripe write percentage is greater than the full stripe write affinity value. The full stripe write percentage includes a full stripe count divided by the sum of the full stripe count and a partial stripe count. The full stripe count is the number of stripes in the write cache where all chunks of a stripe are dirty. The partial stripe count is the number of stripes where at least one chunk but less than all chunks of the stripe are dirty. | 12-05-2013 |
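The destage rule in entry 20130326149 above is a single ratio test, which the following minimal Python sketch makes concrete; the function and parameter names are illustrative (the abstract specifies no interface), and the guard for an empty write cache is an added assumption.

```python
def choose_destage(full_stripe_count, partial_stripe_count, affinity):
    """Decide whether to destage a full or a partial stripe.

    full_stripe_count: stripes in the write cache whose chunks are all dirty.
    partial_stripe_count: stripes with at least one, but not all, dirty chunks.
    affinity: the full stripe write affinity value, a threshold in [0, 1].
    """
    total = full_stripe_count + partial_stripe_count
    if total == 0:
        return None                        # nothing to destage (assumption)
    full_stripe_write_pct = full_stripe_count / total
    if full_stripe_write_pct < affinity:
        return "partial"                   # destage a partial stripe
    return "full"                          # destage a full stripe
```

A higher affinity value biases the controller toward waiting for full stripes, which avoid the read-modify-write overhead that partial-stripe writes incur on striped volumes.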
20130326150 | PROCESS FOR MAINTAINING DATA WRITE ORDERING THROUGH A CACHE - A cache is maintained with write order numbers that indicate orders of writes into the cache, so that periodic partial flushes of the cache can be executed while maintaining write order consistency. A method of storing data into the cache includes receiving a request to write data into the cache, identifying lines in the cache for storing the data, writing the data into the lines of the cache, storing a write order number, and associating the write order number with the lines of the cache. A method of flushing a cache having cache lines associated with write order numbers includes the steps of identifying lines in the cache that are associated with either a selected write order number or a write order number that is less than the selected write order number, and flushing data stored in the identified lines to a persistent storage. | 12-05-2013 |
20140019688 | Solid State Drives as a Persistent Cache for Database Systems - Disclosed herein are systems, methods, and computer readable storage media for a database system using solid state drives as a second level cache. A database system includes random access memory configured to operate as a first level cache, solid state disk drives configured to operate as a persistent second level cache, and hard disk drives configured to operate as disk storage. The database system also includes a cache manager configured to receive a request for a data page and determine whether the data page is in cache or disk storage. If the data page is on disk, or in the second level cache, it is copied to the first level cache. If copying the data page results in an eviction, the evicted data page is copied to the second level cache. At checkpoint, dirty pages stored in the second level cache are flushed in place in the second level cache. | 01-16-2014 |
20140032850 | Transparent Virtualization of Cloud Storage - Embodiments present a virtual disk image to applications such as virtual machines (VMs) executing on a computing device. The virtual disk image corresponds to one or more subparts of binary large objects (blobs) of data stored by a cloud service, and is implemented in a log structured format. Grains of the virtual disk image are cached by the computing device. The computing device caches only a subset of the grains and performs write operations without blocking the applications to reduce storage latency perceived by the applications. Some embodiments enable the applications that lack enterprise class storage to benefit from enterprise class cloud storage services. | 01-30-2014 |
20140032851 | RANDOMIZED PAGE WEIGHTS FOR OPTIMIZING BUFFER POOL PAGE REUSE - In general, the disclosure is directed to techniques for choosing which pages to evict from the buffer pool to make room for caching additional pages in the context of a database table scan. A buffer pool is maintained in memory. A fraction of pages of a table to persist in the buffer pool are determined. A random number is generated as a decimal value of 0 to 1 for each page of the table cached in the buffer pool. If the random number generated for a page is less than the fraction, the page is persisted in the buffer pool. If the random number generated for a page is greater than the fraction, the page is included as a candidate for eviction from the buffer pool. | 01-30-2014 |
20140032852 | RANDOMIZED PAGE WEIGHTS FOR OPTIMIZING BUFFER POOL PAGE REUSE - In general, the disclosure is directed to techniques for choosing which pages to evict from the buffer pool to make room for caching additional pages in the context of a database table scan. A buffer pool is maintained in memory. A fraction of pages of a table to persist in the buffer pool are determined. A random number is generated as a decimal value of 0 to 1 for each page of the table cached in the buffer pool. If the random number generated for a page is less than the fraction, the page is persisted in the buffer pool. If the random number generated for a page is greater than the fraction, the page is included as a candidate for eviction from the buffer pool. | 01-30-2014 |
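Entries 20140032851 and 20140032852 above share one abstract, and the randomized persist-or-evict rule they describe translates directly to code. A hedged sketch, with illustrative names:

```python
import random

def partition_scan_pages(pages, persist_fraction):
    """For each cached page of the scanned table, draw a uniform value in
    [0, 1]; pages drawing below the persist fraction stay in the buffer
    pool, and the rest become candidates for eviction."""
    persisted, candidates = [], []
    for page in pages:
        if random.random() < persist_fraction:
            persisted.append(page)         # page stays in the buffer pool
        else:
            candidates.append(page)        # candidate for eviction
    return persisted, candidates
```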
20140040560 | All Invalidate Approach for Memory Management Units - An input/output memory management unit (IOMMU) having an “invalidate all” command available to clear the contents of cache memory is presented. The cache memory provides fast access to address translation data that has been previously obtained by a process. A typical cache memory includes device tables, page tables and interrupt remapping entries. Cache memory data can become stale or be compromised by security breaches or malfunctioning devices. In these circumstances, a rapid approach to clearing cache memory content is provided. | 02-06-2014 |
20140040561 | HANDLING CACHE WRITE-BACK AND CACHE EVICTION FOR CACHE COHERENCE - A method implemented by a computer system comprising a first memory agent and a second memory agent coupled to the first memory agent, wherein the second memory agent has access to a cache comprising a cache line, the method comprising changing a state of the cache line by the second memory agent, and sending a non-snoop message from the second memory agent to the first memory agent via a communication channel assigned to snoop responses, wherein the non-snoop message informs the first memory agent of the state change of the cache line. | 02-06-2014 |
20140047189 | Optimizing Write and Wear Performance for a Memory - Determining and using the ideal size of memory to be transferred from high speed memory to a low speed memory may result in speedier saves to the low speed memory and a longer life for the low speed memory. | 02-13-2014 |
20140059298 | Snapshot Coordination - In one embodiment, a method performed by one or more computing devices includes receiving at a host cache, a first request to prepare a volume of the host cache for creating a snapshot of a cached logical unit number (LUN), the request indicating that a snapshot of the cached LUN will be taken, preparing, in response to the first request, the volume of the host cache for creating the snapshot of the cached LUN depending on a mode of the host cache, receiving, at the host cache, a second request to create the snapshot of the cached LUN, and in response to the second request, creating, at the host cache, the snapshot of the cached LUN. | 02-27-2014 |
20140068196 | METHOD AND SYSTEM FOR SELF-TUNING CACHE MANAGEMENT - Web objects, such as media files are sent through an adaptation server which includes a transcoder for adapting forwarded objects according to profiles of the receiving destinations, and a cache memory for caching frequently requested objects, including their adapted versions. The probability of additional requests for the same object before the object expires is assessed by tracking hits. Only objects having experienced hits in excess of a hit threshold are cached, the hit threshold being adaptively adjusted based on the capacity of the cache, and the space required to store cached media files. Expired objects are collected in a list, and may be periodically ejected from the cache, or when the cache is nearly full. | 03-06-2014 |
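The self-tuning admission policy of entry 20140068196 can be sketched as a threshold that drifts with cache occupancy. The watermark constants and the unit step below are assumptions; the abstract says only that the threshold adapts to cache capacity and the space cached objects require.

```python
def should_cache(hits, hit_threshold):
    """Admit an object into the cache only after enough repeat hits."""
    return hits > hit_threshold

def tune_threshold(hit_threshold, used_bytes, capacity_bytes,
                   high_water=0.9, low_water=0.5):
    """Raise the admission bar when the cache is nearly full; relax it
    when space frees up. The watermarks are illustrative assumptions."""
    utilization = used_bytes / capacity_bytes
    if utilization > high_water:
        return hit_threshold + 1
    if utilization < low_water:
        return max(1, hit_threshold - 1)
    return hit_threshold
```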
20140068197 | SYSTEMS, METHODS, AND INTERFACES FOR ADAPTIVE CACHE PERSISTENCE - A storage module may be configured to service I/O requests according to different persistence levels. The persistence level of an I/O request may relate to the storage resource(s) used to service the I/O request, the configuration of the storage resource(s), the storage mode of the resources, and so on. In some embodiments, a persistence level may relate to a cache mode of an I/O request. I/O requests pertaining to temporary or disposable data may be serviced using an ephemeral cache mode. An ephemeral cache mode may comprise storing I/O request data in cache storage without writing the data through (or back) to primary storage. Ephemeral cache data may be transferred between hosts in response to virtual machine migration. | 03-06-2014 |
20140075124 | Selective Delaying of Write Requests in Hardware Transactional Memory Systems - Techniques for conflict detection in hardware transactional memory (HTM) are provided. In one aspect, a method for detecting conflicts in HTM includes the following steps. Conflict detection is performed eagerly by setting read and write bits in a cache as transactions having read and write requests are made. A given one of the transactions is stalled when a conflict is detected whereby more than one of the transactions are accessing data in the cache in a conflicting way. An address of the conflicting data is placed in a predictor. The predictor is queried whenever the write requests are made to determine whether they correspond to entries in the predictor. A copy of the data corresponding to entries in the predictor is placed in a store buffer. The write bits in the cache are set and the copy of the data in the store buffer is merged in at transaction commit. | 03-13-2014 |
20140082295 | DETECTION OF OUT-OF-BAND ACCESS TO A CACHED FILE SYSTEM - A network attached storage (NAS) caching appliance, system, and associated method to detect out-of-band accesses to a networked file system. | 03-20-2014 |
20140089596 | Read-Copy Update Implementation For Non-Cache-Coherent Systems - A technique for implementing read-copy update in a shared-memory computing system having two or more processors operatively coupled to a shared memory and to associated incoherent caches that cache copies of data stored in the memory. According to example embodiments disclosed herein, cacheline information for data that has been rendered obsolete due to a data update being performed by one of the processors is recorded. The recorded cacheline information is communicated to one or more of the other processors. The one or more other processors use the communicated cacheline information to flush the obsolete data from all incoherent caches that may be caching such data. | 03-27-2014 |
20140095800 | SYSTEM CACHE WITH STICKY REMOVAL ENGINE - Methods and apparatuses for releasing the sticky state of cache lines for one or more group IDs. A sticky removal engine walks through the tag memory of a system cache looking for matches with a first group ID which is clearing its cache lines from the system cache. The engine clears the sticky state of each cache line belonging to the first group ID. If the engine receives a release request for a second group ID, the engine records the current index to log its progress through the tag memory. Then, the engine continues its walk through the tag memory looking for matches with either the first or second group ID. The engine wraps around to the start of the tag memory and continues its walk until reaching the recorded index for the second group ID. | 04-03-2014 |
20140095801 | SYSTEM AND METHOD FOR RETAINING COHERENT CACHE CONTENTS DURING DEEP POWER-DOWN OPERATIONS - A system, method, and computer program product for retaining coherent cache contents during deep power-down operations, and reducing the low-power state entry and exit overhead to improve processor energy efficiency and performance. The embodiments flush or clean the Modified-state lines from the cache before entering a deep low-power state, and then implement a deferred snoop strategy while in the powered-down state. Upon exiting the powered-down state, the embodiments process the deferred snoops. A small additional cache and a snoop filter (or other cache-tracking structure) may be used along with additional logic to retain cache contents coherently through deep power-down operations, which may span multiple low-power states. | 04-03-2014 |
20140108736 | SYSTEM AND METHOD FOR REMOVING DATA FROM PROCESSOR CACHES IN A DISTRIBUTED MULTI-PROCESSOR COMPUTER SYSTEM | 04-17-2014 |
20140115259 | Methods And Apparatuses For Controlling Thread Contention - An apparatus comprises a plurality of cores and a controller coupled to the cores. The controller is to lower an operating point of a first core if a first number based on processor clock cycles per instruction (CPI) associated with a second core is higher than a first threshold. The controller is operable to increase the operating point of the first core if the first number is lower than a second threshold. | 04-24-2014 |
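The two-threshold control loop in entry 20140115259 is simple to express. In the sketch below, the operating point is modeled as an integer level (for instance, a P-state index); the bounds and the unit step are assumptions not taken from the abstract.

```python
def control_operating_point(current_point, cpi_other_core,
                            first_threshold, second_threshold,
                            min_point=0, max_point=10):
    """Lower the first core's operating point when the CPI observed on
    the second core exceeds the first threshold (contention), and raise
    it again when the CPI drops below the second threshold."""
    if cpi_other_core > first_threshold:
        # The other core is starved: throttle this core.
        return max(min_point, current_point - 1)
    if cpi_other_core < second_threshold:
        # Contention has eased: let this core speed back up.
        return min(max_point, current_point + 1)
    return current_point
```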
20140136792 | PREDICTIVE CACHE REPLACEMENT - Systems and methods for predictive cache replacement policies are provided. In particular, some embodiments dynamically capture and predict access patterns of data to determine which data should be evicted from the cache. A novel tree data structure can be dynamically built that allows for immediate use in the identification of developing patterns and the eviction determination. In some cases, the data can be dynamically organized into histograms, strings, and other representations allowing traditional analysis techniques to be applied. Data organized into histogram-like structures can also be converted into strings allowing for well-known string pattern recognition analysis. The pattern recognition and prediction techniques disclosed also have applications outside of caching. | 05-15-2014 |
20140136793 | SYSTEM AND METHOD FOR REDUCED CACHE MODE - A system and method are described for dynamically changing the size of a computer memory such as level 2 cache as used in a graphics processing unit. In an embodiment, a relatively large cache memory can be implemented in a computing system so as to meet the needs of memory intensive applications. But where cache utilization is reduced, the capacity of the cache can be reduced. In this way, power consumption is reduced by powering down a portion of the cache. | 05-15-2014 |
20140156943 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus includes a detection unit configured to detect a damaged file from files stored in a cache area, a determination unit configured to determine whether the damaged file detected by the detection unit is restorable, a restoration unit configured to, if the determination unit determines that the damaged file is restorable, delete every restorable file in the cache area including the damaged file and restore the deleted file in the cache area, and an initialization unit configured to, if the determination unit determines that the damaged file is not restorable, delete every file in the cache area and initialize the cache area. | 06-05-2014 |
20140164710 | VIRTUAL MACHINES FAILOVER | 06-12-2014 |
20140173214 | Retention priority based cache replacement policy | 06-19-2014 |
20140173215 | METHODS AND SYSTEMS FOR PROVISIONING A BOOTABLE IMAGE ON TO AN EXTERNAL DRIVE - The present invention relates to a method of optimizing the provisioning of a bootable image onto a storage device. In some embodiments, a host device executes a provisioning application to image a storage drive as a bootable drive. During the provisioning process, the storage device is configured to disguise its use of write caching during the provisioning process. In one embodiment, the storage device is configured to suppress forced unit access commands and cache flush commands for the provisioning application. In another embodiment, the storage device is configured to reject forced unit access commands. The storage device may disguise its use of write caching based on various criteria, such as a length of time, a counter, and the like. | 06-19-2014 |
20140173216 | Invalidation of Dead Transient Data in Caches - Embodiments include methods, systems, and articles of manufacture directed to identifying transient data upon storing the transient data in a cache memory, and invalidating the identified transient data in the cache memory. | 06-19-2014 |
20140181413 | METHOD AND SYSTEM FOR SHUTTING DOWN ACTIVE CORE BASED CACHES - A system and method are presented. Some embodiments include a processing unit, at least one memory coupled to the processing unit, and at least one cache coupled to the processing unit and divided into a series of blocks, wherein at least one of the series of cache blocks includes data identified as being in a modified state. The modified state data is flushed by writing the data to the at least one memory based on a write back policy. The aggressiveness of the policy is based on at least one factor, including the number of idle cores, the proximity of the last cache flush, the activity of the thread associated with the data, which cores are idle, and whether an idle core is associated with the data. | 06-26-2014 |
20140181414 | MECHANISMS TO BOUND THE PRESENCE OF CACHE BLOCKS WITH SPECIFIC PROPERTIES IN CACHES - A system and method for efficiently limiting storage space for data with particular properties in a cache memory. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, wherein a first bank is powered down. In response to a write request to a second bank for data indicated to be stored in the powered down first bank, the cache controller determines a respective bypass condition for the data. If the bypass condition exceeds a threshold, then the cache controller invalidates any copy of the data stored in the second bank. If the bypass condition does not exceed the threshold, then the cache controller stores the data with a clean state in the second bank. The cache controller writes the data in a lower-level memory for both cases. | 06-26-2014 |
20140189246 | MEASURING APPLICATIONS LOADED IN SECURE ENCLAVES AT RUNTIME - Embodiments of an invention for measuring applications loaded in secure enclaves at runtime are disclosed. In one embodiment, a processor includes an instruction unit and an execution unit. The instruction unit is to receive an instruction to extend a first measurement of a secure enclave with a second measurement. The execution unit is to execute the instruction after initialization of the secure enclave. | 07-03-2014 |
20140201457 | IDENTIFYING AND RESOLVING CACHE POISONING - According to some embodiments, a method and apparatus are provided to receive, at a cache entity, a refresh request associated with a resource. A determination is made, via a processor, and based on the refresh request, to reload the resource from a server. The reloaded resource is replaced at the cache entity. | 07-17-2014 |
20140223104 | VIRTUAL CACHE DIRECTORY IN MULTI-PROCESSOR ARCHITECTURES - Technologies generally described herein relate to cache directories in multi-core processors. Various examples may include methods, systems, and devices. A first tile may receive a request to transfer a thread from the first tile to a second tile. An instruction may be sent from the first tile to map a virtual cache identifier to identifiers of caches of the first and second tiles. The thread may be transferred from the first tile to the second tile. Thereafter, a request may be generated for a data block. After a determination that the data block is not stored in the second tile's cache, and that the virtual cache identifier is mapped to the first and second cache identifiers, a request may be sent for the data block to the first tile. | 08-07-2014 |
20140223105 | METHOD AND APPARATUS FOR CUTTING SENIOR STORE LATENCY USING STORE PREFETCHING - In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for cutting senior store latency using store prefetching. For example, in one embodiment, such means may include an integrated circuit or an out of order processor means that processes out of order instructions and enforces in-order requirements for a cache. Such an integrated circuit or out of order processor means further includes means for receiving a store instruction; means for performing address generation and translation for the store instruction to calculate a physical address of the memory to be accessed by the store instruction; and means for executing a pre-fetch for a cache line based on the store instruction and the calculated physical address before the store instruction retires. | 08-07-2014 |
20140237191 | METHODS AND APPARATUS FOR INTRA-SET WEAR-LEVELING FOR MEMORIES WITH LIMITED WRITE ENDURANCE - Efficient techniques are described for extending the usable lifetime for memories with limited write endurance. A technique for wear-leveling of caches addresses unbalanced write traffic on cache lines, which causes heavily written cache lines to fail much faster than other lines in the cache. A counter is incremented for each write operation to a cache array. A line affected by a current write operation which caused the counter to meet a threshold is evicted from the cache rather than writing data to the affected line. A dynamic adjustment of the threshold can be made depending on the operating program. Updates to a current replacement policy pointer are stopped due to the counter meeting the threshold. | 08-21-2014 |
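The counter-triggered eviction of entry 20140237191 might look like the following sketch; the dictionary-backed line store and the counter reset on eviction are illustrative assumptions.

```python
class WearLevelCache:
    """Sketch of intra-set wear leveling: every write bumps a counter,
    and the write that makes the counter meet the threshold evicts its
    target line instead of overwriting it, spreading writes across the
    set. All structure here is illustrative."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.write_counter = 0
        self.lines = {}                    # tag -> data

    def write(self, tag, data):
        self.write_counter += 1
        if self.write_counter >= self.threshold:
            self.write_counter = 0         # reset is an assumption
            self.lines.pop(tag, None)      # evict instead of writing
            return "evicted"
        self.lines[tag] = data
        return "written"
```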
20140237192 | METHOD AND APPARATUS FOR CONSTRUCTING MEMORY ACCESS MODEL - Embodiments of the present invention provide a method and an apparatus for constructing a memory access model, and relate to the field of computers. The method includes: obtaining a page table corresponding to a process referencing a memory block, and clearing a Present bit included in each page table entry stored in the page table; and constructing a memory access model of the memory block according to the number of access times of each page in the memory block and time obtained through timing, where the memory access model at least includes the number of access times and an access frequency of each page in the memory block. The apparatus includes: a first obtaining module, a first monitoring module, a first increasing module, and a second obtaining module. The present invention can reduce the memory consumption and an impact on the system performance, and avoid a system breakdown. | 08-21-2014 |
20140244936 | MAINTAINING CACHE COHERENCY BETWEEN STORAGE CONTROLLERS - Systems and methods maintain cache coherency between storage controllers utilizing bitmap data. In one embodiment, a storage controller processes an I/O request for a logical volume from a host, and generates one or more cache entries in a cache memory that is based on the request. The storage controller identifies a backup storage controller for managing the logical volume, and generates bitmap data that identifies cache entries in the cache memory that have changed since synchronizing with the backup storage controller. The storage controller provides the bitmap data to the backup storage controller to allow the backup storage controller to synchronize its cache memory with the cache memory of the storage controller based on the bitmap data. | 08-28-2014 |
20140258637 | FLUSHING ENTRIES IN A NON-COHERENT CACHE - Techniques are provided for performing a flush operation in a non-coherent cache. In response to determining to perform a flush operation, a cache unit flushes certain data items. The flush operation may be performed in response to a lapse of a particular amount of time, such as a number of cycles, or an explicit flush instruction that does not indicate any cache entry or data item. The cache unit may store change data that indicates which entry stores a data item that has been modified but not yet been flushed. The change data may be used to identify the entries that need to be flushed. In one technique, a dirty cache entry that is associated with one or more relatively recent changes is not flushed during a flush operation. | 09-11-2014 |
20140258638 | Method and apparatus for efficient read cache operation - A method for providing efficient use of a read cache by a storage controller is provided. The method includes the storage controller receiving a read request from a host computer and determining if a host stream size is larger than a read cache size. The host stream size is the current cumulative size of all read requests in the host stream. If the host stream size is larger than the read cache size, data is migrated to a first area of the read cache containing data that has been in the read cache for the longest time. If the host stream size is not larger than the read cache size, data is migrated to a second area of the read cache containing data that has been in the read cache for the shortest time. The host stream is a consecutive group of sequential read requests from the host computer. | 09-11-2014 |
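The placement rule of entry 20140258638 reduces to one comparison. A minimal sketch, assuming the two cache areas are simply named rather than modeled:

```python
def choose_migration_area(host_stream_size, read_cache_size):
    """A stream too large to fit in the cache is routed to the area
    holding the oldest data, so it recycles quickly without evicting
    useful data; a stream that fits is given the youngest area so it
    can be re-read from cache. Names are illustrative."""
    if host_stream_size > read_cache_size:
        return "area with data cached the longest"
    return "area with data cached the shortest"
```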
20140281257 | Caching Backed-Up Data Locally Until Successful Replication - A mechanism is provided for caching backed-up data locally until successful replication of the backed-up data. Responsive to an indication to back up one or more pieces of identified data from a local storage device, a determination is made as to whether a primary storage device is available. Responsive to the primary storage device being available, the one or more pieces of identified data are backed up to the primary storage device and a local replication cache. Responsive to the backed-up data being replicated from the primary storage device to a secondary storage device, the backed-up data is removed from the local replication cache. | 09-18-2014 |
20140281258 | DYNAMIC CACHING MODULE SELECTION FOR OPTIMIZED DATA DEDUPLICATION - Embodiments of the invention provide a method, system and computer program product for dynamic caching module selection for optimized data deduplication. In an embodiment of the invention, a method for dynamic caching module selection for optimized data deduplication is provided. The method includes receiving a request to retrieve data and classifying the request. The method also includes identifying from amongst multiple different caching modules each with a different configuration a particular caching module associated with the classification of the request. Finally, the method includes deduplicating the data in the identified caching module. | 09-18-2014 |
20140281259 | TRANSLATION LOOKASIDE BUFFER ENTRY SYSTEMS AND METHODS - Presented systems and methods can facilitate efficient information storage and tracking operations, including translation look aside buffer operations. In one embodiment, the systems and methods effectively allow the caching of invalid entries (with the attendant benefits, e.g., regarding power, resource usage, stalls, etc.), while maintaining the illusion that the TLBs do not in fact cache invalid entries (e.g., act in compliance with architectural rules). In one exemplary implementation, an “unreal” TLB entry effectively serves as a hint that the linear address in question currently has no valid mapping. In one exemplary implementation, speculative operations that hit an unreal entry are discarded; architectural operations that hit an unreal entry discard the entry and perform a normal page walk, either obtaining a valid entry, or raising an architectural fault. | 09-18-2014 |
20140281260 | ESTIMATING ACCESS FREQUENCY STATISTICS FOR STORAGE DEVICE - Techniques are disclosed relating to determining statistics associated with the storage of data on a medium. In one embodiment, a computing system maintains a management statistic for a storage device, and uses the management statistic as a proxy for a workload statistic for a storage block within the storage device. In some embodiments, the storage block is a first storage block included within a second storage block of the storage device. In one embodiment, the management statistic is a timestamp indicative of when a write operation was performed for the second storage block; the workload statistic is a write frequency of the first storage block. In one embodiment, the management statistic is a number of read operations performed for the second storage block; the using includes deriving, based on the number of read operations, a read frequency for the first storage block as the workload statistic. | 09-18-2014 |
20140281261 | INCREASED ERROR CORRECTION FOR CACHE MEMORIES THROUGH ADAPTIVE REPLACEMENT POLICIES - A system, processor and method to reduce the overall detectable unrecoverable FIT rate of a cache by reducing the residency time of dirty lines in a cache. This is accomplished through selectively choosing different replacement policies during execution based on the DUE FIT target of the system. System performance and power is minimally affected while effectively reducing the DUE FIT rate. | 09-18-2014 |
20140281262 | DYNAMIC CACHING MODULE SELECTION FOR OPTIMIZED DATA DEDUPLICATION - Embodiments of the invention provide a method, system and computer program product for dynamic caching module selection for optimized data deduplication. In an embodiment of the invention, a method for dynamic caching module selection for optimized data deduplication is provided. The method includes receiving a request to retrieve data and classifying the request. The method also includes identifying from amongst multiple different caching modules each with a different configuration a particular caching module associated with the classification of the request. Finally, the method includes deduplicating the data in the identified caching module. | 09-18-2014 |
20140281263 | REPLAYING MEMORY TRANSACTIONS WHILE RESOLVING MEMORY ACCESS FAULTS - One embodiment of the present invention is a parallel processing unit (PPU) that includes one or more streaming multiprocessors (SMs) and implements a replay unit per SM. Upon detecting a page fault associated with a memory transaction issued by a particular SM, the corresponding replay unit causes the SM, but not any unaffected SMs, to cease issuing new memory transactions. The replay unit then stores the faulting memory transaction and any faulting in-flight memory transaction in a replay buffer. As page faults are resolved, the replay unit replays the memory transactions in the replay buffer—removing successful memory transactions from the replay buffer—until all of the stored memory transactions have successfully executed. Advantageously, the overall performance of the PPU is improved compared to conventional PPUs that, upon detecting a page fault, stop performing memory transactions across all SMs included in the PPU until the fault is resolved. | 09-18-2014 |
20140281264 | MIGRATION COUNTERS FOR HYBRID MEMORIES IN A UNIFIED VIRTUAL MEMORY SYSTEM - Embodiments of the approaches disclosed herein include a subsystem that includes an access tracking mechanism configured to monitor access operations directed to a first memory and a second memory. The access tracking mechanism detects an access operation generated by a processor for accessing a first memory page residing on the second memory. The access tracking mechanism further determines that the first memory page is included in a first subset of memory pages residing on the second memory. The access tracking mechanism further locates, within a reference vector, a reference bit that corresponds to the first memory page, and sets the reference bit. One advantage of the present invention is that memory pages in a hybrid system migrate as needed to increase overall memory performance. | 09-18-2014 |
20140297962 | INSTRUCTIONS AND LOGIC TO PROVIDE ADVANCED PAGING CAPABILITIES FOR SECURE ENCLAVE PAGE CACHES - Instructions and logic provide advanced paging capabilities for secure enclave page caches. Embodiments include multiple hardware threads or processing cores, a cache to store secure data for a shared page address allocated to a secure enclave accessible by the hardware threads. A decode stage decodes a first instruction specifying said shared page address as an operand, and execution units mark an entry corresponding to an enclave page cache mapping for the shared page address to block creation of a new translation for either of said first or second hardware threads to access the shared page. A second instruction is decoded for execution, the second instruction specifying said secure enclave as an operand, and execution units record hardware threads currently accessing secure data in the enclave page cache corresponding to the secure enclave, and decrement the recorded number of hardware threads when any of the hardware threads exits the secure enclave. | 10-02-2014 |
20140297963 | PROCESSING DEVICE - When an invalidation request is inputted from another processing device, a cache controller registers a set of an invalidation request address which the invalidation request has and an identifier of the other processing device which outputted the invalidation request in an invalidation history table. When a central processing unit attempts to read data at a first address not stored in a cache memory, if the first address is registered in the invalidation history table, the cache controller outputs a coherent read request containing the first address to the other processing device indicated by the identifier of the other processing device which outputted the invalidation request corresponding to the first address, or if the first address is not registered in the invalidation history table, the cache controller outputs a coherent read request containing the first address to all other processing devices. | 10-02-2014 |
20140304478 | Space Reclamation of Objects in a Persistent Cache - Disclosed herein are methods and structures for a computer cache that includes its own garbage collection component that reclaims space occupied by free objects in the cache, such that the cache avoids retaining deleted objects, thereby increasing cache hit ratios, and further permits short-lived dirty objects to be deleted without requiring them to be written back to an underlying store. | 10-09-2014 |
20140325159 | CONTROL OF PRE-FETCH TRAFFIC - Methods and systems for improved control of traffic generated by a processor are described. In an embodiment, when a device generates a pre-fetch request for a piece of data or an instruction from a memory hierarchy, the device includes a pre-fetch identifier in the request. This identifier flags the request as a pre-fetch request rather than a non-pre-fetch request, such as a time-critical request. Based on this identifier, the memory hierarchy can then issue an abort response at times of high traffic which suppresses the pre-fetch traffic, as the pre-fetch traffic is not fulfilled by the memory hierarchy. On receipt of an abort response, the device deletes at least a part of any record of the pre-fetch request and if the data/instruction is later required, a new request is issued at a higher priority than the original pre-fetch request. | 10-30-2014 |
20140359226 | ALLOCATION OF CACHE TO STORAGE VOLUMES - A technique for allocating a write cache allowed data size of a write cache from a plurality of write caches to each of a plurality of storage volumes, calculating a write cache utilization of the write cache for each of the respective storage volumes, wherein the write cache utilization is based on a write cache dirty data size of the write cache allocated to the respective storage volume divided by the write cache allowed data size of the write cache allocated to the respective storage volume, and adjusting the write cache allowed data size of the write cache allocated to storage volumes based on the write cache utilization of the write cache of the storage volumes. | 12-04-2014 |
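The utilization metric of entry 20140359226 is a plain ratio, and a rebalancing pass over it might look like the sketch below; the fixed adjustment step and the dictionary layout are assumptions, not from the abstract.

```python
def write_cache_utilization(dirty_size, allowed_size):
    """Utilization = dirty bytes divided by the allowed cache size
    for that volume (guarding against a zero allowance)."""
    return dirty_size / allowed_size if allowed_size else 0.0

def rebalance(volumes, step):
    """Shift allowed cache size from the least- to the most-utilized
    volume. 'volumes' maps name -> {'dirty': bytes, 'allowed': bytes};
    the fixed 'step' is an illustrative assumption."""
    util = {name: write_cache_utilization(v["dirty"], v["allowed"])
            for name, v in volumes.items()}
    hottest = max(util, key=util.get)
    coldest = min(util, key=util.get)
    if hottest != coldest and volumes[coldest]["allowed"] > step:
        volumes[coldest]["allowed"] -= step
        volumes[hottest]["allowed"] += step
```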
20140359227 | COMPUTER SYSTEM AND CACHE CONTROL METHOD - A computer system, comprising: a server; and a storage system, the server including an operating system, the storage system including a storage control part, wherein the operating system is configured to: transmit a read request for first data to the storage system in a case of receiving the read request for the first data not stored in a server cache from an application; store the first data received from the storage system into the server cache, and wherein the storage control part is configured to: read the first data from the storage cache, transmit the read first data to the server, and invalidate the first data stored in the storage cache. | 12-04-2014 |
20140372704 | LEAST-RECENTLY-USED (LRU) TO FIRST-DIRTY-MEMBER DISTANCE-MAINTAINING CACHE CLEANING SCHEDULER - A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line. | 12-18-2014 |
20140372705 | LEAST-RECENTLY-USED (LRU) TO FIRST-DIRTY-MEMBER DISTANCE-MAINTAINING CACHE CLEANING SCHEDULER - A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line. | 12-18-2014 |
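Entries 20140372704 and 20140372705 above share one abstract, and the clean-distance computation they describe can be sketched per congruence class as follows; the tuple representation and the handling of a class with no dirty line are assumptions.

```python
def clean_distance(lines):
    """Compute the clean distance for one congruence class.

    'lines' is a list of (is_dirty, reference_count) tuples; a lower
    reference count means closer to least-recently-used. The distance
    counts clean lines whose reference count is less than or equal to
    that of the LRU dirty line."""
    dirty_refs = [ref for dirty, ref in lines if dirty]
    clean_refs = [ref for dirty, ref in lines if not dirty]
    if not dirty_refs:
        return len(clean_refs)   # no dirty line: all clean (assumption)
    lru_dirty_ref = min(dirty_refs)
    return sum(1 for ref in clean_refs if ref <= lru_dirty_ref)
```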
20140372706 | SYSTEM AND METHOD FOR DYNAMIC ALLOCATION OF UNIFIED CACHE TO ONE OR MORE LOGICAL UNITS - A system and method provide a unified cache in a Small Computer System Interface (SCSI) device which can be dynamically allocated to one or more Logical Units (LUs). A cache balancer module of the SCSI device can allocate the entire unified cache to a single LU, or divide the unified cache among multiple LUs. The cache entries for each LU can be further classified based on Quality of Service (QoS) traffic classes within each LU thereby improving the QoS performance. The system provides a cache allocation table that maintains a unified cache allocation status for each LU. | 12-18-2014 |
20140372707 | Wear Leveling in a Memory System - Embodiments are disclosed for replacing one or more pages of a memory to level wear on the memory. In one embodiment, a system includes a page fault handling function and a memory address mapping function. Upon receipt of a page fault, the page fault handling function maps an evicted virtual memory address to a stressed page and maps a stressed virtual memory address to a free page using the memory address mapping function. | 12-18-2014 |
20140372708 | SCHEDULER TRAINING FOR MULTI-MODULE BYTE CACHING - Embodiments of the present invention provide a method, system and computer program product for dynamic caching module selection for optimized data deduplication. In an embodiment of the invention, a method for dynamic caching module selection for optimized data deduplication is provided. The method includes processing historically relevant byte streams in each of a multiplicity of byte caching modules to populate a table of associations between different classifications of the historically relevant byte streams and correspondingly optimal ones of the multiplicity of the byte caching modules. The method also includes receiving a request to retrieve data from a data source and classifying the request. The method yet further includes consulting the table to identify, from amongst the multiplicity of byte caching modules, a particular byte caching module associated with the classification of the request. Finally, the method includes deduplicating the data in the identified byte caching module. | 12-18-2014 |
20140379990 | CACHE NODE PROCESSING - A technique for cache node processing that includes generating a cache node in response to a request to write data to storage devices. If the logical block address (LBA) of the generated cache node is adjacent to the LBAs of cache nodes in a cache node list, then a check is made whether there are cache nodes that are sequential up to a predefined boundary. If there are cache nodes that are sequential up to the predefined boundary, then the data of the sequential cache nodes is flushed together as a group up to the predefined boundary. | 12-25-2014 |
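The adjacency-and-boundary check of entry 20140379990 might be sketched as below, assuming LBAs are sequential when each is the previous plus one and that the predefined boundary is an LBA alignment (for example, a stripe size).

```python
def sequential_flush_group(node_lbas, boundary):
    """Return the run of strictly sequential LBAs to flush as one
    group once it reaches the predefined boundary, or None if the run
    falls short. 'node_lbas' is a sorted list of cache node LBAs."""
    if not node_lbas:
        return None
    run = [node_lbas[0]]
    for lba in node_lbas[1:]:
        if lba != run[-1] + 1:
            break                          # run of sequential LBAs ends
        run.append(lba)
        if (lba + 1) % boundary == 0:
            return run                     # run has reached the boundary
    return None
```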
20140379991 | LATCH-FREE, LOG-STRUCTURED STORAGE FOR MULTIPLE ACCESS METHODS - A data manager may include a data opaque interface configured to provide, to an arbitrarily selected page-oriented access method, interface access to page data storage that includes latch-free access to the page data storage. In another aspect, a swap operation may be initiated, of a portion of a first page in cache layer storage to a location in secondary storage, based on initiating a prepending of a partial swap delta record to a page state associated with the first page, the partial swap delta record including a main memory address indicating a storage location of a flush delta record that indicates a location in secondary storage of a missing part of the first page. In another aspect, a page manager may initiate a flush operation of a first page in cache layer storage to a location in secondary storage, based on atomic operations with flush delta records. | 12-25-2014 |
20140379992 | TWO HANDED INSERTION AND DELETION ALGORITHM FOR CIRCULAR BUFFER - Exemplary embodiments of the present invention disclose a method and system for selecting an eviction location of an item to evict and an insertion location for a new item in a circular buffer. In a step, an exemplary embodiment specifies an insertion location with an insertion pointer. In another step, an exemplary embodiment increments an access count of a first item. In another step, an exemplary embodiment moves an eviction pointer clockwise when specifying an insertion location for the new item and the circular buffer is in eviction mode. In another step, an exemplary embodiment decrements an access count of a second item. In another step, an exemplary embodiment moves the insertion pointer to maintain a constant clockwise distance to the eviction location. In another step, an exemplary embodiment evicts the second item with an access count of zero and inserts the new item counterclockwise to the insertion location. | 12-25-2014 |
20140379993 | INITIATION OF CACHE FLUSHES AND INVALIDATIONS ON GRAPHICS PROCESSORS - Methods and systems may provide for receiving, at a graphics processor, a workload from a host processor and using a kernel on the graphics processor to issue a thread group for execution of the workload on the graphics processor. Additionally, one or more coherency messages may be initiated, by the graphics processor, in response to a thread-related condition of one or more caches on the graphics processor. In one example, the thread-related condition is associated with the execution of the workload on the graphics processor and indicates that the one or more caches on the graphics processor are not coherent with a system memory associated with the host processor. | 12-25-2014 |
20140379994 | DATA TRANSFER DEVICE, DATA TRANSFER METHOD, AND COMPUTER DEVICE - A local-memory side data transfer unit increments the number of addresses, reads out data from a local memory, and stores the data into a cache memory of a remote-memory side data transfer unit. For preventing data mismatching with the local memory from being stored into the cache memory, a cache clearing operation is executed in units of an elapse of a round trip time period for data transfer between the local memory and the remote memory. Alternatively, the cache clearing operation is executed upon receipt of a signal notifying data transfer of data stored at a specified address. | 12-25-2014 |
20150019822 | System for Maintaining Dirty Cache Coherency Across Reboot of a Node - Nodes in a data storage system having redundant write caches identify when one node fails. A remaining active node stops caching new write operations, and begins flushing cached dirty data. Metadata pertaining to each piece of data flushed from the cache is recorded. Metadata pertaining to new write operations is also recorded, and the corresponding data is flushed immediately when the new write operation involves data in the dirty data cache. When the failed node is restored, the restored node removes all data identified by the metadata from a write cache. Removing such data synchronizes the write cache with all remaining nodes without costly copying operations. | 01-15-2015 |
20150026410 | LEAST RECENTLY USED (LRU) CACHE REPLACEMENT IMPLEMENTATION USING A FIFO - A method and apparatus for calculating a victim way that is always the least recently used way. More specifically, in an m-set, n-way set associative cache, each way of a cache set comprises a valid bit that indicates that the way contains valid data. The valid bit is set when a way is written and cleared upon being invalidated, e.g., via a snoop address. The cache system comprises a cache LRU circuit which comprises an LRU logic unit associated with each cache set. The LRU logic unit comprises a FIFO of n-depth (in certain embodiments, the depth corresponds to the number of ways in the cache) and m-width. The FIFO performs push, pop and collapse functions. Each entry in the FIFO contains the encoded way number that was last accessed. | 01-22-2015 |
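The push-and-collapse FIFO of entry 20150026410 keeps the least-recently-used way at its head. A software sketch under the assumption that collapse simply removes the stale entry for the accessed way; hardware would implement this with shift logic rather than a Python list.

```python
class LruFifo:
    """FIFO-based LRU tracking for one cache set with n ways. On every
    access, the way number is pushed and any earlier entry for the same
    way is collapsed (removed), so the entry at the head is always the
    least-recently-used way. The list representation is illustrative."""

    def __init__(self, num_ways):
        self.fifo = list(range(num_ways))  # initially, way 0 is LRU

    def access(self, way):
        self.fifo.remove(way)   # collapse: drop the stale entry
        self.fifo.append(way)   # push: most recently used at the tail

    def victim(self):
        return self.fifo[0]     # head of the FIFO is always the LRU way
```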
20150026411 | CACHE SYSTEM FOR MANAGING VARIOUS CACHE LINE CONDITIONS - A cache controller configured to detect a wait type (i.e., a wait event) associated with an imprecise collision and/or contention event is disclosed. The cache controller is configured to operatively connect to a cache memory device, which is configured to store a plurality of cache lines. The cache controller is configured to detect a wait type due to an imprecise collision and/or contention event associated with a cache line. The cache controller is configured to cause transmission of a broadcast to one or more transaction sources (e.g., broadcast to the transaction sources internal to the cache controller) requesting the cache line, indicating that the transaction source can employ the cache line. | 01-22-2015 |
20150026412 | NON-BLOCKING QUEUE-BASED CLOCK REPLACEMENT ALGORITHM - One embodiment provides an eviction system for dynamically-sized caching comprising a non-blocking data structure for maintaining one or more data nodes. Each data node corresponds to a data item in a cache. Each data node comprises information relating to a corresponding data item. The eviction system further comprises an eviction module configured for removing a data node from the data structure, and determining whether the data node is a candidate for eviction based on information included in the data node. If the data node is not a candidate for eviction, the eviction module inserts the data node back into the data structure; otherwise the eviction module evicts the data node and a corresponding data item from the system and the cache, respectively. Data nodes of the data structure circulate through the eviction module until a candidate for eviction is determined. | 01-22-2015 |
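The circulate-until-candidate loop of entry 20150026412 is essentially a second-chance (CLOCK) sweep over a queue. In the sketch below, a plain deque stands in for the non-blocking data structure of the abstract, and the 'referenced' flag is an illustrative stand-in for the per-node eviction information.

```python
from collections import deque

def evict_one(queue, cache):
    """Circulate nodes until an eviction candidate is found. Each node
    is a dict with 'key' and 'referenced' fields (illustrative). A
    referenced node gets a second chance and re-enters the structure;
    the first unreferenced node is evicted along with its data item."""
    while queue:
        node = queue.popleft()
        if node["referenced"]:
            node["referenced"] = False   # second chance
            queue.append(node)           # re-insert into the structure
        else:
            cache.pop(node["key"], None) # evict node and its data item
            return node
    return None
```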
20150046658 | CACHE ORGANIZATION AND METHOD - A method and information processing system with improved cache organization are provided. Each register capable of accessing memory has associated metadata, which contains the tag, way, and line for a corresponding cache entry, along with a valid bit, allowing a memory access which hits a location in the cache to go directly to the cache's data array, avoiding the need to look up the address in the cache's tag array. When a cache line is evicted, any metadata referring to the line is marked as invalid. By reducing the number of tag lookups performed to access data in a cache's data array, the power that would otherwise be consumed by performing tag lookups is saved, thereby reducing power consumption of the information processing system, and the cache area needed to implement a cache having a desired level of performance may be reduced. | 02-12-2015 |
20150067265 | System and Method for Partitioning of Memory Units into Non-Conflicting Sets - A system and method of operation exploit the limited associativity of a single cache set to force observable cache evictions and discover conflicts. Loads are issued to input memory addresses, one at a time, until a cache eviction is detected. After observing a cache eviction on a load from an address, that address is added to a data structure representing the current conflict set. The cache is then flushed, and loads are issued to all addresses in the current conflict set, so that all known conflicting addresses are accessed first, ensuring that the next cache miss will occur on a different conflicting address. The process is repeated, issuing loads from all input memory addresses, incrementally finding conflicting addresses, one by one. Memory addresses that conflict in the cache belong to the same partition, whereas memory addresses belonging to different partitions do not conflict. | 03-05-2015 |
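The incremental conflict-set discovery of entry 20150067265 can be sketched as a loop over candidate addresses. The three primitives below (load, flush, eviction_observed) are assumptions standing in for platform-specific probing, such as timing-based miss detection.

```python
def discover_conflict_set(addresses, load, flush, eviction_observed):
    """Find addresses mapping to the same cache set, one by one.

    load(addr) issues a load to an address; flush() empties the cache;
    eviction_observed() reports whether the last load caused a visible
    eviction. After each discovered conflict, the cache is flushed and
    re-primed with all known conflicting addresses, so the next miss
    lands on a new conflicting address."""
    conflict_set = []
    flush()
    for addr in addresses:
        load(addr)
        if eviction_observed():
            conflict_set.append(addr)      # addr shares the cache set
            flush()
            for known in conflict_set:
                load(known)                # access known conflicts first
    return conflict_set
```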
20150074355 | EFFICIENT CACHING OF FILE SYSTEM JOURNALS - An apparatus includes a memory and a controller. The memory may be configured to implement a cache and store meta-data. The cache generally comprises one or more cache windows. Each of the one or more cache windows comprises a plurality of cache-lines configured to store information. Each of the plurality of cache-lines is associated with meta-data indicating one or more of a dirty state, an invalid state, and a partially dirty state. The controller is connected to the memory and may be configured to (i) detect an input/output (I/O) operation directed to a file system recovery log area, (ii) mark a corresponding I/O using a predefined hint value, and (iii) pass the corresponding I/O along with the predefined hint value to a caching layer. | 03-12-2015 |
20150081979 | STORAGE INTEGRATION FOR HOST-BASED WRITE-BACK CACHING - Techniques for enabling integration between a storage system and a host system that performs write-back caching are provided. In one embodiment, the host system can transmit to the storage system a command indicating that the host system intends to cache, in a write-back cache, writes directed to a range of logical block addresses (LBAs). The host system can further receive from the storage system a response indicating whether the command is accepted or rejected. If the command is accepted, the host system can initiate the caching of writes in the write-back cache. | 03-19-2015 |
20150081980 | METHOD AND APPARATUS FOR STORING A PROCESSOR ARCHITECTURAL STATE IN CACHE MEMORY - A method includes storing architectural state data associated with a processing unit in a cache memory using an allocate without fill mode. A system includes a processing unit, a cache memory, and a cache controller. The cache controller is to receive architectural state data associated with the processing unit and store at least a first portion of the architectural state data in the cache memory using a first fill mode responsive to a first value of a fill mode flag and store at least a second portion of the architectural state data in the cache memory using a second fill mode responsive to a second value of the fill mode flag, wherein the first fill mode differs from the second fill mode with respect to whether previous values of the architectural state data are retrieved prior to storing the first or second portions in the cache memory. | 03-19-2015 |
20150089147 | Maintenance Of Cache And Tags In A Translation Lookaside Buffer - A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressable as multiple different caches, and can be reconfigured to allot different spaces to each logical cache. Further, a collapsed TLB is an additional cache storing collapsed translations derived from the MTLB. Entries in the MTLB, the collapsed TLB, and other caches can be maintained for consistency. | 03-26-2015 |
20150095584 | UTILITY-BASED INVALIDATION PROPAGATION SCHEME SELECTION FOR DISTRIBUTED CACHE CONSISTENCY - A computerized method for dynamic consistency management of server side cache management units in a distributed cache, comprising: updating a server side cache management unit by a client; assigning each of a plurality of server side cache management units to one of a plurality of propagation topology groups according to an analysis of a plurality of cache usage measurements thereof, each of said propagation topology groups being associated with a different write request propagation scheme; and managing client update notifications of members of each of said propagation topology groups according to the respective different write request propagation scheme which is associated therewith. | 04-02-2015 |
20150095585 | CONSISTENT AND EFFICIENT MIRRORING OF NONVOLATILE MEMORY STATE IN VIRTUALIZED ENVIRONMENTS - Updates to nonvolatile memory pages are mirrored so that certain features of a computer system, such as live migration of applications, fault tolerance, and high availability, will be available even when nonvolatile memory is local to the computer system. Mirroring may be carried out when a cache flush instruction is executed to flush contents of the cache into nonvolatile memory. In addition, mirroring may be carried out asynchronously with respect to execution of the cache flush instruction by retrieving content that is to be mirrored from the nonvolatile memory using memory addresses of the nonvolatile memory corresponding to target memory addresses of the cache flush instruction. | 04-02-2015 |
20150100737 | Method And Apparatus For Conditional Storing Of Data Using A Compare-And-Swap Based Approach - According to at least one example embodiment, a method and corresponding apparatus for conditionally storing data include initiating an atomic sequence by executing, by a core processor, an instruction/operation designed to initiate an atomic sequence. Executing the instruction designed to initiate the atomic sequence includes loading content associated with a memory location into a first cache memory, and maintaining an indication of the memory location and a copy of the corresponding content loaded. A conditional storing operation is then performed; the conditional storing operation includes a compare-and-swap operation, executed by a controller associated with a second cache memory, based on the maintained copy of the content and the indication of the memory location. | 04-09-2015 |
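A minimal single-address-space sketch of the begin/conditional-store pair using C11 atomics; the split between the first cache memory and the second cache memory's controller is abstracted away here, and all names are illustrative.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    _Atomic long *addr; /* remembered memory location             */
    long snapshot;      /* copy of the content taken at load time */
} AtomicSeq;

/* Begin the sequence: load the location and keep the snapshot. */
static AtomicSeq seq_begin(_Atomic long *addr) {
    return (AtomicSeq){ .addr = addr, .snapshot = atomic_load(addr) };
}

/* Conditional store: succeeds only if nobody wrote in between. */
static bool seq_store(AtomicSeq *s, long newval) {
    return atomic_compare_exchange_strong(s->addr, &s->snapshot, newval);
}

int main(void) {
    _Atomic long cell = 41;
    AtomicSeq s = seq_begin(&cell);
    printf("store %s, cell=%ld\n",
           seq_store(&s, 42) ? "ok" : "failed", atomic_load(&cell));
    return 0;
}
```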
20150100738 | DYNAMICALLY DETERMINING A TRANSLATION LOOKASIDE BUFFER FLUSH PROMOTION THRESHOLD VALUE - A translation lookaside buffer (TLB) of a computing device is a cache of virtual to physical memory address translations. A TLB flush promotion threshold value indicates when all entries of the TLB are to be flushed rather than individual entries of the TLB. The TLB flush promotion threshold value is determined dynamically by the computing device by determining an amount of time it takes to flush and repopulate all entries of the TLB. A determination is then made as to the number of TLB entries that can be individually flushed and repopulated in that same amount of time. The threshold value is set based on (e.g., equal to) the number of TLB entries that can be individually flushed and repopulated in that amount of time. | 04-09-2015 |
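The threshold computation reduces to simple arithmetic: a sketch, assuming the two measured times (full flush plus repopulation, and one individual entry) are supplied by the caller; the example values are illustrative.

```c
#include <stdio.h>

/* Threshold = number of single entries individually flushable and
 * repopulated in the time a full TLB flush and repopulation takes. */
static unsigned tlb_flush_threshold(double full_flush_ns,
                                    double per_entry_ns) {
    if (per_entry_ns <= 0.0)
        return 0;
    return (unsigned)(full_flush_ns / per_entry_ns);
}

int main(void) {
    /* e.g. full flush+repopulate: 12000 ns, one entry: 400 ns -> 30 */
    unsigned t = tlb_flush_threshold(12000.0, 400.0);
    printf("flush the whole TLB when invalidating more than %u entries\n", t);
    return 0;
}
```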
20150100739 | Enhancing Lifetime of Non-Volatile Cache by Injecting Random Replacement Policy - A method, a system and a computer-readable medium for writing to a non-volatile cache memory are provided. The method maintains a write count associated with a set of memory locations. The method then selects a cache replacement policy based on the value of the write count and selects a block within the set for writing data using the selected cache replacement policy. The selected cache replacement policy can introduce a randomized selection. | 04-09-2015 |
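A sketch of the policy switch, with the wear threshold, set geometry, and baseline LRU victim all as illustrative assumptions: below the threshold the baseline policy is used, above it a random way is chosen to spread writes across the non-volatile cells.

```c
#include <stdio.h>
#include <stdlib.h>

#define WAYS 8
#define WEAR_THRESHOLD 10000

typedef struct {
    unsigned long write_count; /* writes seen by this set                */
    unsigned lru_victim;       /* way chosen by the baseline LRU policy  */
} CacheSet;

static unsigned choose_victim(CacheSet *set) {
    set->write_count++;
    if (set->write_count < WEAR_THRESHOLD)
        return set->lru_victim;       /* normal policy                  */
    return (unsigned)(rand() % WAYS); /* randomized to even out wear    */
}

int main(void) {
    CacheSet s = { .write_count = 9998, .lru_victim = 3 };
    printf("victim way: %u\n", choose_victim(&s)); /* still LRU: 3      */
    printf("victim way: %u\n", choose_victim(&s)); /* now randomized    */
    return 0;
}
```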
20150113224 | ATOMIC WRITE OPERATIONS FOR STORAGE DEVICES - Atomic write operations for storage devices are implemented by maintaining the data that would be overwritten in the cache until the write operation completes. After the write operation completes, including generating any related metadata, a checkpoint is created. After the checkpoint is created, the old data is discarded and the new data becomes the current data for the affected storage locations. If an interruption occurs prior to the creation of the checkpoint, the old data is recovered and any new data is discarded. If an interruption occurs after the creation of the checkpoint, any remaining old data is discarded and the new data becomes the current data. Write logs that indicate the locations affected by an in-progress write operation are used in some implementations. If neither all of the new data nor all of the old data is recoverable, a predetermined pattern can be written into the affected locations. | 04-23-2015 |
20150134913 | METHOD AND APPARATUS FOR CLEANING FILES IN A MOBILE TERMINAL AND ASSOCIATED MOBILE TERMINAL - A method for cleaning files stored in a mobile terminal is disclosed. The mobile terminal receives a file cleaning instruction from a user. In response to the file cleaning instruction, the mobile terminal identifies cache files based on the cache files' associated information and past user activities on the cache files and groups the identified cache files and their associated information into multiple cache file categories. At least one of the multiple cache file categories is located in an extended storage device of the mobile terminal (e.g., a SD card). Next, the mobile terminal displays information of the multiple cache file categories on the display, each cache file category having an associated file cleaning option and cleans at least one of the multiple cache file categories from the mobile terminal in accordance with a user choice of the corresponding file cleaning option. | 05-14-2015 |
20150143055 | VIRTUAL MACHINE BACKUP - A computer system comprises a processor unit arranged to run a hypervisor running one or more virtual machines, a cache connected to the processor unit and comprising a plurality of cache rows, each cache row comprising a memory address, a cache line, and an image modification flag, and a memory connected to the cache and arranged to store an image of at least one virtual machine. The processor unit is arranged to define a log in the memory, and the cache further comprises a cache controller arranged to set the image modification flag for a cache line modified by a virtual machine being backed up, periodically check the image modification flags, and write only the memory addresses of the flagged cache rows to the defined log. The processor unit is further arranged to monitor the free space available in the defined log and to trigger an interrupt if the free space available falls below a specific amount. | 05-21-2015 |
20150293847 | METHOD AND APPARATUS FOR LOWERING BANDWIDTH AND POWER IN A CACHE USING READ WITH INVALIDATE - Ephemeral data stored in a cache is read when needed but is not written to system memory so as to save power and bandwidth. In an embodiment, a no-writeback bit associated with the ephemeral data is set in response to a read-no-writeback instruction. Data in a cache line for which its no-writeback bit has been set is not written back into system memory. Accordingly, when evicting cache lines, if a cache line has a no-writeback bit set, then the data in that cache line is discarded without being written back to system memory. | 10-15-2015 |
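A sketch of the eviction path with a no-writeback bit; the line layout and the writeback helper are illustrative assumptions, with the helper declared extern as a stand-in for the memory write.

```c
#include <stdbool.h>

typedef struct {
    bool dirty;
    bool no_writeback;      /* set by the read-no-writeback instruction */
    unsigned char data[64];
} CacheLine;

extern void writeback(const CacheLine *line); /* hypothetical memory write */

/* On eviction, a line marked no-writeback is discarded even if dirty,
 * saving the memory-write bandwidth and power for ephemeral data. */
void evict_line(CacheLine *line) {
    if (line->dirty && !line->no_writeback)
        writeback(line);            /* normal dirty eviction */
    line->dirty = false;
    line->no_writeback = false;     /* slot is now clean and reusable */
}
```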
20150293848 | ENHANCING DATA PROCESSING PERFORMANCE BY CACHE MANAGEMENT OF FINGERPRINT INDEX - Various embodiments for improving hash index key lookup caching performance in a computing environment are provided. In one embodiment, for a cached fingerprint map having a plurality of entries corresponding to a plurality of data fingerprints, reference count information is used to determine a length of time to retain the plurality of entries in cache. Those of the plurality of entries having higher reference counts are retained longer than those having lower reference counts. | 10-15-2015 |
20150309939 | SELECTIVE CACHE WAY-GROUP POWER DOWN - A method and apparatus for selectively powering down a portion of a cache memory include determining a power down condition dependent upon a number of accesses to the cache memory. In response to detection of the power down condition, a group of cache ways included in the cache memory is selected dependent upon a number of cache lines in each cache way that are also included in another cache memory. The method further includes locking and flushing the selected group of cache ways, and then activating a low power mode for the selected group of cache ways. | 10-29-2015 |
20150347301 | SYNCHRONIZING UPDATES OF PAGE TABLE STATUS INDICATORS AND PERFORMING BULK OPERATIONS - A synchronization capability to synchronize updates to page tables by forcing updates in cached entries to be made visible in memory (i.e., in in-memory page table entries). A synchronization instruction is used that ensures after the instruction has completed that updates to the cached entries that occurred prior to the synchronization instruction are made visible in memory. Synchronization may be used to facilitate memory management operations, such as bulk operations used to change a large section of memory to read-only, operations to manage a free list of memory pages, and/or operations associated with terminating processes. | 12-03-2015 |
20150347309 | Cache Reclamation - In an example implementation, a method includes receiving an indication to reclaim memory from a cache, the cache including a plurality of data buckets each configured to store one or more records and corresponding access bits. The method also includes selecting a data bucket from the cache, and processing the selected data bucket. Processing the selected data bucket includes determining access bits of the selected data bucket that are clear, and expunging data records corresponding to those access bits from the cache. Processing the selected data bucket also includes determining access bits of the selected data bucket that are set and do not correspond to records relevant to outstanding requests by a process utilizing the cache, and clearing those access bits. The method also includes repeating selecting and processing data buckets until a stop criterion is satisfied. | 12-03-2015 |
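A sketch of processing one bucket, assuming a 32-record bucket with one access bit per record; record_outstanding and expunge are hypothetical helpers standing in for the request-tracking and removal machinery.

```c
#include <stdbool.h>
#include <stdint.h>

#define RECORDS_PER_BUCKET 32

typedef struct {
    uint32_t access_bits;  /* bit i set => record i was recently used */
    void *records[RECORDS_PER_BUCKET];
} Bucket;

extern bool record_outstanding(void *rec); /* hypothetical: in-flight request? */
extern void expunge(Bucket *b, int i);     /* hypothetical: drop record i      */

/* One pass: expunge clear-bit records, then give set-bit records a
 * second chance by clearing their bits, unless they back an
 * outstanding request by a process using the cache. */
void process_bucket(Bucket *b) {
    for (int i = 0; i < RECORDS_PER_BUCKET; i++) {
        uint32_t bit = UINT32_C(1) << i;
        if (b->records[i] == NULL)
            continue;
        if (!(b->access_bits & bit))
            expunge(b, i);              /* access bit clear: reclaim   */
        else if (!record_outstanding(b->records[i]))
            b->access_bits &= ~bit;     /* set but idle: clear the bit */
    }
}
```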
20150370712 | METHOD FOR PROCESSING ERROR DIRECTORY OF NODE IN CC-NUMA SYSTEM, AND NODE - A method for processing an error directory of a node in a cache coherence non-uniform memory access (CC-NUMA) system and a node are provided. The method effectively reduces the possibility of a system breakdown caused by accumulation of error bits in the directory memory of the CC-NUMA system. The method comprises: when a quantity of bits of a correctable error of a directory stored in a directory memory of the node is greater than a preset threshold, controlling all processors in the CC-NUMA system to write dirty data in a corresponding cache back to a corresponding main memory, flush the dirty data, and directly flush clean data in the corresponding cache; and controlling the CC-NUMA system to enter a quiescent state, clearing a record stored in the directory memory to zero, and controlling, after the zero clearing is completed, the CC-NUMA system to exit the quiescent state. | 12-24-2015 |
20160004645 | TWO HANDED INSERTION AND DELETION ALGORITHM FOR CIRCULAR BUFFER - Exemplary embodiments of the present invention disclose a method, program product, and system for selecting an item to evict and an insertion location for a new item in a circular buffer. A computer moves, in a circular buffer and in the same direction, both i) an insertion pointer to a first buffer entry and ii) an eviction pointer to a second buffer entry that includes an item. A constant number of buffer entries is maintained between the eviction pointer and the insertion pointer. The computer responds to an eviction of the item from the second buffer entry by inserting a new item into a third buffer entry. The third buffer entry is located such that it is pointed to by the eviction pointer before being pointed to by the insertion pointer, as the two pointers continue to move around the circular buffer. | 01-07-2016 |
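A sketch of the two hands under illustrative sizes: the eviction pointer leads the insertion pointer by a constant gap, so every slot is visited by the eviction hand before the insertion hand reuses it.

```c
#include <stdio.h>

#define SLOTS 8
#define GAP   3  /* constant distance maintained between the two hands */

static int buffer[SLOTS];
static int insert_hand = 0;
static int evict_hand  = GAP;

/* Evict the entry under the eviction hand, insert the new item at the
 * insertion hand; both hands then advance in the same direction. */
static int insert_item(int item) {
    int victim = buffer[evict_hand];
    buffer[insert_hand] = item;
    insert_hand = (insert_hand + 1) % SLOTS;
    evict_hand  = (evict_hand + 1) % SLOTS;
    return victim; /* the evicted item */
}

int main(void) {
    for (int i = 0; i < SLOTS; i++)
        buffer[i] = i;                        /* pre-populate          */
    printf("evicted %d\n", insert_item(100)); /* prints: evicted 3     */
    return 0;
}
```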
20160019055 | RUNTIME PATCHING OF AN OPERATING SYSTEM (OS) WITHOUT STOPPING EXECUTION - Techniques for runtime patching of an OS without stopping execution of the OS are presented. When a patch function is needed, it is loaded into the OS code. Threads of the OS that are in kernel mode have a flag set, and a jump is inserted at the location of the old function. When the old function is accessed, the jump uses a trampoline to check the flag; if the flag is set, processing returns to the old function, otherwise processing jumps to a given location of the patch. Flags are unset when exiting or entering kernel mode. | 01-21-2016 |
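A userspace sketch of the trampoline's decision, with a per-thread flag standing in for the kernel-mode flag and plain functions standing in for the inserted jump; everything here is illustrative rather than actual kernel patching machinery.

```c
#include <stdbool.h>
#include <stdio.h>

static _Thread_local bool in_kernel_when_patched; /* the per-thread flag */

static int old_function(void)     { return 1; }
static int patched_function(void) { return 2; }

/* The trampoline: a thread already in kernel mode when the patch
 * landed keeps the old code; everyone else takes the patch. */
static int trampoline(void) {
    if (in_kernel_when_patched)
        return old_function();
    return patched_function();
}

int main(void) {
    in_kernel_when_patched = true;
    printf("%d\n", trampoline()); /* 1: old code for the in-flight thread */
    in_kernel_when_patched = false; /* flag unset on kernel exit/entry    */
    printf("%d\n", trampoline()); /* 2: patched code                      */
    return 0;
}
```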
20160026576 | IMPLEMENTING COHERENCY WITH REFLECTIVE MEMORY - Techniques for updating data in a reflective memory region of a first memory device are described herein. In one example, a method for updating data in a reflective memory region of a first memory device includes receiving an indication that data is to be flushed from a cache device to the first memory device. The method also includes detecting that a memory address corresponding to the data is within the reflective memory region of the first memory device, and sending the data from the cache device to the first memory device with a flush operation. Additionally, the method includes determining that the data received by the first memory device is modified data. Furthermore, the method includes sending the modified data to a second memory device in a second computing system. | 01-28-2016 |
20160041911 | Push-Based Cache Invalidation Notification - In one embodiment, one or more first computing devices receive updated values for user data associated with a plurality of users; and for each of the user data for which an updated value has been received, determine one or more second systems that each have subscribed to be notified when the value of the user datum is updated and each have a pre-established relationship with the user associated with the user datum; and push notifications to the second systems indicating that the value of the user datum has been updated, without providing the updated value for the user datum to the second systems. | 02-11-2016 |
20160041927 | Systems and Methods to Manage Cache Data Storage in Working Memory of Computing System - Systems and methods for managing records stored in a storage cache are provided. A cache index is created and maintained to track where records are stored in buckets in the storage cache. The cache index maps the memory locations of the cached records to the buckets in the cache storage and can be quickly traversed by a metadata manager to determine whether a requested record can be retrieved from the cache storage. Bucket addresses stored in the cache index include a generation number of the bucket that is used to determine whether the cached record is stale. The generation number allows a bucket manager to evict buckets in the cache without having to update the bucket addresses stored in the cache index. In an alternative embodiment, non-contiguous portions of computing system working memory are used to cache data instead of a dedicated storage cache. | 02-11-2016 |
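A sketch of the generation check, assuming each bucket carries a generation counter bumped on eviction; field sizes and names are illustrative. Because staleness is detected at lookup time, the bucket manager can recycle buckets without ever rewriting the addresses stored in the index.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t generation; /* bumped every time the bucket is evicted */
} Bucket;

typedef struct {
    uint32_t bucket_id;
    uint32_t generation; /* generation when the record was cached */
} IndexEntry;

static Bucket buckets[16];

/* A cached record is valid only if its recorded generation still
 * matches the bucket's current generation. */
static bool index_entry_valid(const IndexEntry *e) {
    return buckets[e->bucket_id].generation == e->generation;
}

int main(void) {
    IndexEntry e = { .bucket_id = 5, .generation = 7 };
    buckets[5].generation = 7;
    printf("%s\n", index_entry_valid(&e) ? "hit" : "stale"); /* hit   */
    buckets[5].generation++;         /* bucket evicted and reused     */
    printf("%s\n", index_entry_valid(&e) ? "hit" : "stale"); /* stale */
    return 0;
}
```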
20160062893 | INTERCONNECT AND METHOD OF MANAGING A SNOOP FILTER FOR AN INTERCONNECT - An interconnect and method of managing a snoop filter within such an interconnect are provided. The interconnect is used to connect a plurality of devices, including a plurality of master devices where one or more of the master devices has an associated cache storage. The interconnect comprises coherency control circuitry to perform coherency control operations for data access transactions received by the interconnect from the master devices. In performing those operations, the coherency control circuitry has access to snoop filter circuitry that maintains address-dependent caching indication data, and is responsive to a data access transaction specifying a target address to produce snoop control data providing an indication of which master devices have cached data for the target address in their associated cache storage. The coherency control circuitry then responds to the snoop control data by issuing a snoop transaction to each master device indicated by the snoop control data, in order to cause a snoop operation to be performed in their associated cache storage and generate snoop response data. Analysis circuitry then determines an update condition from the snoop response data and, upon detection of the update condition, triggers performance of an update operation within the snoop filter circuitry to update the address-dependent caching indication data. By subjecting the snoop response data to such an analysis, it is possible to identify situations where the caching indication data has become out of date and to update that caching indication data accordingly, giving rise to significant performance benefits in the operation of the interconnect. | 03-03-2016 |
20160062902 | MEMORY ACCESS PROCESSING METHOD AND INFORMATION PROCESSING DEVICE - A memory access processing method includes storing, in a cache memory, a plurality of pages stored in a main memory; storing the plurality of pages in a buffer memory, each of the plurality of pages being associated with an identifier indicating whether that page is a zero page to be zero-cleared; allocating a page to be set to a zero page when a page fault occurs during execution of an access to the cache memory and execution of a process is stopped; updating the identifier corresponding to the allocated page to an identifier indicating that the allocated page is the zero page; resuming the execution of the process; controlling an access to the cache memory based on the identifier for each of the plurality of pages; and executing initialization of a page that corresponds to the allocated page and is included in the main memory. | 03-03-2016 |
20160070646 | METHOD AND SYSTEM FOR REMOVAL OF A CACHE AGENT - A method for removal of an offlining cache agent, including: initiating an offlining of the offlining cache agent from communicating with a plurality of participating cache agents while a first transaction is in progress; setting, based on initiating the offlining, an ignore response indicator corresponding to the offlining cache agent on each of the plurality of participating cache agents; offlining, based on setting the ignore response indicator, the offlining cache agent; and ignoring, based on setting the ignore response indicator, a first response to the first transaction from the offlining cache agent. | 03-10-2016 |
20160070649 | CACHE UNIT AND PROCESSOR - According to an embodiment, a cache unit includes: a first memory configured to temporarily hold data and an address of the data, a second memory configured to temporarily hold an address of particular data set in advance, and a controller configured to, when an instruction to load the data is made for a first specified address, search for a storage destination of the first specified address, output the data of the first specified address if the storage destination is the first memory, and output the particular data if the storage destination is the second memory. | 03-10-2016 |
20160077974 | ADAPTIVE CONTROL OF WRITE CACHE SIZE IN A STORAGE DEVICE - Technologies are described herein for adaptively controlling the size of a write cache in a storage device based on the time required to flush the cache. Upon receiving a write command at a controller for the storage device, an estimated cache flush time for the write cache is calculated based on the write commands contained therein. If the estimated cache flush time is greater than a maximum threshold time, the size of the write cache is decreased to control the cache flush time. If the estimated cache flush time is less than a minimum threshold time, the size of the write cache is increased to enhance random write performance. | 03-17-2016 |
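A sketch of the two-threshold control loop; the threshold values, step size, and the linear per-command cost model used to estimate flush time are all illustrative assumptions.

```c
#include <stdio.h>

#define MAX_FLUSH_MS 500.0
#define MIN_FLUSH_MS 100.0
#define STEP          16   /* cache-size adjustment, in commands */

static double per_command_ms = 2.5; /* modeled cost to flush one write */

/* Shrink when the estimated flush time overshoots the maximum, grow
 * when it undershoots the minimum, otherwise leave the size alone. */
static unsigned adapt_cache_size(unsigned size, unsigned pending_writes) {
    double est_flush_ms = pending_writes * per_command_ms;
    if (est_flush_ms > MAX_FLUSH_MS && size > STEP)
        return size - STEP; /* bound worst-case flush time            */
    if (est_flush_ms < MIN_FLUSH_MS)
        return size + STEP; /* improve random-write absorption        */
    return size;
}

int main(void) {
    printf("%u\n", adapt_cache_size(256, 300)); /* 750ms > 500ms -> 240 */
    printf("%u\n", adapt_cache_size(256, 20));  /*  50ms < 100ms -> 272 */
    return 0;
}
```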
20160077980 | METHODS, DEVICES AND SYSTEMS FOR CACHING DATA ITEMS - A method of storing data items may comprise receiving an uncompressed data item for storage from a client process of a plurality of client processes over a computer network; storing the uncompressed data item; acknowledging storage of the data item to the client process and receiving at least one additional uncompressed data item for storage from the client process or from another one of the plurality of client processes. The stored uncompressed data item may then be compressed and stored. Upon receiving a request for access to the data item from one of the plurality of client processes over the computer network, the compressed data item is decompressed before providing the decompressed data item to the requesting client process over the computer network. | 03-17-2016 |
20160085678 | Caching Methodology for Dynamic Semantic Tables - An apparatus includes a DOR read module that determines a degree of relatedness for a database entry stored in a concept table. The concept table is stored in cache and degree of relatedness is based on a comparison between a concept of data of the database entry and a concept of the concept table. A data usage module determines an amount of data usage for the database entry where the data usage includes an amount of usage of the database entry while in cache, a flushing rating module determines a cache flushing rating for the database entry, and a flushing module flushes the database entry from the cache in response to the cache flushing rating of the database entry being below a cache flush threshold. The cache flushing rating is determined from the degree of relatedness of the database entry and the amount of data usage of the database entry. | 03-24-2016 |
20160098352 | MEDIA CACHE CLEANING - Implementations disclosed herein provide a method comprising detecting a power supply status, determining a media cache cleaning scheme based on the detected power supply status, and performing the determined cleaning scheme until a predetermined threshold is reached. | 04-07-2016 |
20160140040 | FILTERING TRANSLATION LOOKASIDE BUFFER INVALIDATIONS - A filter includes filter entries, each corresponding to a mapping between a virtual memory address and a physical memory address and including a presence indicator indicative of which processing elements have the mapping present in their respective translation lookaside buffers (TLBs). A TLB invalidation (TLBI) instruction is received for a first mapping. If a first filter entry corresponding to the first mapping exists in the filter, the plurality of processing elements are partitioned, based on the presence indicator of the first filter entry, into a first partition of zero or more processing elements that have the first mapping present in their TLBs and a second partition of zero or more processing elements that do not. The TLBI instruction is sent to the processing elements in the first partition, and not to those in the second partition. | 05-19-2016 |
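A sketch of the partitioning step, assuming up to 64 processing elements tracked by a presence bitmask; the send hook merely prints in this stand-alone version, as a stand-in for the interconnect.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-in for delivering the TLBI over the fabric. */
static void send_tlbi(int pe, uint64_t vaddr) {
    printf("TLBI for %#llx -> PE %d\n", (unsigned long long)vaddr, pe);
}

/* Forward the invalidation only to PEs whose presence bit is set;
 * PEs in the second partition (bit clear) never see the TLBI. */
static void filter_tlbi(uint64_t presence_mask, int num_pes, uint64_t vaddr) {
    for (int pe = 0; pe < num_pes; pe++)
        if (presence_mask & (UINT64_C(1) << pe))
            send_tlbi(pe, vaddr);
}

int main(void) {
    filter_tlbi(0xA, 4, 0x7f00); /* mask 1010: only PEs 1 and 3 get it */
    return 0;
}
```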
20160140044 | SYSTEMS AND METHODS FOR NON-BLOCKING IMPLEMENTATION OF CACHE FLUSH INSTRUCTIONS - Systems and methods for non-blocking implementation of cache flush instructions are disclosed. As part of a method, data received in a write-back data holding buffer from a cache flushing operation is accessed, the data is flagged with a processor identifier and a serialization flag, and, responsive to the flagging, the cache is notified that the cache flush is completed. Subsequent to the notifying, access is provided to the data then present in the write-back data holding buffer to determine whether that data is flagged. | 05-19-2016 |
20160140048 | CACHING TLB TRANSLATIONS USING A UNIFIED PAGE TABLE WALKER CACHE - A core executes memory instructions. A memory management unit (MMU) coupled to the core includes a first cache that stores a plurality of final mappings of a hierarchical page table, a page table walker that traverses levels of the page table to provide intermediate results associated with respective levels for determining the final mappings, and a second cache that stores a limited number of intermediate results provided by the page table walker. In response to a request from the core to invalidate a first virtual address, the MMU compares a portion of the first virtual address to portions of entries in the second cache, based on a match criterion that depends on the level associated with each intermediate result stored in an entry in the second cache, and removes any entries in the second cache that satisfy the match criterion. | 05-19-2016 |
20160147671 | SYSTEMS AND METHODS OF WRITE CACHE FLUSHING - A data storage device includes a write cache, a non-volatile memory, and a controller coupled to the write cache and to the non-volatile memory. The controller is configured to, responsive to receiving a command to flush particular data from the write cache, attempt to fill a write block of data using the particular data and pending data obtained after receipt of the command. | 05-26-2016 |
20160154743 | FLUSHING DIRTY DATA FROM CACHE MEMORY | 06-02-2016 |
20160154755 | MEMORY SWITCHING PROTOCOL WHEN SWITCHING OPTICALLY-CONNECTED MEMORY | 06-02-2016 |
20160162408 | PARALLEL DESTAGING WITH REPLICATED CACHE PINNING - Methods, apparatus and computer program products implement embodiments of the present invention that include identifying non-destaged first data in a write cache. Upon detecting second data in a master read cache, the second data is pinned to the master read cache and to one or more backup read caches. Using the first data stored in the write cache and the second data stored in the master read cache, one or more parity values are calculated, and the first data and the one or more parity values are destaged. | 06-09-2016 |
20160162412 | COMPLETION PACKET RETURN - A completion packet may be returned before a data packet is written to a memory, if a field of the data packet indicates the data packet was sent due to a cache capacity eviction. The completion packet is returned after the data packet is written to the memory, if the field indicates the data packet was sent due to a flush operation. | 06-09-2016 |
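A sketch of the ordering decision, with the packet field and the two helpers as illustrative assumptions declared extern: an eviction-caused write can be acknowledged early, while a flush-caused write is acknowledged only after the memory write completes.

```c
#include <stdbool.h>

typedef struct {
    bool from_flush; /* field: set for flush, clear for capacity eviction */
    /* ... payload ... */
} DataPacket;

extern void write_to_memory(const DataPacket *p);   /* hypothetical */
extern void return_completion(const DataPacket *p); /* hypothetical */

void handle_packet(const DataPacket *p) {
    if (!p->from_flush) {
        return_completion(p); /* eviction: ack before the write lands */
        write_to_memory(p);
    } else {
        write_to_memory(p);   /* flush: ordering matters, ack after   */
        return_completion(p);
    }
}
```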
20160170670 | DATA MOVE ENGINE TO MOVE A BLOCK OF DATA | 06-16-2016 |
20160170887 | BATCHING MODIFIED BLOCKS TO THE SAME DRAM PAGE | 06-16-2016 |
20160179387 | Instruction and Logic for Managing Cumulative System Bandwidth through Dynamic Request Partitioning | 06-23-2016 |
20160179667 | INSTRUCTION AND LOGIC FOR FLUSH-ON-FAIL OPERATION | 06-23-2016 |
20160179687 | UPDATING PERSISTENT DATA IN PERSISTENT MEMORY-BASED STORAGE | 06-23-2016 |
20160203079 | FILTERING SNOOP TRAFFIC IN A MULTIPROCESSOR COMPUTING SYSTEM | 07-14-2016 |
20160253262 | SINGLETON CACHE MANAGEMENT PROTOCOL FOR HIERARCHICAL VIRTUALIZED STORAGE SYSTEMS | 09-01-2016 |
20160253265 | USING ACCESS-FREQUENCY HIERARCHY FOR SELECTION OF EVICTION DESTINATION | 09-01-2016 |
20160378658 | Hybrid Tracking of Transaction Read and Write Sets - Embodiments of the invention relate to tracking processor transactional read and write sets, thereby eliminating speculative mispredictions. Both non-speculative read set and write set indications are maintained for a transaction. The indications are stored in cache. In addition, load and write queues of addresses are maintained. The load queue of addresses relates to speculative members of a read set and the write queue of addresses relates to speculative members of a write set. For a received read request, a transaction resolution process takes place, and a resolution is performed if an address match in the write queue is detected. Similarly, for a received write request, the transaction resolution process additionally checks the load queue and the non-speculative read set for the pending address. | 12-29-2016 |
20160378659 | Hybrid Tracking of Transaction Read and Write Sets - Embodiments of the invention relate to tracking processor transactional read and write sets, thereby eliminating speculative mispredictions. Both non-speculative read set and write set indications are maintained for a transaction. The indications are stored in cache. In addition, load and write queues of addresses are maintained. The load queue of addresses relates to speculative members of a read set and the write queue of addresses relates to speculative members of a write set. For a received read request, a transaction resolution process takes place, and a resolution is performed if an address match in the write queue is detected. Similarly, for a received write request, the transaction resolution process additionally checks the load queue and the non-speculative read set for the pending address. | 12-29-2016 |
20160378662 | Hybrid Tracking of Transaction Read and Write Sets - Embodiments of the invention relate to tracking processor transactional read and write sets, thereby eliminating speculative mispredictions. Both non-speculative read set and write set indications are maintained for a transaction. The indications are stored in cache. In addition, load and write queues of addresses are maintained. The load queue of addresses relates to speculative members of a read set and the write queue of addresses relates to speculative members of a write set. For a received read request, a transaction resolution process takes place, and a resolution is performed if an address match in the write queue is detected. Similarly, for a received write request, the transaction resolution process additionally checks the load queue and the non-speculative read set for the pending address. | 12-29-2016 |
20160378663 | SYSTEM OPERATION QUEUE FOR TRANSACTION - Embodiments relate to a system operation queue for a transaction. An aspect includes determining whether a system operation is part of an in-progress transaction of a central processing unit (CPU). Another aspect includes based on determining that the system operation is part of the in-progress transaction, storing the system operation in a system operation queue corresponding to the in-progress transaction. Yet another aspect includes, based on the in-progress transaction ending, processing the system operation in the system operation queue. | 12-29-2016 |
20160378672 | HARDWARE APPARATUSES AND METHODS FOR DISTRIBUTED DURABLE AND ATOMIC TRANSACTIONS IN NON-VOLATILE MEMORY - Hardware apparatuses and methods for distributed durable and atomic transactions in non-volatile memory are described. In one embodiment, a hardware apparatus includes a hardware processor, a plurality of hardware memory controllers for each of a plurality of non-volatile data storage devices, and a plurality of staging buffers with a staging buffer for each of the plurality of hardware memory controllers, wherein each of the plurality of hardware memory controllers is to: write data of a data set that is to be written to the plurality of non-volatile data storage devices to its staging buffer, send confirmation to the hardware processor that the data is written to its staging buffer, and write the data from its staging buffer to its non-volatile data storage device on receipt of a commit command. | 12-29-2016 |
20160378673 | Hybrid Tracking of Transaction Read and Write Sets - Embodiments of the invention relate to tracking processor transactional read and write sets, thereby eliminating speculative mispredictions. Both non-speculative read set and write set indications are maintained for a transaction. The indications are stored in cache. In addition, load and write queues of addresses are maintained. The load queue of addresses relates to speculative members of a read set and the write queue of addresses relates to speculative members of a write set. For a received read request, a transaction resolution process takes place, and a resolution is performed if an address match in the write queue is detected. Similarly, for a received write request, the transaction resolution process additionally checks the load queue and the non-speculative read set for the pending address. | 12-29-2016 |
20160378682 | ACCESS LOG AND ADDRESS TRANSLATION LOG FOR A PROCESSOR - A processor maintains an access log indicating a stream of cache misses at a cache of the processor. In response to each of at least a subset of cache misses at the cache, the processor records a corresponding entry in the access log, indicating a physical memory address of the memory access request that resulted in the corresponding miss. In addition, the processor maintains an address translation log that indicates a mapping of physical memory addresses to virtual memory addresses. In response to an address translation (e.g., a page walk) that translates a virtual address to a physical address, the processor stores a mapping of the physical address to the corresponding virtual address at an entry of the address translation log. Software executing at the processor can use the two logs for memory management. | 12-29-2016 |
20170235674 | OPERATION OF A MULTI-SLICE PROCESSOR WITH HISTORY BUFFERS STORING TRANSACTION MEMORY STATE INFORMATION | 08-17-2017 |
20170235818 | OBJECT-BACKED BLOCK-BASED DISTRIBUTED STORAGE | 08-17-2017 |
20180024924 | ARTIFICIAL INTELLIGENCE-BASED CACHING MECHANISM | 01-25-2018 |
20190146917 | LOG-STRUCTURED STORAGE FOR DATA ACCESS | 05-16-2019 |