Document | Title | Date |
20080244183 | Storage system - An object of the present invention is to provide a storage system which is shared by a plurality of application programs, wherein optimum performance tuning for a cache memory can be performed for each of the individual application programs. The storage system of the present invention comprises a storage device which provides a plurality of logical volumes which can be accessed from a plurality of application programs, a controller for controlling input and output of data to and from the logical volumes in response to input/output requests from the plurality of application programs, and a cache memory for temporarily storing data input to and output from the logical volumes, wherein the cache memory is logically divided into a plurality of partitions which are exclusively assigned to the plurality of logical volumes respectively. | 10-02-2008 |
20080263282 | System for Caching Data - To ensure efficient access to a memory whose writing process is slow, there is provided a storage device for caching data read from a main memory and data to be written in the main memory. The storage device comprises a cache memory having a plurality of cache segments, one or more cache segments holding data matching data in the main memory being set in a protected state to protect them from being rewritten, an upper limit of the number of such protected cache segments being a predetermined reference number; and a cache controller that, in accordance with a write cache miss, allocates a cache segment selected from those cache segments which are not in the protected state to cache write data and writes the write data in the selected cache segment. | 10-23-2008 |
20080270704 | CACHE ARRANGEMENT FOR IMPROVING RAID I/O OPERATIONS - The embodiments of the invention provide a method, apparatus, etc. for a cache arrangement for improving RAID I/O operations. More specifically, a method begins by partitioning a data object into a plurality of data blocks and creating one or more parity data blocks from the data object. Next, the data blocks and the parity data blocks are stored within storage nodes. Following this, the method caches data blocks within a partitioned cache, wherein the partitioned cache includes a plurality of cache partitions. The cache partitions are located within the storage nodes, wherein each cache partition is smaller than the data object. Moreover, the caching within the partitioned cache only caches data blocks in parity storage nodes, wherein the parity storage nodes comprise a parity storage field. Thus, caching within the partitioned cache avoids caching data blocks within storage nodes lacking the parity storage field. | 10-30-2008 |
20080270705 | METHOD AND APPARATUS FOR APPLICATION-SPECIFIC DYNAMIC CACHE PLACEMENT - One embodiment of the present method and apparatus for application-specific dynamic cache placement includes grouping sets of data in a cache memory system into two or more virtual partitions and processing a load/store instruction in accordance with the virtual partitions, where the load/store instruction specifies at least one of the virtual partitions to which the load/store instruction is assigned. | 10-30-2008 |
20080282036 | Method and apparatus for instant playback of a movie title - Techniques for fragmenting a file or a collection of media data are disclosed. According to one aspect of the techniques, a file pertaining to a title is fragmented into a header and several tails or segments. The header is a continuous portion of the file while the segments are respective parts of the remaining portion of the file. The header is seeded substantially in all boxes, and none, one or more of the segments are distributed in each of the boxes in service. When a title is ordered, the header is instantly played back while the segments, if not locally available, are continuously fetched respectively from other boxes that have the segments. | 11-13-2008 |
20090006758 | SYSTEM BUS STRUCTURE FOR LARGE L2 CACHE ARRAY TOPOLOGY WITH DIFFERENT LATENCY DOMAINS - A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses. | 01-01-2009 |
20090006759 | SYSTEM BUS STRUCTURE FOR LARGE L2 CACHE ARRAY TOPOLOGY WITH DIFFERENT LATENCY DOMAINS - A cache memory which loads two memory values into two cache lines by receiving separate portions of a first requested memory value from a first data bus over a first time span of successive clock cycles and receiving separate portions of a second requested memory value from a second data bus over a second time span of successive clock cycles which overlaps with the first time span. In the illustrative embodiment a first input line is used for loading both a first byte array of the first cache line and a first byte array of the second cache line, a second input line is used for loading both a second byte array of the first cache line and a second byte array of the second cache line, and the transmission of the separate portions of the first and second memory values is interleaved between the first and second data busses. | 01-01-2009 |
20090024798 | Storing Data - The invention provides a method of storing data in a computing device, the method including the steps of creating a memory file system in non-pageable kernel memory of the computing device, writing data to the memory file system and transferring the written data to a pageable memory space allocated to a user process running on the computing device. An advantage of such a design is that, initially, the data of the memory based file system can be kept in the non-pageable kernel memory, minimising the need to perform context switches. However, the data can be transferred to pageable memory when necessary, such that the amount of kernel memory used by the file system can be minimised. | 01-22-2009 |
20090037660 | Time-based cache control - A time-based system and method are provided for controlling the management of cache memory. The method accepts a segment of data, and assigns a cache lock-time with a time duration to the segment. If a cache line is available, the segment is stored (in cache). The method protects the segment stored in the cache line from replacement until the expiration of the lock-time. Upon the expiration of the lock-time, the cache line is automatically made available for replacement. An available cache line is located by determining that the cache line is empty, or by determining that the cache line is available for a replacement segment. In one aspect, the cache lock-time is assigned to the segment by accessing a list with a plurality of lock-times having a corresponding plurality of time durations, and selecting from the list. In another aspect, the lock-time durations are configurable by the user. (An illustrative sketch of this lock-time scheme appears after this listing.) | 02-05-2009 |
20090049248 | Reducing Wiring Congestion in a Cache Subsystem Utilizing Sectored Caches with Discontiguous Addressing - A method and computer system for reducing the wiring congestion, required real estate, and access latency in a cache subsystem with a sectored and sliced lower cache by re-configuring sector-to-slice allocation and the lower cache addressing scheme. With this allocation, sectors having discontiguous addresses are placed within the same slice, and a reduced-wiring scheme is possible between two levels of lower caches based on this re-assignment of the addressable sectors within the cache slices. Additionally, the lower cache effective address tag is re-configured such that the address fields previously allocated to identifying the sector and the slice are switched relative to each other's location within the address tag. This re-allocation of the address bits enables direct slice addressing based on the indicated sector. | 02-19-2009 |
20090063775 | INSTRUMENT, A SYSTEM AND A CONTAINER FOR PROVISIONING A DEVICE FOR PERSONAL CARE TREATMENT, AND A DEVICE FOR PERSONAL CARE TREATMENT WITH SUCH A CONTAINER - The present invention provides a system and a method for a cache partitioning technique for application tasks based on the scheduling information in multiprocessors. Cache partitioning is performed dynamically based on the information of the pattern of task scheduling provided by the task scheduler. | 03-05-2009 |
20090083489 | L2 CACHE CONTROLLER WITH SLICE DIRECTORY AND UNIFIED CACHE STRUCTURE - A cache memory logically partitions a cache array having a single access/command port into at least two slices, and uses a first directory to access the first array slice while using a second directory to access the second array slice, but accesses from the cache directories are managed using a single cache arbiter which controls the single access/command port. In one embodiment, each cache directory has its own directory arbiter to handle conflicting internal requests, and the directory arbiters communicate with the cache arbiter. The cache array is arranged with rows and columns of cache sectors wherein a cache line is spread across sectors in different rows and columns, with a portion of the given cache line being located in a first column having a first latency and another portion of the given cache line being located in a second column having a second latency greater than the first latency. | 03-26-2009 |
20090144506 | METHOD AND SYSTEM FOR IMPLEMENTING DYNAMIC REFRESH PROTOCOLS FOR DRAM BASED CACHE - A method for implementing dynamic refresh protocols for DRAM based cache includes partitioning a DRAM cache into a refreshable portion and a non-refreshable portion, and assigning incoming individual cache lines to one of the refreshable portion and the non-refreshable portion of the cache based on a usage history of the cache lines. Cache lines corresponding to data having a usage history below a defined frequency are assigned to the refreshable portion of the cache, and cache lines corresponding to data having a usage history at or above the defined frequency are assigned to the non-refreshable portion of the cache. (An illustrative sketch of this assignment policy appears after this listing.) | 06-04-2009 |
20090157969 | BUFFER CACHE MANAGEMENT TO PREVENT DEADLOCKS - A method, computer program product, and data processing system for managing an input/output buffer cache for prevention of deadlocks are disclosed. In a preferred embodiment, automatic buffer cache resizing is performed whenever the number of free buffers in the buffer cache diminishes to below a pre-defined threshold. This resizing adds a pre-defined number of additional buffers to the buffer cache, up to a pre-defined absolute maximum buffer cache size. To prevent deadlocks, an absolute minimum number of free buffers are reserved to ensure that sufficient free buffers for performing a buffer cache resize are always available. In the event that the buffer cache becomes congested and cannot be resized further, threads whose buffer demands cannot be immediately satisfied are blocked until sufficient free buffers become available. | 06-18-2009 |
20090164730 | Method, apparatus, and system for shared cache usage to different partitions in a socket with sub-socket partitioning - A cache that supports sub-socket partitioning is discussed. Specifically, the cache supports different quality of service levels and victim cache line selection for a cache miss operation. The different quality of service levels allow for programmable ceiling usage and floor usage thresholds that allow for different techniques for victim cache line selection. | 06-25-2009 |
20090198901 | COMPUTER SYSTEM AND METHOD FOR CONTROLLING THE SAME - A computer system includes a main memory for storing a large amount of data, a cache memory that can be accessed at a higher speed than the main memory, a memory replacement controller for controlling the replacement of data between the main memory and the cache memory, and a memory controller capable of allocating one or more divided portions of the cache memory to each process unit. The memory replacement controller stores priority information for each process unit, and replaces lines of the cache memory based on a replacement algorithm taking the priority information into consideration, wherein the divided portions of the cache memory are allocated so that the storage area is partially shared between process units, after which the allocated amounts of cache memory are changed automatically. | 08-06-2009 |
20090240891 | METHOD, SYSTEM AND COMPUTER PROGRAM PRODUCT FOR DATA BUFFERS PARTITIONED FROM A CACHE ARRAY - Systems, methods and computer program products for data buffers partitioned from a cache array. An exemplary embodiment includes a method, in a processor, for providing data buffers partitioned from a cache array, the method including clearing cache directories associated with the processor to an initial state, obtaining a selected directory state from a control register preloaded by a service processor, and, in response to the control register including the desired cache state, sending load commands with an address and data, loading cache lines and cache line directory entries into the cache, and storing the specified data in the corresponding cache line. | 09-24-2009 |
20090248986 | Apparatus for and Method of Implementing Multiple Content Based Data Caches - A novel and useful mechanism enabling the partitioning of a normally shared L1 data cache into several different independent caches, wherein each cache is dedicated to a specific data type. To further optimize performance, each individual L1 data cache is placed in relatively close physical proximity to its associated register files and functional unit. By implementing separate independent L1 data caches, the content based data cache mechanism of the present invention increases the total size of the L1 data cache without increasing the time necessary to access data in the cache. Data compression and bus compaction techniques that are specific to a certain format can be applied to each individual cache with greater efficiency since the data in each cache is of a uniform type. | 10-01-2009 |
20090271573 | PARTITIONED MANAGEMENT DATA CACHE - A system and method for decreasing system management data access time. A system includes a device, a cache memory coupled to the device, and a cache memory refresh controller. The device provides system management information. The cache memory stores system management information. The system management information stored in the cache is partitioned into a first portion and a second portion. The cache memory refresh controller refreshes the system management information stored in the cache memory. The first portion is refreshed after expiration of a predetermined refresh time interval. The second portion is refreshed when the second portion is accessed. | 10-29-2009 |
20090313436 | CACHE REGIONS - A cache region can be created in a cache in response to receiving a cache region creation request from an application. A storage request from the application can identify the cache region and one or more objects to be stored in the cache region. Those objects can be stored in the cache region in response to receiving the storage request. | 12-17-2009 |
20090313437 | METHOD AND SYSTEM OF OPTIMAL CACHE PARTITIONING IN IPTV NETWORKS - In an IPTV network, one or more caches may be provided at the network nodes for storing video content in order to reduce bandwidth requirements. Cache functions such as cache effectiveness and cacheability may be defined and optimized to determine optimal partitioning of cache memory for caching the unicast services of the IPTV network. | 12-17-2009 |
20100017568 | Cache Used Both as Cache and Staging Buffer - In one embodiment, a cache comprises a data memory comprising a plurality of data entries, each data entry having capacity to store a cache block of data, and a cache control unit coupled to the data memory. The cache control unit is configured to dynamically allocate a given data entry in the data memory to store a cache block being cached or to store data that is not being cached but is being staged for retransmission on an interface to which the cache is coupled. | 01-21-2010 |
20100030971 | CACHE SYSTEM, CACHE SYSTEM CONTROL METHOD, AND INFORMATION PROCESSING APPARATUS - To provide a cache system that can dynamically change a cache capacity for each of a plurality of divided memory areas. The cache system includes a line counter that counts the number of effective lines for each memory area. An effective line is a cache line in which effective cache data is stored. Cache data to be invalidated at the time of changing the cache capacity is selected based on the number of effective lines counted by the line counter. | 02-04-2010 |
20100070715 | APPARATUS, SYSTEM AND METHOD FOR STORAGE CACHE DEDUPLICATION - An apparatus, system, and method are disclosed for deduplicating storage cache data. A storage cache partition table has at least one entry associating a specified storage address range with one or more specified storage partitions. A deduplication module creates an entry in the storage cache partition table wherein the specified storage partitions contain identical data to one another within the specified storage address range thus requiring only one copy of the identical data to be cached in a storage cache. A read module accepts a storage address within a storage partition of a storage subsystem, to locate an entry wherein the specified storage address range contains the storage address, and to determine whether the storage partition is among the one or more specified storage partitions if such an entry is found. | 03-18-2010 |
20100082906 | Apparatus and method for low touch cache management - In some embodiments, a processor-based system includes a processor, a system memory coupled to the processor, a mass storage device, a cache memory located between the system memory and the mass storage device, and code stored on the processor-based system to cause the processor-based system to utilize the cache memory. The code may be configured to cause the processor-based system to preferentially use only a selected size of the cache memory to store cache entries having less than or equal to a selected number of cache hits. Other embodiments are disclosed and claimed. | 04-01-2010 |
20100131715 | Updating Data within a Business Planning Tool - An apparatus is provided for updating data within a business planning tool. The apparatus comprises a computer memory. | 05-27-2010 |
20100191915 | SYSTEM AND COMPUTER PROGRAM PRODUCT FOR DYNAMIC QUEUE SPLITTING FOR MAXIMIZING THROUGHPUT OF QUEUE BASED OPERATIONS WHILE MAINTAINING PER-DESTINATION ORDER OF OPERATIONS - A system for providing dynamic queue splitting to maximize throughput of queue entry processing while maintaining the order of queued operations on a per-destination basis. Multiple queues are dynamically created by splitting heavily loaded queues in two. As queues become dormant, they are re-combined. Queue splitting is initiated in response to a trigger condition, such as a queue exceeding a threshold length. When multiple queues are used, the queue in which to place a given operation is determined based on the destination for that operation. Each queue in the queue tree created by the disclosed system can store entries containing operations for multiple destinations, but the operations for a given destination are always stored within the same queue. The queue into which an operation is to be stored may be determined as a function of the name of the operation destination. | 07-29-2010 |
20100235580 | Multi-Domain Management of a Cache in a Processor System - A system and method are provided for managing cache memory in a computer system. A cache controller portions a cache memory into a plurality of partitions, where each partition includes a plurality of physical cache addresses. Then, the method accepts a memory access message from the processor. The memory access message includes an address in physical memory and a domain identification (ID). A determination is made as to whether the address in physical memory is cacheable. If cacheable, the domain ID is cross-referenced to a cache partition identified by partition bits. An index is derived from the physical memory address, and a partition index is created by combining the partition bits with the index. A processor is granted access (read or write) to an address in cache defined by the partition index. (An illustrative sketch of this index derivation appears after this listing.) | 09-16-2010 |
20100268889 | COMPILER BASED CACHE ALLOCATION - Techniques are generally described for creating a compiler determined map for the allocation of memory space within a cache. An example computing system is disclosed having a multicore processor with a plurality of processor cores. At least one cache may be accessible to at least two of the plurality of processor cores. A compiler determined map may separately allocate a memory space to threads of execution processed by the processor cores. | 10-21-2010 |
20100293334 | LOCATION UPDATES FOR A DISTRIBUTED DATA STORE - Version indicators within an existing range can be associated with a data partition in a distributed data store. A partition reconfiguration can be associated with one of multiple partitions in the data store, and a new version indicator that is outside the existing range can be assigned to the reconfigured partition. Additionally, a broadcast message can be sent to multiple nodes, which can include storage nodes and/or client nodes that are configured to communicate with storage nodes to access data in a distributed data store. The broadcast message can include updated location information for data in the data store. In addition, a response message can be sent to a requesting node of the multiple nodes in response to receiving from that node a message that requests updated location information for the data. The response message can include the requested updated location information. | 11-18-2010 |
20100332761 | Reconfigurable Cache - A mechanism is provided for providing an improved reconfigurable cache. The mechanism partitions a large cache into inclusive cache regions with equal-ratio size or other coarse size increase. The cache controller includes an address decoder for the large cache with a large routing structure. The cache controller includes an additional address decoder for the small cache with a smaller routing structure. The additional address decoder for the small cache reduces decode, array access, and data return latencies. When only a small cache is actively in use, the rest of the cache can be put into a low-power mode to save power. | 12-30-2010 |
20100332762 | DIRECTORY CACHE ALLOCATION BASED ON SNOOP RESPONSE INFORMATION - Methods and apparatus relating to directory cache allocation that is based on snoop response information are described. In one embodiment, an entry in a directory cache may be allocated for an address in response to a determination that another caching agent has a copy of the data corresponding to the address. Other embodiments are also disclosed. | 12-30-2010 |
20110010504 | Combined Transparent/Non-Transparent Cache - In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion. (An illustrative sketch of this masking decode appears after this listing.) | 01-13-2011 |
20110087843 | Monitoring cache usage in a distributed shared cache - An apparatus, method, and system are disclosed. In one embodiment the apparatus includes a cache memory, which includes a number of sets. Each of the sets in the cache memory has several cache lines. The apparatus also includes at least one process resource table. The process resource table maintains a cache line occupancy count of a number of cache lines. Specifically, the cache line occupancy count describes the number of cache lines in the cache storing information utilized by a process running on a computer system. Additionally, the process resource table stores the occupancy count of fewer cache lines than the total number of cache lines in the cache memory. | 04-14-2011 |
20110107033 | METHOD AND APPARATUS FOR PROVIDING AN APPLICATION-LEVEL CACHE - An approach is provided for providing an application-level cache. A caching application configures at least one memory of a mobile terminal into an application-level cache with a locked region and a floating region. The caching application then causes, at least in part, actions that result in caching, into each of the locked region and the floating region, of data items that are anticipated to be requested via an application of the mobile terminal. | 05-05-2011 |
20110131378 | Managing Access to a Cache Memory - Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries; and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area. (An illustrative sketch of this per-area locking appears after this listing.) | 06-02-2011 |
20110145504 | Independently Controllable And Reconfigurable Virtual Memory Devices In Memory Modules That Are Pin-compatible With Standard Memory Modules - Various embodiments of the present invention are directed to multi-core memory modules. | 06-16-2011 |
20110185127 | PROCESSOR CIRCUIT WITH SHARED MEMORY AND BUFFER SYSTEM | 07-28-2011 |
20110314226 | SEMICONDUCTOR STORAGE DEVICE BASED CACHE MANAGER - In general, the present invention relates to semiconductor storage systems (SSDs). Specifically, the present invention relates to an SSD based cache manager. In a typical embodiment, a cache balancer is coupled to a set of cache meta data units. A set of cache algorithms utilizes the set of cache meta data units to determine optimal data caching operations. A cache adaptation manager is coupled to and sends volume information to the cache balancer. Typically, this information is computed using the set of cache algorithms. A monitoring manager is coupled to the cache adaptation manager. | 12-22-2011 |
20120042131 | Flexible use of extended cache using a partition cache footprint - An approach is provided for identifying cache extension sizes that correspond to different partitions that are running on a computer system. The approach extends a first hardware cache associated with a first processing core that is included in the processor's silicon substrate with a first memory allocation from a system memory area, with the system memory area being external to the silicon substrate and the first memory allocation corresponding to one of the plurality of cache extension sizes that corresponds to one of the partitions that is running on the computer system. The approach further extends a second hardware cache associated with a second processing core also included in the processor's silicon substrate with a second memory allocation from the system memory area, with the second memory allocation corresponding to another of the cache extension sizes that corresponds to a different partition that is being executed by the second processing core. | 02-16-2012 |
20120151147 | SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS - Systems and methods for managing destage conflicts in cache are provided. One system includes a cache partitioned into multiple ranks configured to store multiple storage tracks and a processor coupled to the cache. The processor is configured to perform the following method. One method includes allocating an amount of storage space in the cache to each rank and monitoring a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank. The method further includes destaging storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank. Also provided are physical computer storage mediums including code that, when executed by a processor, cause the processor to perform the above method. | 06-14-2012 |
20120151148 | SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Systems and methods for background destaging storage tracks from cache when one or more hosts are idle are provided. One system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. Also provided are physical computer storage mediums including a computer program product for performing the above method. | 06-14-2012 |
20120166730 | CIRCUIT TO EFFICIENTLY HANDLE DATA MOVEMENT WITHIN A CACHE CONTROLLER OR ON-CHIP MEMORY PERIPHERAL - The present invention is directed to a circuit for managing data movement between an interface supporting the PLB6 bus protocol, an interface supporting the AMBA AXI bus protocol, and internal data arrays of a cache controller and/or on-chip memory peripheral. The circuit implements register file buffers for gathering data to bridge differences between the bus protocols and bus widths in a manner which addresses latency and performance concerns of the overall system. | 06-28-2012 |
20120191917 | Managing Access to a Cache Memory - Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries; and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area. | 07-26-2012 |
20120198172 | Cache Partitioning in Virtualized Environments - A mechanism is provided in a virtual machine monitor for providing cache partitioning in virtualized environments. The mechanism assigns a virtual identification (ID) to each virtual machine in the virtualized environment. The processing core stores the virtual ID of the virtual machine in a special register. The mechanism also creates an entry for the virtual machine in a partition table. The mechanism may partition a shared cache using a vertical (way) partition and/or a horizontal partition. The entry in the partition table includes a vertical partition control and a horizontal partition control. For each cache access, the virtual machine passes the virtual ID along with the address to the shared cache. If the cache access results in a miss, the shared cache uses the partition table to select a victim cache line for replacement. | 08-02-2012 |
20120210070 | NON-BLOCKING DATA MOVE DESIGN - A mechanism for data buffering is provided. A portion of a cache is allocated as buffer regions, and another portion of the cache is designated as random access memory (RAM). One of the buffer regions is assigned to a processor. A data block is stored to the assigned buffer region of the cache according to an instruction of the processor. The data block is then stored from that buffer region of the cache to the memory. | 08-16-2012 |
20120226869 | FILE SERVER APPARATUS, MANAGEMENT METHOD OF STORAGE SYSTEM, AND PROGRAM - When the storage capacity of a file server is expanded using an online storage service, elimination of the upper-limit constraint on file size (a constraint of the online storage service) and reduction in the communication cost are realized. A kernel module providing logical volumes on the online storage service divides a file into fixed-length block files and stores and manages the block files to avoid the upper-limit constraint on file size. When a READ/WRITE request is generated for a mounted file system, only the necessary block files are downloaded from the online storage service and used, based on an offset value and size information, to optimize the communication and realize the communication cost reduction. | 09-06-2012 |
20120254544 | SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS - A system includes a cache partitioned into multiple ranks configured to store multiple storage tracks and a processor coupled to the cache. The processor is configured to perform the following method. One method includes allocating an amount of storage space in the cache to each rank and monitoring a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank. The method further includes destaging storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank. | 10-04-2012 |
20120254545 | SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - A system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. | 10-04-2012 |
20120272008 | STORAGE SYSTEM AND ITS DATA PROCESSING METHOD - A cache memory is utilized effectively because data redundancy elimination is executed. | 10-25-2012 |
20120278556 | STORAGE APPARATUS AND CACHE CONTROL METHOD - Optimizing the cache-resident area where cache residence control in units of LUs is employed in a storage apparatus that virtualizes capacity, by acquiring only a cache area of a size equal to the physical capacity assigned to the LU. An LU that is a logical space resident in cache memory is configured by a set of pages acquired by dividing a pool volume, as a physical space created by using a plurality of storage devices, into a predetermined size. When the LU to be resident in the cache memory is created, a capacity corresponding to the size of the LU is not initially acquired in the cache memory; instead, a cache capacity equal to the physical capacity allocated to a new page is acquired in the cache memory each time a page is newly allocated, and the new page is made resident in the cache memory. | 11-01-2012 |
20120278557 | Combined Transparent/Non-Transparent Cache - In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. Based on the non-transparent attribute indicating transparent, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion. | 11-01-2012 |
20130007370 | METHOD AND APPARATUS FOR MINIMIZING WORKING MEMORY CONTENTIONS IN COMPUTING SYSTEMS - Implementations of the present disclosure involve an apparatus and/or method for allocating, dividing and accessing memory of a multi-threaded computing system based at least in part on the structural hierarchy of the components of the computing system. Allocating partitions of memory based on the hierarchy structure of the computing system may isolate the threads of the computing system such that cache-memory contention by a plurality of executing threads may be reduced. In general, the apparatus and/or method may analyze the hierarchical structure of the components of the computing system utilized in the execution of applications and divide the available memory of the system between the various components. This division of the system memory creates exclusive partitions in the caches of the computing system based on the processor and cache hierarchy. The partitions may be used by different applications or by different sections of the same application to store accessed memory in cache for quick retrieval. | 01-03-2013 |
20130024621 | MEMORY-CENTERED COMMUNICATION APPARATUS IN A COARSE GRAINED RECONFIGURABLE ARRAY - The present invention relates to a coarse-grained reconfigurable array, comprising: at least one processor; a processing element array including a plurality of processing elements, and a configuration cache where commands being executed by the processing elements are saved; and a plurality of memory units forming a one-to-one mapping with the processor and the processing element array. The coarse-grained reconfigurable array further comprises a central memory performing data communications between the processor and the processing element array by switching the one-to-one mapping, such that when the processor transfers data from/to a main memory to/from a frame buffer, a significant bottleneck phenomenon that may occur due to the limited bandwidth and latency of a system bus can be alleviated. | 01-24-2013 |
20130031310 | COMPUTER SYSTEM - A computer system includes: a main storage unit, a processing executing unit sequentially executing processing to be executed on virtual processors; a level-1 cache memory shared among the virtual processors; a level-2 cache memory including storage areas partitioned based on the number of the virtual processors, the storage areas each (i) corresponding to one of the virtual processors and (ii) holding the data to be used by the corresponding one of the virtual processors; a context memory holding a context item corresponding to the virtual processor; a virtual processor control unit saving and restoring a context item of one of the virtual processors; a level-1 cache control unit; and a level-2 cache control unit. | 01-31-2013 |
20130097387 | MEMORY-BASED APPARATUS AND METHOD - Aspects of various embodiments are directed to memory circuits, such as cache memory circuits. In accordance with one or more embodiments, cache-access to data blocks in memory is controlled as follows. In response to a cache miss for a data block having an associated address on a memory access path, data is fetched for storage in the cache (and serving the request), while one or more additional lookups are executed to identify candidate locations to store data. An existing set of data is moved from a target location in the cache to one of the candidate locations, and the address of the one of the candidate locations is associated with the existing set of data. Data in this candidate location may, for example, thus be evicted. The fetched data is stored in the target location and the address of the target location is associated with the fetched data. | 04-18-2013 |
20130138889 | Cache Optimization Via Predictive Cache Size Modification - Systems and methods for cache optimization, the method comprising monitoring a cache access rate for one or more cache tenants in a computing environment, wherein a first cache tenant is allocated a first cache having a first cache size which may be adjusted; determining a cache profile for at least the first cache over one or more time intervals according to data collected during the monitoring; analyzing the cache profile for the first cache to determine an expected cache usage model for the first cache; and analyzing the cache usage model and factors related to cache efficiency for the one or more cache tenants to dictate one or more constraints that define boundaries for the first cache size. | 05-30-2013 |
20130138890 | METHOD AND APPARATUS FOR PERFORMING DYNAMIC CONFIGURATION - A method for performing dynamic configuration includes: freezing a bus between a dynamic configurable cache and a plurality of cores/processors by rejecting a request from any of the cores/processors during a bus freeze period, wherein the dynamic configurable cache is implemented with an on-chip memory; and adjusting a size of a portion of the dynamic configurable cache, wherein the portion of the dynamic configurable cache is capable of caching/storing information for one of the cores/processors. An associated apparatus is also provided. In particular, the apparatus includes the plurality of cores/processors, the dynamic configurable cache, and a dynamic configurable cache controller, and can operate according to the method. | 05-30-2013 |
20130166847 | INFORMATION PROCESSING APPARATUS AND CACHE CONTROL METHOD - According to one embodiment, an apparatus includes a storage module, a cache module, and a changing module. The cache module is configured to use a first cache data storage region in a storage region of a first storage device as a cache of the storage module, and to manage cache management information includes position information indicating a position of the first cache data storage region. The changing module is configured to store cache data stored in the first cache data storage region in a second cache data storage region in a storage region of a second storage device when it is requested to use the second cache data storage region as the cache of the storage module, and to update the position information. | 06-27-2013 |
20130246710 | STORAGE SYSTEM AND DATA MANAGEMENT METHOD - A storage system is provided with a plurality of physical storage devices, a cache memory, a control device that is coupled to the plurality of physical storage devices and the cache memory, and a buffer part. The buffer part is a storage region that is formed by using at least a part of a storage region of the plurality of physical storage devices and that is configured to temporarily store at least one target data element that is to be transmitted to a predetermined target. The control device stores a target data element into a cache region that has been allocated to a buffer region (that is a part of the cache memory and that is a storage region of a write destination of the target data element for the buffer part). The control device transmits the target data element from the cache memory. In the case in which a new target data element is generated, the control device executes a control in such a manner that the new target data element has a high tendency to be stored in a buffer region in which the transmitted target data element has been stored and to which a cache region has been allocated. | 09-19-2013 |
20130304994 | Per Thread Cacheline Allocation Mechanism in Shared Partitioned Caches in Multi-Threaded Processors - Systems and methods for allocation of cache lines in a shared partitioned cache of a multi-threaded processor. A memory management unit is configured to determine attributes associated with an address for a cache entry associated with a processing thread to be allocated in the cache. A configuration register is configured to store cache allocation information based on the determined attributes. A partitioning register is configured to store partitioning information for partitioning the cache into two or more portions. The cache entry is allocated into one of the portions of the cache based on the configuration register and the partitioning register. | 11-14-2013 |
20130332676 | CACHE AND MEMORY ALLOCATION FOR VIRTUAL MACHINES - In a cloud computing environment, a cache and a memory are partitioned into “colors”. The colors of the cache and the memory are allocated to virtual machines independently of one another. In order to provide cache isolation while allocating the memory and cache in different proportions, some of the colors of the memory are allocated to a virtual machine, but the virtual machine is not permitted to directly access these colors. Instead, when a request is received from the virtual machine for a memory page in one of the non-accessible colors, a hypervisor swaps the requested memory page with a memory page with a color that the virtual machine is permitted to access. The virtual machine is then permitted to access the requested memory page at the new color location. | 12-12-2013 |
20140006715 | SUB-NUMA CLUSTERING | 01-02-2014 |
20140082290 | Enhanced Wiring Structure for a Cache Supporting Auxiliary Data Output - A mechanism is provided in a data processing system for enhancing wiring structure for a cache supporting an auxiliary data output. The mechanism splits the data cache into a first data portion and a second data portion. The first data portion provides a first set of data elements and the second data portion provides a second set of data elements. The mechanism connects a first data path to provide the first set of data elements to a primary output and connects a second data path to provide the second set of data elements to the primary output. The mechanism feeds the first data path back into the second data path and feeds the second data path back into the first data path. The mechanism connects a secondary output to the second data path. | 03-20-2014 |
20140156936 | SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS - Storage tracks are destaged from each rank that uses greater than a predetermined percentage of the amount of storage space allocated to that rank, until the current amount of storage space used by each respective rank equals the predetermined percentage. Storage tracks are not destaged from any rank that uses less than or equal to the predetermined percentage of its allocated storage space. | 06-05-2014 |
20140156937 | SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. The storage tracks are refrained from being destaged from the write cache if the at least one host is not idle. Each rank is monitored for write operations from the at least one host, and a determination is made if the at least one host is idle with respect to each respective rank based on monitoring each rank for write operations from the at least one host such that the at least one host may be determined to be idle with respect to a first rank and not idle with respect to a second rank. | 06-05-2014 |
20140173211 | Partitioning Caches for Sub-Entities in Computing Devices - Some embodiments include a partitioning mechanism that partitions a cache memory into sub-partitions for sub-entities. In the described embodiments, the cache memory is initially partitioned into two or more partitions for one or more corresponding entities. During a partitioning operation, the partitioning mechanism is configured to partition one or more of the partitions in the cache memory into two or more sub-partitions for one or more sub-entities of a corresponding entity. A cache controller then uses a corresponding sub-partition for memory accesses by the one or more sub-entities. | 06-19-2014 |
20140173212 | ACQUIRING REMOTE SHARED VARIABLE DIRECTORY INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for acquiring remote shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer determining that a first thread of a first task requires shared resource data stored in a memory partition corresponding to a second thread of a second task. Embodiments also include the runtime optimizer requesting from the second thread, in response to determining that the first thread of the first task requires the shared resource data, SVD information associated with the shared resource data. Embodiments also include the runtime optimizer receiving from the second thread, the SVD information associated with the shared resource data. | 06-19-2014 |
20140258628 | SYSTEM, METHOD AND COMPUTER-READABLE MEDIUM FOR MANAGING A CACHE STORE TO ACHIEVE IMPROVED CACHE RAMP-UP ACROSS SYSTEM REBOOTS - A cache controller having a cache store and associated with a storage system maintains information stored in the cache store across a reboot of the cache controller. The cache controller communicates with a host computer system and a data storage system. The cache controller partitions the cache memory to include a metadata portion and log portion. A separate portion is used for cached data elements. The cache controller maintains a copy of the metadata in a separate memory accessible to the host computer system. Data is written to the cache store when the metadata log reaches its capacity. Upon a reboot, metadata is copied back to the host computer system and the metadata log is traversed to copy additional changes in the cache that have not been saved to the data storage system. | 09-11-2014 |
20140281249 | HASH-BASED SPATIAL SAMPLING FOR EFFICIENT CACHE UTILITY CURVE ESTIMATION AND CACHE ALLOCATION - Cache utility curves are determined for different software entities depending on how frequently their storage access requests lead to cache hits or cache misses. Not all access requests need be tested; rather, only a subset is, determined by whether a hash value of each current storage location identifier (such as an address or block number) meets one or more sampling criteria. (An illustrative sketch of this hash-based sampling appears after this listing.) | 09-18-2014 |
20140310473 | STORAGE I/O PATH PARTITIONING TO ELIMINATE I/O INTERFERENCE IN CONSOLIDATED SERVERS - A method for storage input/output (I/O) path configuration in a system that includes at least one storage device in network communication with at least one computer processor, the method comprising providing in the I/O path at least: (a) a block-based kernel-level filesystem, (b) an I/O cache module controlling an I/O cache implemented on a first computer readable medium, (c) a journaling module, and (d) a storage cache module controlling a storage cache implemented on a second computer readable medium, the second computer readable medium having a lower read/write speed than the first computer readable medium. The filesystem then translates, based on computer executable instructions executed by the at least one processor, a file I/O request made by an application executed by the at least one computer processor into a block I/O request, and the at least one processor fulfills the block I/O request from one of the I/O cache and the storage cache to complete the I/O operation. | 10-16-2014 |
20140359224 | DYNAMIC CACHE ALLOCATION IN A SOLID STATE DRIVE ENVIRONMENT - An invention is provided for dynamic cache allocation in a solid state drive environment. The invention includes partitioning a cache memory into a reserved partition and a caching partition, wherein the reserved partition begins at a beginning of the cache memory and the caching partition begins after an end of the reserved partition. Data is cached starting at a beginning of the caching partition. Then, when the caching partition is fully utilized, data is cached in the reserved partition. After receiving an indication of a power state change, such as when entering a sleep power state, marking data is written to the reserved partition. The marking data is examined after resuming the normal power state to determine whether a deep sleep power state was entered. When returning from a deep sleep power state, the beginning address of valid cache data within the reserved partition is determined after resuming a normal power state. | 12-04-2014 |
20140365730 | METHOD AND APPARATUS FOR PERFORMING DYNAMIC CONFIGURATION - A method for performing dynamic configuration includes: freezing a bus between a portion of a dynamic configurable cache and at least one of a plurality of cores/processors by pending a request from the at least one of the cores/processors to the portion of the dynamic configurable cache during a bus freeze period, wherein the plurality of cores/processors are allowed to access the dynamic configurable cache and the at least one of the plurality of cores/processors is allowed to access the portion of the dynamic configurable cache; and adjusting a size of the portion of the dynamic configurable cache, wherein the portion of the dynamic configurable cache is capable of caching/storing information for the at least one of the plurality of cores/processors. An associated apparatus is also provided. In particular, the apparatus includes the plurality of cores/processors, the dynamic configurable cache, and a dynamic configurable cache controller, and can operate according to the method. | 12-11-2014 |
20140372702 | HANDLING MEMORY PRESSURE IN AN IN-DATABASE SHARDED QUEUE - Handling memory pressure in an in-database sharded queue is described. Messages from a plurality of enqueuers are stored in a plurality of shards of a sharded queue. Messages from a first enqueuer are stored in a first shard. A queue table corresponding to the sharded queue is maintained. In volatile memory, a plurality of message caches is maintained, each message cache corresponding to a shard of the plurality of shards. Memory pressure is detected based on memory usage of the volatile memory. To store a specific message from the first enqueuer, the specific message is stored in rows of the queue table that are assigned to the first shard. When memory pressure is not detected, the specific message is stored in a first message cache corresponding to the first shard. Subscribers of the sharded queue are caused to dequeue messages from the plurality of shards. | 12-18-2014 |
20150019815 | UTILIZING GLOBAL DIGESTS CACHING IN DATA DEDUPLICATION OF WORKLOADS - For utilizing a global digests cache in data deduplication of difficult workloads in a data deduplication system using a processor device in a computing environment, input data is partitioned into data chunks and digest values are calculated for each of the data chunks. A search for similar data in a repository of data is performed. If the search for the similar repository data in the repository of data fails to find the similar repository data, input digests of the input data are matched with repository digests contained in the global digests cache for finding data matches. | 01-15-2015 |
20150019816 | UTILIZING GLOBAL DIGESTS CACHING IN SIMILARITY BASED DATA DEDUPLICATION - Input data is partitioned into data chunks and digest values are calculated for each of the data chunks. The positions of similar repository data are found in a repository of data for each of the data chunks. The repository digests of the similar repository data are located and loaded into the global digests cache. The global digests cache contains digests previously loaded by other deduplication processes. The input digests of the input data are matched with the repository digests contained in the global digests cache for locating data matches. The processor prefers to match the input digests of the input data with the repository digests contained in the global digests cache which are of the similar repository data, rather than repository digests which are of other repository data that was not determined as similar to the input data chunks. | 01-15-2015 |
20150019817 | TUNING GLOBAL DIGESTS CACHING IN A DATA DEDUPLICATION SYSTEM - Input data is partitioned into data chunks and digest values are calculated for each of the data chunks. The positions of similar repository data are found in a repository of data for each of the data chunks. The repository digests of the similar repository data are located and loaded into the global digests cache. The global digests cache contains digests previously loaded by other deduplication processes. The input digests of the input data are matched with the repository digests contained in the global digests cache for locating data matches. A sample of the repository digests is loaded into a search mechanism within the global digests cache. | 01-15-2015 |
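The three global-digests-cache entries above share one matching step: preferring digests of repository data already judged similar. A compact sketch under assumed data structures, with a digest-to-positions dictionary standing in for the global digests cache:

```python
# Illustrative only: match input chunk digests preferentially against cached
# digests that belong to repository data already judged similar, and only
# then against any other cached digests.

def match_digests(input_digests, global_digests_cache, similar_position_ids):
    """global_digests_cache: digest -> list of repository position ids (assumed shape)."""
    matches = {}
    for digest in input_digests:
        candidates = global_digests_cache.get(digest, [])
        preferred = [p for p in candidates if p in similar_position_ids]
        if preferred:
            matches[digest] = preferred[0]   # hit in data found similar: preferred
        elif candidates:
            matches[digest] = candidates[0]  # fall back to any cached hit
    return matches
```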
20150019818 | Maintaining Cache Size Proportional to Power Pack Charge - The present disclosure is directed to a method for managing a cache based on a charge of a power source. The method includes the step of determining a charge of the power source at a first time instance. The method also includes the step of designating as write back cache an amount of data in the cache which can be offloaded from the cache based on the charge of the power source at the first time instance. The method also includes the step of designating as write through cache an amount of data remaining in the cache which was not designated as write back cache. | 01-15-2015 |
20150032965 | COMPUTER SYSTEM, CACHE MANAGEMENT METHOD, AND COMPUTER - A computer system comprising a server on which an application runs, and a storage system that stores data to be used by the application, wherein a cache driver is configured to change the condition of a cache area from a first cache condition, in which data is readable from the cache area and writing of data into the cache area is prohibited, to a third cache condition, in which both reading of data from the cache area and writing of data into the cache area are prohibited. | 01-29-2015 |
20150039833 | Management of caches - A system and method for efficiently powering down banks in a cache memory for reducing power consumption. A computing system includes a cache array and a corresponding cache controller. The cache array includes multiple banks, each comprising multiple cache sets. In response to a request to power down a first bank of the multiple banks in the cache array, the cache controller selects a cache line of a given type in the first bank and determines whether a respective locality of reference for the selected cache line exceeds a threshold. If the threshold is exceeded, then the selected cache line is migrated to a second bank in the cache array. If the threshold is not exceeded, then the selected cache line is written back to lower-level memory. | 02-05-2015 |
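A minimal sketch of the drain-before-power-down policy described above; the locality score, threshold value, and class names are assumptions, since the abstract leaves them open:

```python
from dataclasses import dataclass, field

@dataclass
class CacheLine:
    tag: int
    locality: float      # hypothetical per-line reuse score
    dirty: bool = False

@dataclass
class Bank:
    lines: list = field(default_factory=list)
    powered: bool = True

LOCALITY_THRESHOLD = 0.5  # assumed; the abstract does not fix a value

def power_down_bank(bank: Bank, target: Bank, write_back):
    """Drain `bank`: migrate hot lines, write cold dirty lines back, power off."""
    for line in list(bank.lines):
        if line.locality > LOCALITY_THRESHOLD:
            target.lines.append(line)   # hot line: keep it cached in a live bank
        elif line.dirty:
            write_back(line)            # cold dirty line: flush to lower-level memory
        bank.lines.remove(line)
    bank.powered = False
```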
20150046656 | MANAGING AND SHARING STORAGE CACHE RESOURCES IN A CLUSTER ENVIRONMENT - Systems and methods are provided for managing storage cache resources among all servers within a cluster storage environment. A method includes partitioning a main cache of a corresponding node into a global cache and a local cache, sharing each global cache of each node with the other nodes of the multiple nodes, and dynamically adjusting the ratio between the amount of space of the main cache making up the global cache and the amount of space making up the local cache, based on the access latency and cache hit rate of each of the global cache and the local cache over a predetermined period of time. | 02-12-2015 |
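One plausible reading of the ratio adjustment is sketched below; the latency-weighted hit-rate score is an invented heuristic, as the abstract only names the inputs (access latency and cache hit rate per partition):

```python
# Shift space toward whichever partition (global or local) shows the better
# latency-weighted hit rate over the sampling window. Scoring rule assumed.

def adjust_ratio(global_stats, local_stats, ratio, step=0.05):
    """ratio = fraction of the main cache given to the global cache."""
    def score(s):
        # Higher hit rate and lower access latency both improve the score.
        return s["hit_rate"] / max(s["avg_latency_us"], 1e-9)

    if score(global_stats) > score(local_stats):
        ratio = min(0.9, ratio + step)   # global cache earns more space
    else:
        ratio = max(0.1, ratio - step)   # local cache earns more space
    return ratio
```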
20150046657 | SYSTEM AND METHOD FOR MANAGING CORRESPONDENCE BETWEEN A CACHE MEMORY AND A MAIN MEMORY - A system for managing correspondence between a cache memory, subdivided into a plurality of cache areas, and a main memory, subdivided into a plurality of memory areas, includes: a mechanism allocating, to each area of the main memory, at least one area of the cache memory; a mechanism temporarily assigning, to any data row stored in one of the areas of the main memory, a cache row included only in one cache area allocated to the main memory area wherein the data row is stored; and a mechanism generating and updating settings of the allocation by activating the allocation mechanism, the temporary assigning mechanism configured to determine a cache row to be assigned to a data row based on the allocation settings. | 02-12-2015 |
20150067262 | THREAD CACHE ALLOCATION - Systems and techniques are described for thread cache allocation. A described technique includes monitoring input and output accesses for a plurality of threads executing on a computing device that includes a cache comprising a quantity of memory blocks, determining a respective reuse intensity for each of the threads, determining a respective read ratio for each of the threads, determining a respective quantity of memory blocks for each of the partitions by optimizing a combination of cache utilities, each cache utility being based on the respective reuse intensity, the respective read ratio, and a respective hit ratio for a particular partition, and resizing one or more of the partitions to be equal to the respective quantity of the memory blocks for the partition. | 03-05-2015 |
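The abstract above names the inputs to the optimization but not its form. A sketch under the assumption that each thread's cache utility is the product of its reuse intensity, read ratio, and hit ratio, with blocks allocated proportionally:

```python
# Hypothetical combination rule: utility = reuse_intensity * read_ratio * hit_ratio,
# normalized across threads; each partition is resized to its utility share.

def allocate_blocks(threads, total_blocks):
    """threads: dict of thread_id -> (reuse_intensity, read_ratio, hit_ratio)."""
    utilities = {t: ri * rr * hr for t, (ri, rr, hr) in threads.items()}
    total = sum(utilities.values()) or 1.0   # guard against an all-zero workload
    return {t: int(total_blocks * u / total) for t, u in utilities.items()}

print(allocate_blocks({"t0": (0.8, 0.9, 0.5), "t1": (0.2, 0.5, 0.7)}, 1024))
```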
20150089144 | METHOD AND SYSTEM FOR AUTOMATIC SPACE ORGANIZATION IN TIER2 SOLID STATE DRIVE (SSD) CACHE IN DATABASES FOR MULTI PAGE SUPPORT - A system and method for adjusting space allocated for different page sizes on a recording medium includes dividing the recording medium into multiple blocks such that a block size of the multiple blocks supports a largest page size, and such that each of the multiple blocks is used for a single page size, and assigning an incoming page to a block based on a temperature of the incoming page. | 03-26-2015 |
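A sketch of temperature-based placement under assumed details (a two-way hot/cold split with an arbitrary threshold); the abstract does not fix either:

```python
# "Temperature" (here, an access count) decides which block group an incoming
# page joins; each block serves exactly one page size, as the abstract states.

def assign_block(page, hot_blocks, cold_blocks, hot_threshold=100):
    target = hot_blocks if page["access_count"] >= hot_threshold else cold_blocks
    for block in target:
        if block["page_size"] == page["size"] and block["free"] > 0:
            block["free"] -= 1
            return block
    return None   # caller must carve a new block for this page size
```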
20150095579 | APPARATUS AND METHOD FOR EFFICIENT HANDLING OF CRITICAL CHUNKS - An apparatus and method for efficient handling of critical chunks. For example, one embodiment of an apparatus comprises a plurality of agents to perform a respective plurality of data processing functions, at least one of the data processing functions comprising transmitting and receiving chunks of data to and from a memory controller, respectively; a system agent to coordinate requests for transmitting and receiving the chunks of data to and from the memory controller, the system agent comprising: a memory for temporarily storing the chunks of data during transmission between the agents and the memory controller; and scheduling logic to prioritize critical chunks over non-critical chunks across multiple outstanding requests while ensuring that the non-critical chunks do not result in starvation. | 04-02-2015 |
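The scheduling constraint above (critical chunks first, without starving non-critical chunks) can be illustrated with a simple aging counter; the bound of eight passes is an arbitrary assumption:

```python
from collections import deque

class ChunkScheduler:
    MAX_SKIPS = 8   # assumed aging bound before a non-critical chunk is forced out

    def __init__(self):
        self.critical = deque()
        self.non_critical = deque()
        self.skips = 0   # times a waiting non-critical chunk has been passed over

    def next_chunk(self):
        if self.non_critical and self.skips >= self.MAX_SKIPS:
            self.skips = 0
            return self.non_critical.popleft()  # anti-starvation escape hatch
        if self.critical:
            if self.non_critical:
                self.skips += 1                 # a waiting non-critical chunk aged
            return self.critical.popleft()      # normal case: critical chunks first
        self.skips = 0
        return self.non_critical.popleft() if self.non_critical else None
```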
20150121012 | METHOD AND APPARATUS FOR PROVIDING DEDICATED ENTRIES IN A CONTENT ADDRESSABLE MEMORY TO FACILITATE REAL-TIME CLIENTS - A device and method for partitioning a cache that is expected to operate with at least two classes of clients (such as real-time clients and non-real-time clients). A first portion of the cache is dedicated to real-time clients such that non-real-time clients are prevented from utilizing said first portion. | 04-30-2015 |
20150149727 | WRITE AND READ COLLISION AVOIDANCE IN SINGLE PORT MEMORY DEVICES - A method of avoiding a write collision in single port memory devices from two independent write operations is described. A first data object from a first write operation is divided into a first even sub-data object and a first odd sub-data object. A second data object from a second write operation is divided into a second even sub-data object and a second odd sub-data object. When the first write operation and the second write operation occur at the same time, the first even sub-data object is stored to a first single port memory device and the second odd sub-data object to a second single port memory device; likewise, the second even sub-data object is stored to the first single port memory device and the first odd sub-data object to the second single port memory device. | 05-28-2015 |
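The even/odd interleaving above reads naturally as a two-cycle schedule in which each device accepts exactly one sub-object per cycle, so neither single-port device ever sees two simultaneous writes. A sketch of that reading, with dictionaries standing in for the memory devices:

```python
# Illustrative only; the two-cycle schedule is one plausible reading of the
# abstract, and all names are hypothetical.

def split_even_odd(data: bytes):
    return data[0::2], data[1::2]   # (even-indexed bytes, odd-indexed bytes)

def concurrent_write(mem0: dict, mem1: dict, addr, first: bytes, second: bytes):
    first_even, first_odd = split_even_odd(first)
    second_even, second_odd = split_even_odd(second)
    # Cycle 1: device 0 takes writer 1's even half; device 1 takes writer 2's odd half.
    mem0[("w1", addr, "even")] = first_even
    mem1[("w2", addr, "odd")] = second_odd
    # Cycle 2: the remaining halves go to the same devices, one write each.
    mem0[("w2", addr, "even")] = second_even
    mem1[("w1", addr, "odd")] = first_odd
```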
20150301956 | DATA STORAGE SYSTEM WITH CACHING USING APPLICATION FIELD TO CARRY DATA BLOCK PROTECTION INFORMATION - In a data storage system in which a host system transfers data to a data storage controller having cache memory, the data storage controller can use a designated field of each of several cache data blocks, such as an application (APP) field, to contain protection information from fields of a host data block, such as the guard (GRD) and reference (REF) fields as well as the APP field. | 10-22-2015 |
20150309740 | DATA DECOMPRESSION USING A CONSTRUCTION AREA - For serving sequential read patterns from a compressed journal storage system, a construction area cache algorithm is used to temporarily store the read and decompressed data in user-view sequential order to minimize disk I/Os and CPU utilization while serving the data to the user. | 10-29-2015 |
20150378914 | IMPLEMENTING ADVANCED CACHING - Embodiments are disclosed for implementing a priority queue in a storage device, e.g., a solid state drive. At least some of the embodiments can use an in-memory set of blocks to store items until a block is full, and commit the full block to the storage device. Upon storing a full block, a block having the lowest priority can be deleted. An index storing correspondences between items and blocks can be used to update priorities and indicate deleted items. By using the in-memory blocks and index, operations transmitted to the storage device can be reduced. | 12-31-2015 |
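A minimal sketch of the block-buffered queue described above, assuming lower numbers mean higher priority and using Python lists in place of actual device I/O:

```python
# Items accumulate in an in-memory block; a full block is committed in one
# device operation; the committed block whose best entry is worst overall is
# dropped to bound space; deletions only touch the index. All names assumed.

class SSDPriorityQueue:
    def __init__(self, block_size, max_blocks):
        self.block_size = block_size
        self.max_blocks = max_blocks
        self.buffer = []   # in-memory block currently being filled
        self.blocks = []   # committed ("on-device") blocks, each sorted
        self.index = {}    # item -> containing block; absent means deleted

    def push(self, priority, item):
        self.buffer.append((priority, item))
        if len(self.buffer) >= self.block_size:
            block = sorted(self.buffer)   # one device write for the whole block
            self.blocks.append(block)
            for _, it in block:
                self.index[it] = block
            self.buffer = []
            if len(self.blocks) > self.max_blocks:
                worst = max(self.blocks, key=lambda b: b[0][0])
                self.blocks.remove(worst)
                for _, it in worst:
                    self.index.pop(it, None)

    def delete(self, item):
        self.index.pop(item, None)   # logical delete: no device write needed

    def pop_min(self):
        candidates = [min(self.buffer)] if self.buffer else []
        for block in self.blocks:
            live = [e for e in block if e[1] in self.index]
            if live:
                candidates.append(live[0])   # blocks are sorted: first live is best
        if not candidates:
            return None
        entry = min(candidates)
        if entry in self.buffer:
            self.buffer.remove(entry)
        else:
            self.index.pop(entry[1], None)
        return entry[1]
```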
20150378937 | SYSTEMS AND METHODS FOR LOCKING CACHED STORAGE - The present disclosure relates to systems and methods for locking a storage device to prevent inadvertent modification when the device is mounted on a different system or different host. The method can include selecting, on a storage device, a location and contents of a byte region for locking, where the byte region comprises a boot sector of the device. The method can also include encoding the selected contents of the byte region, and locking the device by replacing the contents of the identified byte region with the encoded byte region at the identified location on the device. In some embodiments, encoding the selected contents of the byte region can include inverting the contents of the selected byte region using a binary NOT operation. In some embodiments, encoding the selected contents of the byte region can include modifying the selected contents of the byte region based on a generated unique identifier. | 12-31-2015 |
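The NOT-based encoding above is easy to demonstrate; the sketch below uses ordinary file I/O in place of raw device access, and a 512-byte boot-sector region as an assumed default:

```python
# Applying the transformation twice restores the original bytes, so the same
# routine both locks and unlocks. Reading the boot sector back gives garbage
# while locked, which is what prevents another host from mounting the device.

def invert(region: bytes) -> bytes:
    return bytes(b ^ 0xFF for b in region)   # bitwise NOT of every byte

def toggle_lock(path: str, offset: int = 0, length: int = 512):
    """Lock (or unlock) by replacing the byte region with its inversion."""
    with open(path, "r+b") as dev:
        dev.seek(offset)
        region = dev.read(length)
        dev.seek(offset)
        dev.write(invert(region))
```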
20160019156 | DISK CACHE ALLOCATION - Implementations disclosed herein provide a method comprising segregating a disk cache into a plurality of allocation units, and allocating the plurality of allocation units out-of-order. | 01-21-2016 |
20160019158 | Method And Apparatus For A Shared Cache With Dynamic Partitioning - Aspects include computing devices, systems, and methods for dynamically partitioning a system cache by sets and ways into component caches. A system cache memory controller may manage the component caches and manage access to the component caches. The system cache memory controller may receive system cache access requests and reserve locations in the system cache corresponding to the component caches correlated with component cache identifiers of the requests. Reserving locations in the system cache may activate the locations in the system cache for use by a requesting client, and may also prevent other clients from using the reserved locations in the system cache. Releasing the locations in the system cache may deactivate the locations in the system cache and allow other clients to use them. A client reserving locations in the system cache may change the amount of locations it has reserved within its component cache. | 01-21-2016 |
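A bookkeeping sketch of reserve/release by sets and ways; the rectangle-of-locations model and the ownership table are illustrative assumptions:

```python
# A component cache identifier maps to a reserved rectangle of
# (set range x way range); reserving marks those locations for one client,
# releasing frees them for others.

class SystemCache:
    def __init__(self, num_sets, num_ways):
        self.owner = [[None] * num_ways for _ in range(num_sets)]
        self.components = {}   # component_cache_id -> (set_range, way_range)

    def reserve(self, cc_id, set_range, way_range):
        for s in set_range:
            for w in way_range:
                if self.owner[s][w] is not None:
                    raise ValueError("location already reserved by another client")
        for s in set_range:
            for w in way_range:
                self.owner[s][w] = cc_id     # activate for the requesting client
        self.components[cc_id] = (set_range, way_range)

    def release(self, cc_id):
        set_range, way_range = self.components.pop(cc_id)
        for s in set_range:
            for w in way_range:
                self.owner[s][w] = None      # deactivate; other clients may reuse
```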
20160034398 | CACHE-COHERENT MULTIPROCESSOR SYSTEM AND A METHOD FOR DETECTING FAILURES IN A CACHE-COHERENT MULTIPROCESSOR SYSTEM - A cache-coherent multiprocessor system comprising processing units, a shared memory resource accessible by the processing units, the shared memory resource being divided into at least one shared region, at least one first region, and at least one second region, a first cache, a second cache, a coherency unit, and a monitor unit, wherein the monitor unit is adapted to generate an error signal when the coherency unit affects the at least one first region due to a memory access from the second processing unit and/or when the coherency unit affects the at least one second region due to a memory access from the first processing unit; and a method for detecting failures in such a cache-coherent multiprocessor system. | 02-04-2016 |
20160085676 | Managing Access to a Cache Memory - Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries; and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area. | 03-24-2016 |
20160085679 | Managing Access to a Cache Memory - Managing access to a cache memory includes dividing said cache memory into multiple cache areas, each cache area having multiple entries; and providing at least one separate lock attribute for each cache area such that only a processor thread having possession of the lock attribute corresponding to a particular cache area can update that cache area. | 03-24-2016 |
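The two entries above describe per-area lock attributes; a minimal sketch with one mutex per cache area (the hash-based mapping of entries to areas is an assumption):

```python
import threading

class LockedCache:
    """One lock per cache area; a thread must hold the area's lock to update it."""

    def __init__(self, num_areas):
        self.areas = [dict() for _ in range(num_areas)]
        self.locks = [threading.Lock() for _ in range(num_areas)]  # separate lock attribute per area

    def update(self, key, value):
        area = hash(key) % len(self.areas)   # assumed mapping of entries to areas
        with self.locks[area]:               # only the lock holder may update this area
            self.areas[area][key] = value
```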
20160103764 | METHODS AND SYSTEMS FOR CACHE MANAGEMENT IN STORAGE SYSTEMS - Methods and systems for managing caching mechanisms in storage systems are provided where a global cache management function manages multiple independent cache pools and a global cache pool. As an example, the method includes: splitting a cache storage into a plurality of independently operating cache pools, each cache pool comprising storage space for storing a plurality of cache blocks for storing data related to an input/output (“I/O”) request and metadata associated with each cache pool; receiving the I/O request for writing data; operating a hash function on the I/O request to assign the I/O request to one of the plurality of cache pools; and writing the data of the I/O request to one or more of the cache blocks associated with the assigned cache pool. In an aspect, this allows efficient I/O processing across multiple processors simultaneously. | 04-14-2016 |
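A minimal sketch of the hash-based pool assignment; keying the hash on the request's logical block address is an assumption, since the abstract only says the hash operates on the I/O request:

```python
# Each I/O request hashes to exactly one independently operating pool, so
# different processors can work different pools without shared locking.

import hashlib

class PooledCache:
    def __init__(self, num_pools):
        self.pools = [{} for _ in range(num_pools)]   # each pool: lba -> data

    def pool_for(self, lba: int) -> dict:
        digest = hashlib.md5(str(lba).encode()).digest()
        return self.pools[digest[0] % len(self.pools)]

    def write(self, lba: int, data: bytes):
        self.pool_for(lba)[lba] = data   # cache blocks live only in the assigned pool
```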
20160117251 | MANAGING METHOD FOR CACHE MEMORY OF SOLID STATE DRIVE - A managing method for a cache memory of a solid state drive includes the following steps. When the solid state drive decides to perform a garbage collection, the storing space of the cache memory is divided into plural storing portions according to at least one of: the command type of an access command, the access data size of the access command, and the free space of the drive. A first storing portion of the cache memory is set as a buffering unit for garbage collection. A second storing portion of the cache memory is set as a buffering unit for writing. | 04-28-2016 |
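A sketch of one possible sizing rule for the two buffering units; the fractions and conditions below are invented heuristics, since the abstract names the inputs but not the policy:

```python
# Give garbage collection more room when the drive is nearly full; keep more
# room for host writes when the workload is streaming large sequential data.

def split_cache(total_bytes, cmd_is_sequential, write_size, drive_free_bytes):
    gc_fraction = 0.6 if drive_free_bytes < 0.1 * total_bytes else 0.3
    if cmd_is_sequential and write_size > 1 << 20:
        gc_fraction = max(0.2, gc_fraction - 0.1)
    gc_bytes = int(total_bytes * gc_fraction)
    return gc_bytes, total_bytes - gc_bytes   # (GC buffer, write buffer)
```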
20160124861 | CACHE MEMORY AND METHOD FOR ACCESSING CACHE MEMORY - A cache memory is equipped with a cache memory area, a conversion information storing unit, and a conversion circuit. In the cache memory area, a plurality of sets are divided into a plurality of sectors. The conversion information storing unit stores, for each of the plurality of sectors, conversion information for converting a relative set index within a sector into a set index in the cache memory area. Using sector identification information that identifies an access-target sector and the conversion information stored in the conversion information storing unit, the conversion circuit converts the relative set index in the identified sector into a set index that indicates the set to be accessed by the processor in the cache memory area. | 05-05-2016 |
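The conversion step above maps a sector-relative set index to an absolute one; a sketch assuming the conversion information is a per-sector base offset and size:

```python
# conversion information (assumed shape): sector id -> (first absolute set, set count)
conversion_info = {0: (0, 64), 1: (64, 32), 2: (96, 32)}

def to_absolute_set(sector_id: int, relative_index: int) -> int:
    base, size = conversion_info[sector_id]
    return base + (relative_index % size)   # wrap inside the sector's sets

assert to_absolute_set(1, 5) == 69   # sector 1 starts at absolute set 64
```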
20160140052 | System and Method for Efficient Cache Utility Curve Construction and Cache Allocation - Interaction is evaluated between a computer system cache and at least one entity that submits a stream of references corresponding to location identifiers of data storage locations. The reference stream is spatially sampled by comparing a hash value of each reference with a threshold value and selecting only those references whose hash value meets a selection criterion. Cache utility values are then compiled for those references. In some embodiments, the compiled cache values may then be corrected for accuracy as a function of statistics of those location identifiers over the entire stream of references and of the sampled references whose hash values satisfied the selection criterion. Alternatively, a plurality of caching configurations is selected and the selected references are applied as inputs to a plurality of caching simulations, each corresponding to a different caching configuration. A resulting set of cache utility values is then computed for each caching simulation. | 05-19-2016 |
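The spatial sampling step above admits a compact sketch: keep a reference only when the hash of its location identifier falls below a threshold, so the sampling rate is threshold/modulus regardless of access frequency. Constants are illustrative:

```python
import hashlib

MODULUS = 1 << 24
THRESHOLD = MODULUS // 100    # ~1% spatial sampling rate (assumed)

def sampled(location_id: str) -> bool:
    # Hash the location identifier and compare against the threshold.
    h = int.from_bytes(hashlib.sha1(location_id.encode()).digest()[:3], "big")
    return (h % MODULUS) < THRESHOLD

# Only sampled references feed the cache-utility statistics or simulations.
refs = ["blk%d" % i for i in range(100000)]
tracked = [r for r in refs if sampled(r)]
print(len(tracked), "of", len(refs), "references kept")
```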
20160147664 | DYNAMIC PARTIAL BLOCKING OF A CACHE ECC BYPASS - An aspect includes receiving a fetch request for a data block at a cache memory system that includes cache memory that is partitioned into a plurality of cache data ways including a cache data way that contains the data block. The data block is fetched and it is determined whether the in-line ECC checking and correcting should be bypassed. The determining is based on a bypass indicator corresponding to the cache data way. Based on determining that in-line ECC checking and correcting should be bypassed, returning the fetched data block to the requestor and performing an ECC process for the fetched data block subsequent to returning the fetched data block to the requestor. Based on determining that in-line ECC checking and correcting should not be bypassed, performing the ECC process for the fetched data block and returning the fetched data block to the requestor subsequent to performing the ECC process. | 05-26-2016 |
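The bypass decision above, sketched with callables standing in for the ECC pipeline and the return path; the data-way layout is an assumption:

```python
# The per-way bypass indicator decides whether the fetched block is returned
# before or after in-line ECC checking and correcting.

def fetch(way, address, ecc_check, deliver):
    block = way["data"][address]
    if way["bypass_ecc"]:
        deliver(block)     # return data immediately...
        ecc_check(block)   # ...then check/correct, off the critical path
    else:
        ecc_check(block)   # check and correct first
        deliver(block)     # then return the (possibly corrected) block
```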
20160179677 | RESOLVING MEMORY ACCESSES CROSSING CACHE LINE BOUNDARIES | 06-23-2016 |
20160378665 | TECHNOLOGIES FOR PREDICTIVE FILE CACHING AND SYNCHRONIZATION - Technologies for predictive caching include a computing device to receive sensor data generated by one or more sensors of the computing device and determine a device context of the computing device based on the sensor data. Based on the device context, the computing device determines a file to cache that has similar characteristics to another file recently accessed by a user of the computing device. The computing device includes a file cache with a first partition to store files identified to have similar characteristics to files recently accessed by a user and a second partition to store files identified based on access patterns of the user. The computing device stores the determined file to the first partition. | 12-29-2016 |