Patent application number | Description | Published |
20100318744 | DIFFERENTIAL CACHING MECHANISM BASED ON MEDIA I/O SPEED - A method for allocating space in a cache based on media I/O speed is disclosed herein. In certain embodiments, such a method may include storing, in a read cache, cache entries associated with faster-responding storage devices and cache entries associated with slower-responding storage devices. The method may further include implementing an eviction policy in the read cache. This eviction policy may include demoting, from the read cache, the cache entries of faster-responding storage devices faster than the cache entries of slower-responding storage devices, all other variables being equal. In certain embodiments, the eviction policy may further include demoting, from the read cache, cache entries having a lower read-hit ratio faster than cache entries having a higher read-hit ratio, all other variables being equal. A corresponding computer program product and apparatus are also disclosed and claimed herein. | 12-16-2010 |
20100325356 | NONVOLATILE STORAGE THRESHOLDING - Embodiments for facilitating data transfer between a nonvolatile storage (NVS) write cache and a pool of target storage devices are provided. Each target storage device in the pool of target storage devices is determined as one of a hard disk drive (HDD) and a solid-state drive (SSD) device, and classified into one of a SSD rank group and a HDD rank group. If no data is received in the NVS write cache for a predetermined time to be written to a target storage device classified in the SSD rank group, a threshold of available space in the NVS write cache is set to allocate at least a majority of the available space to the HDD rank group. Upon receipt of a write request for the SSD rank group, the threshold of the available space is reduced to allocate a greater portion of the available space to the SSD rank group. | 12-23-2010 |
20110087837 | SECONDARY CACHE FOR WRITE ACCUMULATION AND COALESCING - A method for efficiently using a large secondary cache is disclosed herein. In certain embodiments, such a method may include accumulating, in a secondary cache, a plurality of data tracks. These data tracks may include modified data and/or unmodified data. The method may determine if a subset of the plurality of data tracks makes up a full stride. In the event the subset makes up a full stride, the method may destage the subset from the secondary cache. By destaging full strides, the method reduces the number of disk operations that are required to destage data from the secondary cache. A corresponding computer program product and apparatus are also disclosed and claimed herein. | 04-14-2011 |
20110191534 | DYNAMIC MANAGEMENT OF DESTAGE TASKS IN A STORAGE CONTROLLER - Method, system, and computer program product embodiments for facilitating data transfer from a write cache and NVS via a device adapter to a pool of storage devices by a processor or processors are provided. The processor(s) adaptively varies the destage rate based on the current occupancy of the NVS for a particular storage device and stage activity related to that storage device. The stage activity includes one or more of the storage device stage activity, device adapter stage activity, device adapter utilized bandwidth and the read/write speed of the storage device. These factors are generally associated with read response time in the event of a cache miss and not ordinarily associated with dynamic management of the destage rate. This combination maintains the desired overall occupancy of the NVS while improving response time performance. | 08-04-2011 |
20110196987 | COMPRESSION ON THIN PROVISIONED VOLUMES USING EXTENT BASED MAPPING - Method, system, and computer program product embodiments for facilitating data compression are provided. A set of logical extents, each having compressed logical tracks of data, is mapped to a head physical extent and, if the head physical extent is determined to have been filled, to at least one overflow extent having spatial proximity to the head physical extent. Pursuant to at least one subsequent write operation and destage operation, the at least one subsequent write operation and destage operation determined to be associated with the head physical extent, the write operation is mapped to one of the head physical extent, the at least one overflow extent, and an additional extent having spatial proximity to the at least one overflow extent. | 08-11-2011 |
20110202708 | Integrating A Flash Cache Into Large Storage Systems - An I/O enclosure module is provided with one or more I/O enclosures having a plurality of slots for receiving electronic devices. A host adapter is connected to a first slot of the I/O enclosure module and is configured to connect a host to the I/O enclosure. A device adapter is connected to a second slot of the I/O enclosure module and is configured to connect a storage device to the I/O enclosure module. A flash cache is connected to a third slot of the I/O enclosure module and includes a flash-based memory configured to cache data associated with data requests handled through the I/O enclosure module. A primary processor complex manages data requests handled through the I/O enclosure module by communicating with the host adapter, device adapter, and flash cache to manage the data requests. | 08-18-2011 |
20120079187 | MANAGEMENT OF WRITE CACHE USING STRIDE OBJECTS - Method, system, and computer program product embodiments for, in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit, identifying working data on a stride basis by a processor device are provided. A multi-update bit is established for each stride in a modified cache. The multi-update bit is adapted to indicate at least one track in a working set. A schedule of destage scans is configured based on a plurality of levels of urgency. A destage operation is performed based on at least one of a number of strides examined by the destage scans, whether the multi-update bit is set, and whether an emergency level of the plurality of levels of urgency is active. | 03-29-2012 |
20120079199 | INTELLIGENT WRITE CACHING FOR SEQUENTIAL TRACKS - Method, system, and computer program product embodiments for, in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit, write caching for sequential tracks by a processor device are provided. If a first track is determined to be sequential, and an earlier track is also determined to be sequential, a temporal bit associated with the earlier track is cleared to allow for destage of data of the earlier track. If a temporal bit for one of a plurality of additional tracks in one of a plurality of strides in a modified cache is determined to be not set, a stride associated with the one of the plurality of additional tracks is selected for a destage operation. If the NVS exceeds a predetermined storage threshold, a predetermined one of the plurality of strides is selected for the destage operation. | 03-29-2012 |
20120089795 | MULTIPLE INCREMENTAL VIRTUAL COPIES - Provided are techniques for, in response to establishing each incremental virtual copy from a source to a target, creating a target change recording structure for the target. While performing destage to a source data block at the source, it is determined that there is at least one incremental virtual copy target for this source data block. For each incremental virtual copy relationship where the source data block is newer than the incremental virtual copy relationship and an indicator is set in a target inheritance structure on the target for a corresponding target data block, the source data block is copied to each corresponding target data block, and an indicator is set in each target change recording structure on each target for the target data block corresponding to the source data block being destaged. | 04-12-2012 |
20120151140 | SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE - Systems and methods for destaging storage tracks from cache are provided. One system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes writing data to the plurality of storage tracks and incrementing the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track. The method further includes scanning each of the storage tracks in each of multiple scan cycles, decrementing each multi-bit counter each scan cycle, and destaging each storage track having a zero count. Also provided are physical computer storage mediums including a computer program product for performing the above method. | 06-14-2012 |
20120151147 | SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS - Systems and methods for managing destage conflicts in cache are provided. One system includes a cache partitioned into multiple ranks configured to store multiple storage tracks and a processor coupled to the cache. The processor is configured to perform the following method. One method includes allocating an amount of storage space in the cache to each rank and monitoring a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank. The method further includes destaging storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank. Also provided are physical computer storage mediums including code that, when executed by a processor, cause the processor to perform the above method. | 06-14-2012 |
20120151148 | SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Systems and methods for background destaging storage tracks from cache when one or more hosts are idle are provided. One system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. Also provided are physical computer storage mediums including a computer program product for performing the above method. | 06-14-2012 |
20120151151 | SYSTEMS AND METHODS FOR MANAGING CACHE DESTAGE SCAN TIMES - Systems and methods for managing destage scan times in a cache are provided. One system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. Physical computer storage mediums including a computer program product for performing the above method are also provided. | 06-14-2012 |
20120185648 | STORAGE IN TIERED ENVIRONMENT FOR COLDER DATA SEGMENTS - Exemplary method, system, and computer program embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap. | 07-19-2012 |
20120191904 | SECONDARY CACHE FOR WRITE ACCUMULATION AND COALESCING - A method for efficiently using a large secondary cache is disclosed herein. In certain embodiments, such a method may include accumulating, in a secondary cache, a plurality of data tracks. These data tracks may include modified data and/or unmodified data. The method may determine if a subset of the plurality of data tracks makes up a full stride. In the event the subset makes up a full stride, the method may destage the subset from the secondary cache. By destaging full strides, the method reduces the number of disk operations that are required to destage data from the secondary cache. A corresponding computer program product and apparatus are also disclosed herein. | 07-26-2012 |
20120198148 | ADAPTIVE PRESTAGING IN A STORAGE CONTROLLER - In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Still further, a cache prestaging operation in accordance with further aspects may decrease one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from prestaged tracks being demoted before they are used. Conversely, a cache prestaging operation in accordance with another aspect may increase one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from waiting for a stage to complete. In yet another aspect, the prestage trigger may not be limited by the prestage amount. Instead, the prestage trigger may be permitted to expand as conditions warrant by prestaging additional tracks, thereby effectively increasing the potential range for the prestage trigger. Other features and aspects may be realized, depending upon the particular application. | 08-02-2012 |
20120198150 | ASSIGNING DEVICE ADAPTORS AND BACKGROUND TASKS TO USE TO COPY SOURCE EXTENTS TO TARGET EXTENTS IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for assigning device adaptors and background tasks to use to copy source extents to target extents in a copy relationship. A relation is provided of a plurality of source extents in source ranks to copy to a plurality of target extents in target ranks in the storage system. One target rank in the relation is used to determine an order in which the target ranks in the relation are selected to register for copying. For each selected target rank in the relation selected according to the determined order, an iteration of a registration operation is performed to register the selected target rank and a selected source rank copied to the selected target rank in the relation. The registration operation comprises indicating in a device adaptor assignment data structure a source device adaptor and target device adaptor to use to copy the selected source rank to the selected target rank and adding an entry to a priority queue for the relation for the selected target rank. The selected source rank is copied to the selected target rank using the source and target device adaptors indicated in the device adaptor assignment data structure for the selected target rank, in response to processing the entry added to the priority queue for the selected target rank. | 08-02-2012 |
20120203983 | COMPRESSION ON THIN PROVISIONED VOLUMES USING EXTENT BASED MAPPING - A set of logical extents, each having compressed logical tracks of data, is mapped to a head physical extent and, if the head physical extent is determined to have been filled, to at least one overflow extent having spatial proximity to the head physical extent. Pursuant to at least one subsequent write operation and destage operation, the at least one subsequent write operation and destage operation determined to be associated with the head physical extent, the write operation is mapped to one of the head physical extent, the at least one overflow extent, and an additional extent having spatial proximity to the at least one overflow extent. | 08-09-2012 |
20120216009 | SOURCE-TARGET RELATIONS MAPPING - A data preservation function is provided which, in one embodiment, includes mapping in a plurality of maps for a target storage device, map extent ranges of each map, to corresponding target extent ranges of storage locations on the target storage device. Usage of a particular map extent range by a relationship between a source extent range of storage locations on a source storage device containing data to be preserved in the source extent range, and the target extent range mapped to the particular map extent range, may be indicated by the map. In another aspect, in response to receipt of a data preservation command, a data preservation operation is performed including determining whether a map indicates availability of a map extent range mapped to the identified target extent range. Upon determining that a particular map indicates availability of a map extent range mapped to the identified target extent range, a relationship between the identified source extent range and the identified target extent range is established. In yet another aspect, upon determining that no map indicates availability of a map extent range mapped to the identified target extent range, establishing of a relationship between the identified source extent range and the identified target extent range may be delayed until it is determined that a particular map indicates availability of a map extent range mapped to the identified target extent range. Other features and aspects may be realized, depending upon the particular application. | 08-23-2012 |
20120221823 | MULTIPLE INCREMENTAL VIRTUAL COPIES - Provided are techniques for, in response to establishing each incremental virtual copy from a source to a target, creating a target change recording structure for the target. While performing destage to a source data block at the source, it is determined that there is at least one incremental virtual copy target for this source data block. For each incremental virtual copy relationship where the source data block is newer than the incremental virtual copy relationship and an indicator is set in a target inheritance structure on the target for a corresponding target data block, the source data block is copied to each corresponding target data block, and an indicator is set in each target change recording structure on each target for the target data block corresponding to the source data block being destaged. | 08-30-2012 |
20120233121 | DELETING RELATIONS BETWEEN SOURCES AND SPACE-EFFICIENT TARGETS IN MULTI-TARGET ARCHITECTURES - A method for deleting a relation between a source and a target in a multi-target architecture is described. The multi-target architecture includes a source and multiple space-efficient (SE) targets mapped thereto. In one embodiment, such a method includes initially identifying a relation for deletion from the multi-target architecture. A space-efficient (SE) target associated with the relation is then identified. A mapping structure maps data in logical tracks of the SE target to physical tracks of a repository. The method then identifies a sibling SE target that inherits data from the SE target. Once the SE target and the sibling SE target are identified, the method modifies the mapping structure to map the data in the physical tracks of the repository to the logical tracks of the sibling SE target. The relation is then deleted between the source and the SE target. A corresponding computer program product is also described herein. | 09-13-2012 |
20120233136 | DELETING RELATIONS BETWEEN SOURCES AND SPACE-EFFICIENT TARGETS IN MULTI-TARGET ARCHITECTURES - A method for deleting a relation between a source and a target in a multi-target architecture is described. The multi-target architecture includes a source and multiple space-efficient (SE) targets mapped thereto. In one embodiment, such a method includes initially identifying a relation for deletion from the multi-target architecture. A space-efficient (SE) target associated with the relation is then identified. A mapping structure maps data in logical tracks of the SE target to physical tracks of a repository. The method then identifies a sibling SE target that inherits data from the SE target. Once the SE target and the sibling SE target are identified, the method modifies the mapping structure to map the data in the physical tracks of the repository to the logical tracks of the sibling SE target. The relation is then deleted between the source and the SE target. | 09-13-2012 |
20120233404 | DELETING RELATIONS IN MULTI-TARGET, POINT-IN-TIME-COPY ARCHITECTURES WITH DATA DEDUPLICATION - A method for deleting a relation between a source and a target in a multi-target architecture is described. The multi-target architecture includes a source and multiple targets mapped thereto. In one embodiment, such a method includes initially identifying a relation for deletion from the multi-target architecture. A target associated with the relation is then identified. The method then identifies a sibling target that inherits data from the target. Once the target and the sibling target are identified, the method copies the data from the target to the sibling target. The relation between the source and the target is then deleted. A corresponding computer program product is also disclosed and claimed herein. | 09-13-2012 |
20120233408 | INTELLIGENT WRITE CACHING FOR SEQUENTIAL TRACKS - Write caching for sequential tracks is performed by a processor device in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit. If a first track is determined to be sequential, and an earlier track is also determined to be sequential, a temporal bit associated with the earlier track is cleared to allow for destage of data of the earlier track. If a temporal bit for one of a plurality of additional tracks in one of a plurality of strides in a modified cache is determined to be not set, a stride associated with the one of the plurality of additional tracks is selected for a destage operation. If the NVS exceeds a predetermined storage threshold, a predetermined one of the plurality of strides is selected for the destage operation. | 09-13-2012 |
20120233416 | MULTI-TARGET, POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a source volume in a multi-target architecture is described. The multi-target architecture includes a source volume and multiple target volumes mapped thereto. In one embodiment, such a method includes copying data in a track of the source volume to a corresponding track of a target volume (target x). The method enables one or more sibling target volumes (siblings) mapped to the source volume to inherit the data from the target x. When the data is successfully copied to the target x, the method performs a write to the track of the source volume. Other methods for reading and writing data to volumes in the multi-target architecture are also described. | 09-13-2012 |
20120233421 | CYCLIC POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a volume x in a cyclic point-in-time-copy architecture is described. In one embodiment, such a method includes determining whether the volume x has a child volume. The method then determines whether the target bit maps (TBMs) of both the volume x and the child volume are set. If the TBMs are set, the method finds a higher source (HS) volume from which to copy the desired data to the child volume. Once the HS volume is found, the method determines whether the HS volume and the child volume are the same volume. If the HS volume and the child volume are not the same volume, the method copies the data from the HS volume to the child volume. The method then performs the write to the volume x. | 09-13-2012 |
20120233429 | CASCADED, POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a volume x in a cascaded architecture is described. In one embodiment, such a method includes determining whether the volume x has a child volume, wherein each of the volume x and the child volume have a target bit map (TBM) associated therewith. The method then determines whether the TBMs of both the volume x and the child volume are set. If the TBMs are set, the method finds a higher source (HS) volume from which to copy the desired data to the child volume. Finding the HS volume includes travelling up the cascaded architecture until the source of the data is found. Once the HS volume is found, the method copies the data from the HS volume to the child volume and performs the write to the volume x. A method for performing a read is also disclosed herein. | 09-13-2012 |
20120233430 | CYCLIC POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a volume x in a cyclic point-in-time-copy architecture is described. In one embodiment, such a method includes determining whether the volume x has a child volume. The method then determines whether the target bit maps (TBMs) of both the volume x and the child volume are set. If the TBMs are set, the method finds a higher source (HS) volume from which to copy the desired data to the child volume. Once the HS volume is found, the method determines whether the HS volume and the child volume are the same volume. If the HS volume and the child volume are not the same volume, the method copies the data from the HS volume to the child volume. The method then performs the write to the volume x. A corresponding computer program product is also described. | 09-13-2012 |
20120254122 | NEAR CONTINUOUS SPACE-EFFICIENT DATA PROTECTION - A method for providing rolling continuous data protection of source data is disclosed. In one embodiment, such a method includes enabling a user to select source data and establish a first interval when point-in-time copies of the source data are generated. The method further enables the user to specify a first number of point-in-time copies to retain at the first interval. The method further enables the user to specify a second number of point-in-time copies to retain at a second interval, wherein the second interval is an (n≧2) multiple of the first interval. The method further enables the user to specify a third number of point-in-time copies to retain at a third interval, wherein the third interval is an (n≧2) multiple of the second interval. A corresponding apparatus and computer program product are also disclosed. | 10-04-2012 |
20120254539 | SYSTEMS AND METHODS FOR MANAGING CACHE DESTAGE SCAN TIMES - A system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. | 10-04-2012 |
20120254544 | SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS - A system includes a cache partitioned into multiple ranks configured to store multiple storage tracks and a processor coupled to the cache. The processor is configured to perform the following method. One method includes allocating an amount of storage space in the cache to each rank and monitoring a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank. The method further includes destaging storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank. | 10-04-2012 |
20120254545 | SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - A system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. | 10-04-2012 |
20120254547 | MANAGING METADATA FOR DATA IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for managing metadata for data in a copy relationship copied from a source storage to a target storage. Information is maintained on a copy relationship of source data in the source storage and target data in the target storage. The source data is copied from the source storage to the cache to copy to target data in the target storage indicated in the copy relationship. Target metadata is generated for the target data comprising the source data copied to the cache. An access request to requested target data comprising the target data in the cache is processed and access is provided to the requested target data in the cache. A determination is made as to whether the requested target data in the cache has been destaged to the target storage. The target metadata for the requested target data in the target storage is discarded in response to determining that the requested target data in the cache has not been destaged to the target storage. | 10-04-2012 |
20120260043 | FABRICATING KEY FIELDS - Exemplary methods, computer systems, and computer program products for fabricating key fields by a processor device in a computer environment are provided. In one embodiment, the computer environment is configured for, as an alternative to reading Count-Key-Data (CKD) data in order to change the key field, providing a hint to fabricate a new key field, thereby overwriting a previous key field and updating the CKD data. | 10-11-2012 |
20120260044 | SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE - A system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes writing data to the plurality of storage tracks and incrementing the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track. The method further includes scanning each of the storage tracks in each of multiple scan cycles, decrementing each multi-bit counter each scan cycle, and destaging each storage track having a zero count. | 10-11-2012 |
20120265766 | COMPRESSION ON THIN PROVISIONED VOLUMES USING EXTENT BASED MAPPING - For facilitating data compression, a set of logical extents, each having compressed logical tracks of data, is mapped to a head physical extent and, if the head physical extent is determined to have been filled, to at least one overflow extent having spatial proximity to the head physical extent. Pursuant to at least one subsequent write operation and destage operation, the at least one subsequent write operation and destage operation determined to be associated with the head physical extent, the write operation is mapped to one of the head physical extent, the at least one overflow extent, and an additional extent having spatial proximity to the at least one overflow extent. | 10-18-2012 |
20120265933 | STRIDE BASED FREE SPACE MANAGEMENT ON COMPRESSED VOLUMES - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks, and a determination is made as to whether all of the one or more tracks can be stored in one selected stride of the plurality of strides. In response to determining that all of the one or more tracks can be stored in the one selected stride, the one or more tracks are written in the one selected stride of the plurality of strides. | 10-18-2012 |
20120265934 | WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride. | 10-18-2012 |
20120300329 | MAGNETIC DISK DRIVE USING A NON-VOLATILE STORAGE DEVICE AS CACHE FOR MODIFIED TRACKS - Provided are a computer program product, system, and method for a magnetic disk drive. The disk drive has at least one disk platter having at least one recordable disk surface having an areal density of at least 200 gigabits per square inch. Either a diameter of the at least one disk platter is greater than 3.5 inches or the at least one disk platter rotates at less than 5400 RPMs. A read/write head reads and writes tracks of data with respect to the at least one disk surface. Modified tracks from write requests to write to the at least one disk surface on the at least one disk platter are cached in a non-volatile storage device for caching modified tracks. Modified tracks are cached in the non-volatile storage device to later destage to the at least one disk surface. | 11-29-2012 |
20120300336 | MAGNETIC DISK DRIVE USING A NON-VOLATILE STORAGE DEVICE AS CACHE FOR MODIFIED TRACKS - Provided are a computer program product, system, and method for a magnetic disk drive. The disk drive has at least one disk platter having at least one recordable disk surface having an areal density of at least 200 gigabits per square inch. Either a diameter of the at least one disk platter is greater than 3.5 inches or the at least one disk platter rotates at less than 5400 RPMs. A read/write head reads and writes tracks of data with respect to the at least one disk surface. Modified tracks from write requests to write to the at least one disk surface on the at least one disk platter are cached in a non-volatile storage device for caching modified tracks. Modified tracks are cached in the non-volatile storage device to later destage to the at least one disk surface. | 11-29-2012 |
20120303861 | POPULATING STRIDES OF TRACKS TO DEMOTE FROM A FIRST CACHE TO A SECOND CACHE - Provided are a computer program product, system, and method for populating strides of tracks to demote from a first cache to a second cache. A first cache maintains modified and unmodified tracks from a storage system subject to Input/Output (I/O) requests. A determination is made to demote tracks from the first cache. A determination is made as to whether there are enough tracks ready to demote to form a stride, wherein tracks are written to a second cache in strides defined for a Redundant Array of Independent Disks (RAID) configuration. A stride is populated with tracks ready to demote in response to determining that there are enough tracks ready to demote to form the stride. The stride of tracks to demote from the first cache is promoted to the second cache. The tracks in the second cache that are modified are destaged to the storage system. | 11-29-2012 |
20120303862 | CACHING DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE - Provided are a computer program product, system, and method for caching data in a storage system having multiple caches. A sequential access storage device includes a sequential access storage medium and a non-volatile storage device integrated in the sequential access storage device, received modified tracks are cached in the non-volatile storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A spatial index indicates the modified tracks in the non-volatile storage device in an ordering based on their physical location in the sequential access storage medium. The modified tracks are destaged from the non-volatile storage device by comparing a current position of a write head to physical locations of the modified tracks on the sequential access storage medium indicated in the spatial index to select a modified track to destage from the non-volatile storage device to the storage device. | 11-29-2012 |
20120303863 | USING AN ATTRIBUTE OF A WRITE REQUEST TO DETERMINE WHERE TO CACHE DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE - Provided are a computer program product, system, and method for using an attribute of a write request to determine where to cache data in a storage system having multiple caches including non-volatile storage cache in a sequential access storage device. Received modified tracks are cached in the non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks. A write request having modified tracks is received. A determination is made as to whether an attribute of the received write request satisfies a condition. The received modified tracks for the write request are cached in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition. A destage request is added to a request queue for the received write request having the determined attribute not satisfying the condition. | 11-29-2012 |
20120303864 | CACHE MANAGEMENT OF TRACKS IN A FIRST CACHE AND A SECOND CACHE FOR A STORAGE - Provided are a computer program product, system, and method for cache management of tracks in a first cache and a second cache for a storage. The first cache maintains modified and unmodified tracks in the storage subject to Input/Output (I/O) requests. Modified and unmodified tracks are demoted from the first cache. The modified and the unmodified tracks demoted from the first cache are promoted to the second cache. The unmodified tracks demoted from the second cache are discarded. The modified tracks in the second cache that are at proximate physical locations on the storage device are grouped and the grouped modified tracks are destaged from the second cache to the storage device. | 11-29-2012 |
20120303869 | HANDLING HIGH PRIORITY REQUESTS IN A SEQUENTIAL ACCESS STORAGE DEVICE HAVING A NON-VOLATILE STORAGE CACHE - Modified tracks for write requests to a sequential access storage medium in a sequential access storage device are cached in a non-volatile storage, which is a faster access device than the sequential access storage medium. A request queue includes destage requests to destage the modified tracks in the non-volatile storage device to the sequential access storage medium and read requests to access read requested tracks from the sequential access storage medium. A comparison is made of a current position of a read/write mechanism with respect to physical locations on the sequential access storage medium of the tracks subject to the destage requests indicated in the request queue. A determination is made of one of the destage requests to process based on the comparison. The modified track for the determined destage request is written from the non-volatile storage device to the sequential access storage medium. | 11-29-2012 |
20120303872 | CACHE MANAGEMENT OF TRACKS IN A FIRST CACHE AND A SECOND CACHE FOR A STORAGE - Provided are a computer program product, system, and method for cache management of tracks in a first cache and a second cache for a storage. The first cache maintains modified and unmodified tracks in the storage subject to Input/Output (I/O) requests. Modified and unmodified tracks are demoted from the first cache. The modified and the unmodified tracks demoted from the first cache are promoted to the second cache. The unmodified tracks demoted from the second cache are discarded. The modified tracks in the second cache that are at proximate physical locations on the storage device are grouped and the grouped modified tracks are destaged from the second cache to the storage device. | 11-29-2012 |
20120303875 | POPULATING STRIDES OF TRACKS TO DEMOTE FROM A FIRST CACHE TO A SECOND CACHE - Provided are a computer program product, system, and method for populating strides of tracks to demote from a first cache to a second cache. A first cache maintains modified and unmodified tracks from a storage system subject to Input/Output (I/O) requests. A determination is made to demote tracks from the first cache. A determination is made as to whether there are enough tracks ready to demote to form a stride, wherein tracks are written to a second cache in strides defined for a Redundant Array of Independent Disks (RAID) configuration. A stride is populated with tracks ready to demote in response to determining that there are enough tracks ready to demote to form the stride. The stride of tracks to demote from the first cache is promoted to the second cache. The tracks in the second cache that are modified are destaged to the storage system. | 11-29-2012 |
20120303876 | CACHING DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE - Provided are a computer program product, system, and method for caching data in a storage system having multiple caches. A sequential access storage device includes a sequential access storage medium and a non-volatile storage device integrated in the sequential access storage device, received modified tracks are cached in the non-volatile storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A spatial index indicates the modified tracks in the non-volatile storage device in an ordering based on their physical location in the sequential access storage medium. The modified tracks are destaged from the non-volatile storage device by comparing a current position of a write head to physical locations of the modified tracks on the sequential access storage medium indicated in the spatial index to select a modified track to destage from the non-volatile storage device to the storage device. | 11-29-2012 |
20120303877 | USING AN ATTRIBUTE OF A WRITE REQUEST TO DETERMINE WHERE TO CACHE DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE - Provided are a computer program product, system, and method for using an attribute of a write request to determine where to cache data in a storage system having multiple caches including non-volatile storage cache in a sequential access storage device. Received modified tracks are cached in the non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks. A write request having modified tracks is received. A determination is made as to whether an attribute of the received write request satisfies a condition. The received modified tracks for the write request are cached in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition. A destage request is added to a request queue for the received write request having the determined attribute not satisfying the condition. | 11-29-2012 |
20120303888 | DESTAGING OF WRITE AHEAD DATA SET TRACKS - Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). | 11-29-2012 |
20120303895 | HANDLING HIGH PRIORITY REQUESTS IN A SEQUENTIAL ACCESS STORAGE DEVICE HAVING A NON-VOLATILE STORAGE CACHE - Provided are a computer program product, system, and method for handling high priority requests in a sequential access storage device. Received modified tracks for write requests are cached in a non-volatile storage device integrated with the sequential access storage device. A destage request is added to a request queue for a received write request having modified tracks for the sequential access storage medium cached in the non-volatile storage device. A read request indicating a priority is received. A determination is made of a priority of the read request as having a first priority or a second priority. The read request is added to the request queue in response to determining that the determined priority is the first priority. The read request is processed at a higher priority than the read and destage requests in the request queue in response to determining that the determined priority is the second priority. | 11-29-2012 |
20120303898 | MANAGING UNMODIFIED TRACKS MAINTAINED IN BOTH A FIRST CACHE AND A SECOND CACHE - Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache. | 11-29-2012 |
20120303899 | MANAGING TRACK DISCARD REQUESTS TO INCLUDE IN DISCARD TRACK MESSAGES - Provided are a computer program product, system, and method for managing track discard requests to include in discard track messages. A backup copy of a track in a cache is maintained in the cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. In response to detecting that a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent indicating the tracks indicated in the queued predetermined number of track discard requests to the cache backup device instructing the cache backup device to discard the tracks indicated in the discard multiple tracks message. In response to determining a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode. | 11-29-2012 |
20120303904 | MANAGING UNMODIFIED TRACKS MAINTAINED IN BOTH A FIRST CACHE AND A SECOND CACHE - Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache. | 11-29-2012 |
20120324171 | Apparatus and Method to Copy Data - An apparatus and method for copying data are disclosed. A data track to be replicated using a peer-to-peer remote copy (PPRC) operation is identified. The data track is encoded in a non-transitory computer readable medium disposed in a first data storage system. At a first time, a determination of whether the data track is stored in a data cache is made. At a second time, the data track is replicated to a non-transitory computer readable medium disposed in a second data storage system. The second time is later than the first time. If the data track was stored in the data cache at the first time, a cache manager is instructed to not demote the data track from the data cache. If the data track was not stored in the data cache at the first time, the cache manager is instructed that the data track may be demoted. | 12-20-2012 |
20120324173 | EFFICIENT DISCARD SCANS - Exemplary method, system, and computer program product embodiments for performing a discard scan operation are provided. In one embodiment, by way of example only, a plurality of tracks is examined for meeting criteria for a discard scan. In lieu of waiting for a completion of a track access operation, at least one of the plurality of tracks is marked for demotion. An additional discard scan may be subsequently performed for tracks not previously demoted. The discard and additional discard scans may proceed in two phases. Additional system and computer program product embodiments are disclosed and provide related advantages. | 12-20-2012 |
20130007372 | MANAGEMENT OF WRITE CACHE USING STRIDE OBJECTS - Method, system, and computer program product embodiments for, in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit, identifying working data on a stride basis by a processor device are provided. A multi-update bit is established for each of a plurality of strides in a modified cache, wherein the multi-update bit is adapted to indicate a corresponding stride is part of at least one track in a working set that refers to a group of frequently updated tracks. The plurality of strides are scanned based on a schedule to identify tracks for destaging. An operation to destage is performed on a selected track identified during the scanning, if the multi-update bit of a selected stride on the selected track is set to indicate the selected track is part of the working set and if the NVS is about 90% full or greater. | 01-03-2013 |
20130024613 | PREFETCHING DATA TRACKS AND PARITY DATA TO USE FOR DESTAGING UPDATED TRACKS - Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks. | 01-24-2013 |
20130024624 | PREFETCHING TRACKS USING MULTIPLE CACHES - Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request. | 01-24-2013 |
20130024625 | PREFETCHING TRACKS USING MULTIPLE CACHES - Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request. | 01-24-2013 |
20130024626 | PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP - A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage. | 01-24-2013 |
20130024627 | PREFETCHING DATA TRACKS AND PARITY DATA TO USE FOR DESTAGING UPDATED TRACKS - Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks. | 01-24-2013 |
20130024628 | EFFICIENT TRACK DESTAGE IN SECONDARY STORAGE - Exemplary method, system, and computer program product embodiments for more efficient track destage in secondary storage are provided. In one embodiment, by way of example only, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. Additional system and computer program product embodiments are disclosed and provide related advantages. | 01-24-2013 |
20130031295 | ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records. | 01-31-2013 |
20130031297 | ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records. | 01-31-2013 |
20130080704 | MANAGEMENT OF POINT-IN-TIME COPY RELATIONSHIP FOR EXTENT SPACE EFFICIENT VOLUMES - A storage controller receives a request to establish a point-in-time copy operation by placing a space efficient source volume in a point-in-time copy relationship with a space efficient target volume, wherein subsequent to being established the point-in-time copy operation is configurable to consistently copy the space efficient source volume to the space efficient target volume at a point in time. A determination is made as to whether any track of an extent is staging into a cache from the space efficient target volume or destaging from the cache to the space efficient target volume. In response to a determination that at least one track of the extent is staging into the cache from the space efficient target volume or destaging from the cache to the space efficient target volume, release of the extent from the space efficient target volume is avoided. | 03-28-2013 |
20130111106 | PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE | 05-02-2013 |
20130111131 | DYNAMICALLY ADJUSTED THRESHOLD FOR POPULATION OF SECONDARY CACHE | 05-02-2013 |
20130111133 | DYNAMICALLY ADJUSTED THRESHOLD FOR POPULATION OF SECONDARY CACHE | 05-02-2013 |
20130111134 | MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS | 05-02-2013 |
20130111146 | SELECTIVE POPULATION OF SECONDARY CACHE EMPLOYING HEAT METRICS | 05-02-2013 |
20130111160 | SELECTIVE SPACE RECLAMATION OF DATA STORAGE MEMORY EMPLOYING HEAT AND RELOCATION METRICS | 05-02-2013 |
20130124803 | PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP - A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage. | 05-16-2013 |
20130132664 | PERIODIC DESTAGES FROM INSIDE AND OUTSIDE DIAMETERS OF DISKS TO IMPROVE READ RESPONSE TIMES - A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read response times for executing the command has to be satisfied. A destage application that destages tracks based at least on recency of usage and spatial location of the tracks is executed, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks. Tracks are destaged from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command is satisfied. | 05-23-2013 |
20130132667 | ADJUSTMENT OF DESTAGE RATE BASED ON READ AND WRITE RESPONSE TIME REQUIREMENTS - A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command has to be satisfied. The storage controller determines ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type. The destage rate corresponding to the ranks of the first type is adjusted to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command is satisfied. | 05-23-2013 |
20130145100 | MANAGING METADATA FOR DATA IN A COPY RELATIONSHIP - Provided is a method for managing metadata for data in a copy relationship copied from a source storage to a target storage. Information is maintained on a copy relationship of source data in the source storage and target data in the target storage. The source data is copied from the source storage to the cache to copy to target data in the target storage indicated in the copy relationship. Target metadata is generated for the target data comprising the source data copied to the cache. An access request to requested target data comprising the target data in the cache is processed and access is provided to the requested target data in the cache. The target metadata for the requested target data in the target storage is discarded in response to determining that the requested target data in the cache has not been destaged to the target storage. | 06-06-2013 |
20130166837 | DESTAGING OF WRITE AHEAD DATA SET TRACKS - Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X). | 06-27-2013 |
20130166844 | STORAGE IN TIERED ENVIRONMENT FOR COLDER DATA SEGMENTS - Exemplary embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap. | 06-27-2013 |
20130173878 | SOURCE-TARGET RELATIONS MAPPING - A data preservation function is provided which, in one embodiment, includes indicating by a map, usage of a particular map extent range by a relationship between a source extent range of storage locations on a source storage device containing data to be preserved in the source extent range, and a target extent range mapped to the map particular extent range. In another aspect, in response to receipt of a data preservation command, a data preservation operation is performed including determining whether a map indicates availability of a map extent range mapped to the identified target extent range. Upon determining that a particular map indicates availability of a map extent range mapped to the identified target extent range, a relationship between the identified source extent range and the identified target extent range is established. Other features and aspects may be realized, depending upon the particular application. | 07-04-2013 |
20130185476 | DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING AN OCCUPANCY OF VALID TRACKS IN STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides. | 07-18-2013 |
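One way to picture the occupancy-count-driven consolidation described in 20130185476 above is the hypothetical sketch below, which picks an empty target stride and the two most sparsely populated source strides and copies the sources' valid tracks into the target; all names, data structures, and selection rules are assumptions made for illustration.

```python
def consolidate_strides(strides: dict[int, dict[int, bytes]],
                        occupancy: dict[int, int],
                        stride_size: int):
    """Copy valid tracks from the two sparsest strides into an empty target stride."""
    sources = sorted((s for s, n in occupancy.items() if 0 < n < stride_size),
                     key=lambda s: occupancy[s])[:2]
    target = next((s for s, n in occupancy.items() if n == 0), None)
    if len(sources) < 2 or target is None:
        return None                                   # nothing worth consolidating
    if sum(occupancy[s] for s in sources) > stride_size:
        return None                                   # combined valid tracks must fit one stride
    for src in sources:
        for track, data in strides[src].items():
            strides[target][track] = data             # move the valid tracks into the target
        occupancy[target] += occupancy[src]
        strides[src].clear()                          # the source stride becomes free space
        occupancy[src] = 0
    return target
```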
20130185478 | POPULATING A FIRST STRIDE OF TRACKS FROM A FIRST CACHE TO WRITE TO A SECOND STRIDE IN A SECOND CACHE - Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted. | 07-18-2013 |
20130185489 | DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride. | 07-18-2013 |
20130185493 | MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. | 07-18-2013 |
20130185494 | POPULATING A FIRST STRIDE OF TRACKS FROM A FIRST CACHE TO WRITE TO A SECOND STRIDE IN A SECOND CACHE - Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted. | 07-18-2013 |
20130185495 | DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride. | 07-18-2013 |
20130185497 | MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. | 07-18-2013 |
20130185501 | CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA - Provided are a computer program product, system, and method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted. | 07-18-2013 |
20130185502 | DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE - A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache. | 07-18-2013 |
20130185504 | DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE - A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache. | 07-18-2013 |
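The sector merge described in 20130185502/20130185504 above can be pictured with the minimal sketch below, which treats a track as a mapping from sector index to sector data; the representation is an assumption chosen only for illustration.

```python
def merge_demoted_track(demoted: dict[int, bytes], stale: dict[int, bytes]) -> dict[int, bytes]:
    """Build the new version of a track for the second cache.

    Sectors present in the track demoted from the first cache take precedence;
    sectors that exist only in the stale second-cache copy fill in the gaps.
    """
    new_version = dict(stale)       # start from the stale copy's sectors
    new_version.update(demoted)     # overlay the freshly demoted sectors
    return new_version
```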
20130185507 | WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride. | 07-18-2013 |
20130185510 | CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA - Provided is a method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted. | 07-18-2013 |
20130185512 | MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS - For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes. | 07-18-2013 |
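A toy rendering of the MRU/LRU split in 20130185512 above might look as follows; the deque convention (left end as MRU, right end as LRU) and the pinning set are illustrative assumptions.

```python
from collections import deque

def promote_whole_segment(parts, requested, demotion_queue: deque, pinned: set) -> None:
    """Split a promoted whole data segment across the demotion queue of the higher cache.

    Convention for this sketch only: appendleft() == MRU end, append() == LRU end.
    """
    for part in parts:
        if part in requested:
            demotion_queue.appendleft(part)   # requested data goes to the MRU end
        else:
            demotion_queue.append(part)       # unrequested data goes to the LRU end
            pinned.add(part)                  # pinned until the lower-cache write completes

def on_lower_cache_write_complete(parts, pinned: set) -> None:
    """Unpin the segment's parts once the whole segment is written to the lower cache."""
    pinned.difference_update(parts)
```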
20130185513 | CACHE MANAGEMENT OF TRACK REMOVAL IN A CACHE FOR STORAGE - In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application. | 07-18-2013 |
20130185514 | CACHE MANAGEMENT OF TRACK REMOVAL IN A CACHE FOR STORAGE - In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application. | 07-18-2013 |
20130191596 | ADJUSTMENT OF DESTAGE RATE BASED ON READ AND WRITE RESPONSE TIME REQUIREMENTS - A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command has to be satisfied. The storage controller determines ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type. The destage rate corresponding to the ranks of the first type is adjusted to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command is satisfied. | 07-25-2013 |
20130198461 | MANAGING TRACK DISCARD REQUESTS TO INCLUDE IN DISCARD TRACK MESSAGES - Provided is a method for managing track discard requests. A backup copy of a track in a cache is maintained in a cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. If a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent to the cache backup device indicating the tracks indicated in the queued predetermined number of track discard requests to instruct the cache backup device to discard the tracks indicated in the discard multiple tracks message. If a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode. | 08-01-2013 |
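The batching of track discard requests described in 20130198461 above could be sketched roughly as below; the batch size, idle limit, and the `discard_multiple`/`discard_single` calls on the cache backup device are stand-ins invented for this example.

```python
from collections import deque

BATCH_SIZE = 32     # assumed "predetermined number" of queued discard requests
IDLE_LIMIT = 3      # assumed number of idle periods before leaving multi-track mode

def drain_discards(queue: deque, cache_backup, idle_periods: int, multi_track_mode: bool) -> bool:
    """Drain queued track-discard requests and return the (possibly updated) mode.

    `cache_backup` is a stand-in object assumed to expose discard_multiple() and
    discard_single(); neither name comes from the patent abstract.
    """
    if multi_track_mode:
        if len(queue) >= BATCH_SIZE:
            batch = [queue.popleft() for _ in range(BATCH_SIZE)]
            cache_backup.discard_multiple(batch)       # one message covers many tracks
        elif idle_periods >= IDLE_LIMIT:
            multi_track_mode = False                   # switch to single-track mode
    else:
        while queue:
            cache_backup.discard_single(queue.popleft())
    return multi_track_mode
```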
20130198751 | INCREASED DESTAGING EFFICIENCY - For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, destaging tasks are calculated according to one of a standard time interval and a variable recomputed destaging task interval. The destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks is smoothed according to the calculating. | 08-01-2013 |
20130198752 | INCREASED DESTAGING EFFICIENCY - For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, destaging tasks are calculated according to one of a standard time interval and a variable recomputed destaging task interval. The destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks is smoothed according to the calculating. | 08-01-2013 |
20130205077 | PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE - For more efficient track destage in secondary storage, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. | 08-08-2013 |
20130205084 | STRIDE BASED FREE SPACE MANAGEMENT ON COMPRESSED VOLUMES - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks, and a determination is made as to whether all of the one or more tracks can be stored in one selected stride of the plurality of strides. In response to determining that all of the one or more tracks can be stored in the one selected stride, the one or more tracks are written in the one selected stride of the plurality of strides. | 08-08-2013 |
20130205088 | MULTI-STAGE CACHE DIRECTORY AND VARIABLE CACHE-LINE SIZE FOR TIERED STORAGE ARCHITECTURES - A method in accordance with the invention includes providing first, second, and third storage tiers, wherein the first storage tier acts as a cache for the second storage tier, and the second storage tier acts as a cache for the third storage tier. The first storage tier uses a first cache line size corresponding to an extent size of the second storage tier. The second storage tier uses a second cache line size corresponding to an extent size of the third storage tier. The second cache line size is significantly larger than the first cache line size. The method further maintains, in the first storage tier, a first cache directory indicating which extents from the second storage tier are cached in the first storage tier, and a second cache directory indicating which extents from the third storage tier are cached in the second storage tier. | 08-08-2013 |
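A minimal sketch of the two cache directories with differing line sizes described in 20130205088 above follows; the tier names, line sizes, and lookup shape are assumptions for illustration only.

```python
TIER1_LINE = 1 * 1024 * 1024           # first-tier cache line == assumed second-tier extent size
TIER2_LINE = 1 * 1024 * 1024 * 1024    # second-tier cache line == assumed third-tier extent size

tier1_directory: set[int] = set()      # which second-tier extents are cached in tier 1
tier2_directory: set[int] = set()      # which third-tier extents are cached in tier 2

def locate(byte_offset: int) -> str:
    """Return the fastest tier currently holding the requested offset."""
    if byte_offset // TIER1_LINE in tier1_directory:
        return "tier1"
    if byte_offset // TIER2_LINE in tier2_directory:
        return "tier2"
    return "tier3"
```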
20130205093 | MANAGEMENT OF POINT-IN-TIME COPY RELATIONSHIP FOR EXTENT SPACE EFFICIENT VOLUMES - A storage controller receives a request to establish a point-in-time copy operation by placing a space efficient source volume in a point-in-time copy relationship with a space efficient target volume, wherein subsequent to being established the point-in-time copy operation is configurable to consistently copy the space efficient source volume to the space efficient target volume at a point in time. A determination is made as to whether any track of an extent is staging into a cache from the space efficient target volume or destaging from the cache to the space efficient target volume. In response to a determination that at least one track of the extent is staging into the cache from the space efficient target volume or destaging from the cache to the space efficient target volume, release of the extent from the space efficient target volume is avoided. | 08-08-2013 |
20130205094 | EFFICIENT TRACK DESTAGE IN SECONDARY STORAGE - For more efficient track destage in secondary storage, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. | 08-08-2013 |
20130205109 | DATA ARCHIVING USING DATA COMPRESSION OF A FLASH COPY - Embodiments of the disclosure relate to archiving data in a storage system. An exemplary embodiment comprises making a flash copy of data in a source volume, compressing data in the flash copy wherein each track of data is compressed into a set of data pages, and storing the compressed data pages in a target volume. Data extents for the target volume may be allocated from a pool of compressed data extents. After each stride worth of data is compressed and stored in the target volume, data may be destaged to avoid destage penalties. Data from the target volume may be decompressed from a flash copy of the target volume in a reverse process to restore each data track, when the archived data is needed. Data may be compressed and uncompressed using a Lempel-Ziv-Welch process. | 08-08-2013 |
20130219124 | EFFICIENT DISCARD SCANS - A plurality of tracks is examined for meeting criteria for a discard scan. In lieu of waiting for a completion of a track access operation, at least one of the plurality of tracks is marked for demotion. An additional discard scan may be subsequently performed for tracks not previously demoted. The discard and additional discard scans may proceed in two phases. | 08-22-2013 |
20130232294 | ADAPTIVE CACHE PROMOTIONS IN A TWO LEVEL CACHING SYSTEM - Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for the page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (MRU) end of the LRU list in the first cache. | 09-05-2013 |
20130232295 | ADAPTIVE CACHE PROMOTIONS IN A TWO LEVEL CACHING SYSTEM - Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for the page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (MRU) end of the LRU list in the first cache. | 09-05-2013 |
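The placement rule in 20130232294/20130232295 above can be pictured with the short sketch below, where the deque's left end stands for the MRU end and the right end for the LRU end; this convention and the dictionary of reference counts are illustrative assumptions.

```python
from collections import deque

def promote_page(page, reference_counts: dict, lru_list: deque) -> None:
    """Place a page promoted from the second cache into the first cache's LRU list.

    Convention for this sketch only: appendleft() == MRU end, append() == LRU end.
    """
    if reference_counts.get(page, 0) > 0:
        lru_list.append(page)        # reference count > 0: add at the LRU end
    else:
        lru_list.appendleft(page)    # reference count <= 0: add at the MRU end
```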
20130235709 | PERIODIC DESTAGES FROM INSIDE AND OUTSIDE DIAMETERS OF DISKS TO IMPROVE READ RESPONSE TIMES - A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read response times for executing the command has to be satisfied. A destage application that destages tracks based at least on recency of usage and spatial location of the tracks is executed, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks. Tracks are destaged from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command is satisfied. | 09-12-2013 |
20130246691 | ADAPTIVE PRESTAGING IN A STORAGE CONTROLLER - In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount, may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Other features and aspects may be realized, depending upon the particular application. | 09-19-2013 |
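As a purely hypothetical reading of 20130246691 above, the snippet below scales a prestage trigger and prestage amount with relative drive speed; the baseline, constants, and linear scaling are assumptions, not values from the patent.

```python
BASE_TRIGGER = 8     # assumed: remaining prestaged tracks at which the next prestage fires
BASE_AMOUNT = 32     # assumed: tracks fetched per prestage for a baseline drive

def prestage_parameters(drive_speed_mbps: float, baseline_mbps: float = 150.0) -> tuple[int, int]:
    """Scale the prestage trigger and prestage amount by relative drive speed."""
    factor = max(drive_speed_mbps / baseline_mbps, 0.25)    # clamp very slow drives
    trigger = max(1, round(BASE_TRIGGER * factor))
    amount = max(trigger + 1, round(BASE_AMOUNT * factor))  # always prestage past the trigger
    return trigger, amount
```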
20130304968 | DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING AN OCCUPANCY OF VALID TRACKS IN STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides. | 11-14-2013 |
20130332645 | SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY - A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache. | 12-12-2013 |
20130332646 | PERFORMING ASYNCHRONOUS DISCARD SCANS WITH STAGING AND DESTAGING OPERATIONS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations with respect to the area of the cache. | 12-12-2013 |
20140047187 | ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS - A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress. | 02-13-2014 |
20140068163 | PERFORMING ASYNCHRONOUS DISCARD SCANS WITH STAGING AND DESTAGING OPERATIONS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache. | 03-06-2014 |
20140068189 | ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS - A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress. | 03-06-2014 |
20140068191 | SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY - A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache. | 03-06-2014 |
20140075110 | REPLICATING TRACKS FROM A FIRST STORAGE SITE TO A SECOND AND THIRD STORAGE SITES - Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage. | 03-13-2014 |
20140075114 | REPLICATING TRACKS FROM A FIRST STORAGE SITE TO A SECOND AND THIRD STORAGE SITES - Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage. | 03-13-2014 |
20140082256 | RECOVERY FROM CACHE AND NVS OUT OF SYNC - For cache/data management in a computing storage environment, incoming data segments into a Non Volatile Storage (NVS) device of the computing storage environment are validated against a bitmap to determine if the incoming data segments are currently in use. Those of the incoming data segments determined to be currently in use are designated to the computing storage environment to protect data integrity. | 03-20-2014 |
20140082277 | EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations. A total number of I/O operations to be awoken at each of an iterated instance of the waking is limited. | 03-20-2014 |
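The limited wake-up pass in 20140082277 above might be modeled as below; the wake limit and the use of threading events to represent waiting I/O operations are assumptions for the sketch.

```python
import threading
from collections import deque

WAKE_LIMIT = 4   # assumed cap on the number of waiters woken per pass

def wake_some_waiters(waiters: "deque[threading.Event]") -> None:
    """Separate wake-up pass: signal at most WAKE_LIMIT queued I/O waiters."""
    for _ in range(min(WAKE_LIMIT, len(waiters))):
        waiters.popleft().set()

# Example: queue three waiting "I/O operations" and wake them in one pass.
waiters: "deque[threading.Event]" = deque(threading.Event() for _ in range(3))
wake_some_waiters(waiters)
```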
20140082283 | EFFICIENT CACHE VOLUME SIT SCANS - A processor, operable in a computing storage environment, allocates portions of a Scatter Index Table (SIT) disproportionately between a larger portion dedicated for meta data tracks, and a smaller portion dedicated for user data tracks, and processes a storage operation through the disproportionately allocated portions of the SIT using an allocated number of Task Control Blocks (TCB). | 03-20-2014 |
20140082294 | MANAGEMENT OF DESTAGE TASKS WITH LARGE NUMBER OF RANKS - A processor, operable in a computing storage environment, for each rank in a storage management device in the computing storage environment, allocates a lower maximum count, and a higher maximum count, of Task Control Blocks (TCBs) to be implemented for performing a storage operation, and performs the storage operation using up to the lower maximum count of TCBs, yet allows TCBs above the lower maximum count to be allocated only when the storage operation satisfies at least one criterion. | 03-20-2014 |
20140082303 | MANAGEMENT OF DESTAGE TASKS WITH LARGE NUMBER OF RANKS - A processor, operable in a computing storage environment, for each rank in a storage management device in the computing storage environment, allocates a lower maximum count, and a higher maximum count, of Task Control Blocks (TCBs) to be implemented for performing a storage operation, and performs the storage operation using up to the lower maximum count of TCBs, yet allows TCBs above the lower maximum count to be allocated only when the storage operation satisfies at least one criterion. | 03-20-2014 |
20140082631 | PREFERENTIAL CPU UTILIZATION FOR TASKS - A set of like tasks to be performed is organized into a first group. Upon a determined imbalance between dispatch queue depths greater than a predetermined threshold, the set of like tasks is reassigned to an additional group. | 03-20-2014 |
20140095762 | FUZZY COUNTERS FOR NVS TO REDUCE LOCK CONTENTION - A system for data management in a computing storage environment includes a processor device, operable in the computing storage environment, that divides a plurality of counters tracking write and discard storage operations through Non Volatile Storage (NVS) space into first, accurate, and second, fuzzy, groups where the first, accurate, group is one of incremented and decremented per each write and discard storage operation, while the second, fuzzy, group is one of incremented and decremented on a more infrequent basis as compared to the first, accurate group. | 04-03-2014 |
20140095763 | NVS THRESHOLDING FOR EFFICIENT DATA MANAGEMENT - For data management by a processor device in a computing storage environment, a threshold for an amount of Non Volatile Storage (NVS) space to be consumed by any particular logically contiguous storage space in the computing storage environment is established based on at least one of a Redundant Array of Independent Disks (RAID) type, a number of point-in-time copy source data segments in the logically contiguous storage space, and a storage classification. | 04-03-2014 |
20140095787 | NVS THRESHOLDING FOR EFFICIENT DATA MANAGEMENT - For data management by a processor device in a computing storage environment, a threshold for an amount of Non Volatile Storage (NVS) space to be consumed by any particular logically contiguous storage space in the computing storage environment is established based on at least one of a Redundant Array of Independent Disks (RAID) type, a number of point-in-time copy source data segments in the logically contiguous storage space, and a storage classification. | 04-03-2014 |
20140095811 | FUZZY COUNTERS FOR NVS TO REDUCE LOCK CONTENTION - A system for data management in a computing storage environment includes a processor device, operable in the computing storage environment, that divides a plurality of counters tracking write and discard storage operations through Non Volatile Storage (NVS) space into first, accurate, and second, fuzzy, groups where the first, accurate, group is one of incremented and decremented per each write and discard storage operation, while the second, fuzzy, group is one of incremented and decremented on a more infrequent basis as compared to the first, accurate group. | 04-03-2014 |
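One possible shape for a "fuzzy" counter along the lines of 20140095762/20140095811 above is sketched below: per-thread deltas are folded into the shared total only occasionally, so the lock is taken far less often than once per write or discard; the class name and flush interval are illustrative assumptions.

```python
import threading

class FuzzyNvsCounter:
    """Per-thread deltas are folded into the shared total only every FLUSH_EVERY
    operations, so the lock is taken far less often than once per write or discard."""

    FLUSH_EVERY = 64

    def __init__(self) -> None:
        self.total = 0                       # the infrequently updated shared value
        self._lock = threading.Lock()
        self._local = threading.local()      # per-thread pending delta and op count

    def add(self, delta: int) -> None:
        pending = getattr(self._local, "pending", 0) + delta
        ops = getattr(self._local, "ops", 0) + 1
        if ops >= self.FLUSH_EVERY:
            with self._lock:
                self.total += pending        # contended update happens rarely
            pending, ops = 0, 0
        self._local.pending, self._local.ops = pending, ops
```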
20140122808 | PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP - A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage, wherein the target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship, wherein the point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request before destaging the updated source track to the source storage. | 05-01-2014 |
20140136790 | SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE - A system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks, and each storage track includes an associated multi-bit counter. The processor is configured to increment the multi-bit counter of a storage track by a predetermined amount each time the processor writes to that storage track, to decrement each multi-bit counter on each scan cycle, and to destage each storage track whose counter has reached zero. | 05-15-2014 |
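A compact sketch of the multi-bit counter policy in 20140136790 above follows; the increment size, counter width, and data structures are assumptions chosen for illustration.

```python
INCREMENT = 4      # assumed amount added per write
MAX_COUNT = 7      # assumed ceiling, e.g. a 3-bit counter

def on_write(counters: dict[int, int], track_id: int) -> None:
    """Bump the track's counter on every write, saturating at MAX_COUNT."""
    counters[track_id] = min(counters.get(track_id, 0) + INCREMENT, MAX_COUNT)

def scan_cycle(counters: dict[int, int], dirty: set[int], destage) -> None:
    """Age every counter once per scan and destage dirty tracks that reach zero."""
    for track_id in list(counters):
        if counters[track_id] > 0:
            counters[track_id] -= 1
        if counters[track_id] == 0 and track_id in dirty:
            destage(track_id)
            dirty.discard(track_id)
```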
20140156936 | SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS - Storage tracks are destaged from each rank whose current storage usage exceeds a predetermined percentage of the predetermined amount of storage space allocated to that rank, until the storage space used by each such rank equals the predetermined percentage. Destaging is declined for each rank whose usage is less than or equal to the predetermined percentage of its allocated storage space. | 06-05-2014 |
20140156937 | SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Storage tracks are destaged from the write cache when the host or hosts are idle, and destaging is withheld while at least one host is not idle. Each rank is monitored for write operations from the at least one host, and a determination is made as to whether the at least one host is idle with respect to each respective rank based on that monitoring, such that the at least one host may be determined to be idle with respect to a first rank and not idle with respect to a second rank. | 06-05-2014 |
20140201448 | MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS - For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. | 07-17-2014 |
20140207995 | USE OF DIFFERING GRANULARITY HEAT MAPS FOR CACHING AND MIGRATION - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to utilize a Solid State Drive (SSD) portion of the tiered levels of storage, while sparsely hot ones of the groups of data segments are migrated to utilize the lower-speed cache. | 07-24-2014 |
20140207999 | PERFORMING STAGING OR DESTAGING BASED ON THE NUMBER OF WAITING DISCARD SCANS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed. The controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache, in response to determining that more than the threshold number of discard scans are waiting to be performed. | 07-24-2014 |
20140208017 | THINLY PROVISIONED FLASH CACHE WITH SHARED STORAGE POOL - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a Solid State Device (SSD) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments. | 07-24-2014 |
20140208018 | TIERED CACHING AND MIGRATION IN DIFFERING GRANULARITIES - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage, clumped hot ones of the groups of data segments are migrated to use the SSD portion while using the lower-speed cache for a remaining portion of the clumped hot ones, and sparsely hot ones of the groups of data segments are migrated to use the lower-speed cache while using a lower one of the tiered levels of storage for a remaining portion of the sparsely hot ones. | 07-24-2014 |
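The three-way routing by heat distribution in 20140208018 above could be approximated by the toy classifier below; the heat metric, thresholds, and category fractions are invented for the example.

```python
def classify_group(segment_heats: list[float], hot_threshold: float = 0.8) -> str:
    """Classify a group of data segments by how its heat is distributed."""
    if not segment_heats:
        return "cold"
    hot_fraction = sum(1 for h in segment_heats if h >= hot_threshold) / len(segment_heats)
    if hot_fraction >= 0.9:
        return "uniformly_hot"   # migrate the whole group to the SSD tier
    if hot_fraction >= 0.3:
        return "clumped_hot"     # SSD tier, with the lower-speed cache for the rest
    if hot_fraction > 0.0:
        return "sparsely_hot"    # lower-speed cache, remainder to a lower storage tier
    return "cold"
```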
20140208020 | USE OF DIFFERING GRANULARITY HEAT MAPS FOR CACHING AND MIGRATION - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to utilize a Solid State Drive (SSD) portion of the tiered levels of storage, while sparsely hot ones of the groups of data segments are migrated to utilize the lower-speed cache. | 07-24-2014 |
20140208021 | THINLY PROVISIONED FLASH CACHE WITH SHARED STORAGE POOL - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a Solid State Device (SSD) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments. | 07-24-2014 |
20140208029 | USE OF FLASH CACHE TO IMPROVE TIERED MIGRATION PERFORMANCE - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the another level, and reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target. | 07-24-2014 |
20140208032 | USE OF FLASH CACHE TO IMPROVE TIERED MIGRATION PERFORMANCE - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the another level, and reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target. | 07-24-2014 |
20140208036 | PERFORMING STAGING OR DESTAGING BASED ON THE NUMBER OF WAITING DISCARD SCANS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed. The controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache, in response to determining that more than the threshold number of discard scans are waiting to be performed. | 07-24-2014 |
20140304479 | GROUPING TRACKS FOR DESTAGING - Tracks are selected for destaging from a least recently used (LRU) list and the selected tracks are moved to a destaging wait list. The selected tracks are grouped and destaged from the destaging wait list. | 10-09-2014 |
20140351532 | MINIMIZING DESTAGING CONFLICTS - Destage grouping of tracks is restricted to a bottom portion of a least recently used (LRU) list, without grouping the tracks at the most recently used end of the LRU list, to avoid destaging conflicts. The grouped tracks are destaged from the bottom portion of the LRU list. | 11-27-2014 |
20140365718 | DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride. | 12-11-2014 |
20150019810 | WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride. | 01-15-2015 |
20150026409 | DEFERRED RE-MRU OPERATIONS TO REDUCE LOCK CONTENTION - Data operations requiring a lock are batched into a set of operations to be performed on a per-core basis. A global lock for the set of operations is periodically acquired, the set of operations is performed, and the global lock is freed so as to avoid excessive duty cycling of lock and unlock operations in the computing storage environment. | 01-22-2015 |
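A minimal sketch of the deferred re-MRU batching in 20150026409 above is shown below, assuming a per-core batch that is flushed under one global lock once it grows large; the batch limit and the list-based LRU (tail as MRU end) are illustrative assumptions.

```python
import threading
from collections import defaultdict

BATCH_LIMIT = 256                       # assumed per-core flush threshold

global_lock = threading.Lock()
per_core_batches: dict[int, list] = defaultdict(list)

def defer_re_mru(core_id: int, track, lru_list: list) -> None:
    """Queue a re-MRU locally; take the global lock only when the batch is large."""
    batch = per_core_batches[core_id]
    batch.append(track)
    if len(batch) >= BATCH_LIMIT:
        with global_lock:               # one acquisition covers the whole batch
            for t in batch:
                if t in lru_list:
                    lru_list.remove(t)
                lru_list.append(t)      # the list's tail models the MRU end here
        batch.clear()
```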
20150032957 | WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride. | 01-29-2015 |
20150040135 | THRESHOLDING TASK CONTROL BLOCKS FOR STAGING AND DESTAGING - For thresholding task control blocks (TCBs) for staging and destaging, a first tier of TCBs is reserved to guarantee a minimum number of TCBs for staging and destaging for the storage ranks. An additional number of requested TCBs is apportioned from a second tier of TCBs to each of the storage ranks based on a scaling factor that is calculated at predefined time intervals. | 02-05-2015 |
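A minimal sketch of the two-tier apportionment summarized above, assuming (purely for illustration) that the scaling factor is each rank's share of outstanding demand; the names and numbers are hypothetical.

def apportion_tcbs(ranks, tier1_per_rank, tier2_total, demand):
    """Guarantee each rank a minimum number of TCBs from the first tier, then
    split the second tier by a scaling factor recomputed from current demand."""
    total_demand = sum(demand.get(r, 0) for r in ranks) or 1
    allocation = {}
    for rank in ranks:
        scale = demand.get(rank, 0) / total_demand   # the periodically recomputed factor
        allocation[rank] = tier1_per_rank + int(tier2_total * scale)
    return allocation

print(apportion_tcbs(["rank0", "rank1"], tier1_per_rank=4,
                     tier2_total=40, demand={"rank0": 30, "rank1": 10}))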
20150046649 | MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled. | 02-12-2015 |
20150052529 | EFFICIENT TASK SCHEDULING USING A LOCKING MECHANISM - For efficient task scheduling using a locking mechanism, a new task is allowed to spin on the locking mechanism if a number of tasks spinning on the locking mechanism is less than a predetermined threshold for parallel operations requiring locks between the multiple threads. | 02-19-2015 |
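The spin-threshold idea above can be sketched as follows; the class name, the fallback to an ordinary blocking acquire, and the threshold value are assumptions rather than details from the application.

import threading

class ThresholdedSpinLock:
    """Let a new task spin on the lock only if fewer than max_spinners tasks
    are already spinning; otherwise fall back to an ordinary blocking acquire."""

    def __init__(self, max_spinners=4):
        self._lock = threading.Lock()
        self._meta = threading.Lock()        # protects the spinner count
        self._spinners = 0
        self._max_spinners = max_spinners

    def acquire(self):
        with self._meta:
            can_spin = self._spinners < self._max_spinners
            if can_spin:
                self._spinners += 1
        if can_spin:
            while not self._lock.acquire(blocking=False):
                pass                         # spin: cheap when hold times are short
            with self._meta:
                self._spinners -= 1
        else:
            self._lock.acquire()             # too many spinners already: block instead

    def release(self):
        self._lock.release()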
20150058560 | WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride. | 02-26-2015 |
20150058561 | INCREASED DESTAGING EFFICIENCY FOR SMOOTHING OF DESTAGE TASKS BASED ON SPEED OF DISK DRIVES - For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, the ramp-up of the destaging tasks is adjusted based on the speed of the disk drives when smoothing the destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks, by calculating destaging tasks according to one of a standard time interval and a variable recomputed destaging task interval. | 02-26-2015 |
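One way to picture the smoothing described above is a bounded step toward the desired task count at each interval. In the sketch below the step size stands in for the drive-speed dependence; the constant and function name are assumptions.

def next_destage_task_count(current, desired, interval_s, max_step_per_s=2.0):
    """Move the number of destage tasks toward the desired number in bounded
    steps per interval, rather than jumping straight to the target."""
    max_step = max(1, int(max_step_per_s * interval_s))   # step size would track drive speed
    if desired > current:
        return min(desired, current + max_step)           # gradual ramp-up
    return max(desired, current - max_step)               # gradual ramp-down

tasks = 4
for _ in range(5):
    tasks = next_destage_task_count(tasks, desired=20, interval_s=1.0)
    print(tasks)                                          # 6, 8, 10, 12, 14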
20150095561 | PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE - For more efficient track destage in secondary storage, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, a preference of movement to a lower-speed cache level is implemented based on at least one of an amount of holes and a data heat metric. If a first bit has at least one of a lower amount of holes and a hotter data heat metric, it is moved to the lower-speed cache level ahead of a second bit that has at least one of a higher amount of holes and a cooler data heat metric. If the first bit has a hotter data heat and greater than a predetermined number of holes, the first bit is discarded. | 04-02-2015 |
20150121007 | ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS - A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress. | 04-30-2015 |
20150127904 | ASSIGNING DEVICE ADAPTORS TO USE TO COPY SOURCE EXTENTS TO TARGET EXTENTS IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for assigning device adaptors to use to copy source extents in source ranks to target extents in target ranks in a copy relation. A determination is made of an order of the target ranks in the copy relation. Target ranks in the copy relation are selected according to the determined order. For each selected target rank, indication is made in a device adaptor assignment data structure of a source device adaptor and target device adaptor of the device adaptors to use to copy the source rank to the selected target rank indicated in the copy relation, wherein indication is made for the selected target ranks according to the determined order. The source ranks are copied to the selected target ranks using the source and target device adaptors indicated in the device adaptor assignment data structure. | 05-07-2015 |
20150127913 | EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations. | 05-07-2015 |
20150134878 | NONVOLATILE STORAGE THRESHOLDING FOR ULTRA-SSD, SSD, AND HDD DRIVE INTERMIX - Embodiments for efficient thresholding of nonvolatile storage (NVS) for a plurality of types of storage rank groups by a processor. Target storage devices in a pool of target storage devices are determined to be one of a hard disk drive (HDD) and a solid-state drive (SSD) device. Each target storage device is classified into one of an SSD rank group, a Nearline rank group, an Enterprise rank group, and an Ultra-SSD rank group in the pool of target storage devices. The Nearline rank group and the Enterprise rank group comprise an HDD rank group, and the Nearline rank group, the Enterprise rank group, and the SSD rank group comprise the Non-Ultra-SSD rank group. Thresholds are adjusted for preventing space allocation in the NVS for at least one of the classified target storage devices based on one of the presence and absence of identified types of the classified target storage devices. | 05-14-2015 |
20150134914 | DESTAGE GROUPING FOR SEQUENTIAL FAST WRITE TRACKS - The number of sequential fast write (SFW) tracks is metered by providing an adjustable threshold for performing a destage scan that moves the SFW tracks from an SFW least recently used (LRU) list to a destaging wait list (DWL). Priorities are set for destaging the SFW tracks from the DWL. | 05-14-2015 |
20150227323 | USE OF FLASH CACHE TO IMPROVE TIERED MIGRATION PERFORMANCE - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the other level, reading the remaining data, not previously copied from the lower-speed cache, from a source on the one level and writing the remaining data to the target, and changing a logical address of the at least one data segment to point to the target. | 08-13-2015 |
20150227455 | MANAGEMENT OF POINT-IN-TIME COPY RELATIONSHIP FOR EXTENT SPACE EFFICIENT VOLUMES - A storage controller receives a request to establish a point-in-time copy operation by placing a space efficient source volume in a point-in-time copy relationship with a space efficient target volume, wherein subsequent to being established the point-in-time copy operation is configurable to consistently copy the space efficient source volume to the space efficient target volume at a point in time. A determination is made as to whether any track of an extent is staging into a cache from the space efficient target volume or destaging from the cache to the space efficient target volume. In response to a determination that at least one track of the extent is staging into the cache from the space efficient target volume or destaging from the cache to the space efficient target volume, release of the extent from the space efficient target volume is avoided. | 08-13-2015 |
20150227467 | TIERED CACHING AND MIGRATION IN DIFFERING GRANULARITIES - For data processing in a computing storage environment by a processor device, the environment incorporating at least high-speed and lower-speed caches and managed tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that clumped uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage; uniformly hot groups of data segments are determined using a first, largest-granularity heat map for a selected one of the groups of data segments; a second, smaller heat map of finer granularity than the first heat map is used to determine the clumped hot groups; and sparsely hot groups are determined when neither the first heat map nor the second heat map is hotter than the first and second predetermined thresholds, respectively. | 08-13-2015 |
20150227487 | EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated, and the at least one complete data track is removed from a free list by a first I/O waiter. | 08-13-2015 |
20150242125 | EFFICIENT FREE-SPACE MANAGEMENT OF MULTI-TARGET PEER-TO-PEER REMOTE COPY (PPRC) MODIFIED SECTORS BITMAP IN BIND SEGMENTS - For efficient free-space management of the multi-target peer-to-peer remote copy (PPRC) modified sectors bitmap in bind segments, a list of bind segments having free slots is maintained for each storage volume. Each of the bind segments includes a bitmap of the free slots. Bind segments having more than a predetermined number of free slots are freed. | 08-27-2015 |
20150242126 | EFFICIENT CACHE MANAGEMENT OF MULTI-TARGET PEER-TO-PEER REMOTE COPY (PPRC) MODIFIED SECTORS BITMAP - For efficient cache management of multi-target peer-to-peer remote copy (PPRC) modified sectors bitmap in a computing storage environment a multiplicity of PPRC modified sectors bitmaps are dynamically managed by placing the multiplicity of PPRC modified sectors bitmaps into slots of bind segments. | 08-27-2015 |
20150242127 | OPTIMIZING PEER-TO-PEER REMOTE COPY (PPRC) TRANSFERS FOR PARTIAL WRITE OPERATIONS - For optimizing peer-to-peer remote copy (PPRC) transfers for partial write operations in a computing storage environment by a processor device, a PPRC modified sectors bitmap is maintained in bind segments upon demoting a track out of a cache, for transferring a partial track after demoting the track, wherein a hash table is used for locating the PPRC modified sectors bitmap. | 08-27-2015 |
20150242316 | ASYNCHRONOUS CLEANUP AFTER A PEER-TO-PEER REMOTE COPY (PPRC) TERMINATE RELATIONSHIP OPERATION - For asynchronous cleanup after a peer-to-peer remote copy (PPRC) terminate relationship operation in a computing storage environment by a processor device, asynchronously cleaning up a plurality of PPRC modified sectors bitmaps using a PPRC terminate-relationship cleanup operation by throttling a number of tasks performing the PPRC terminate-relationship cleanup operation while releasing a plurality of bind segments until completion of the PPRC terminate-relationship cleanup operation. | 08-27-2015 |
20150248239 | CASCADED, POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a volume x in a cascaded architecture is described. In one embodiment, such a method includes determining whether the volume x has a child volume, wherein each of the volume x and the child volume have a target bit map (TBM) associated therewith. The method then determines whether the TBMs of both the volume x and the child volume are set. If the TBMs are set, the method finds a higher source (HS) volume from which to copy the desired data to the child volume. Finding the HS volume includes comparing ages of mapping relationships upstream from the volume x in order to determine a source of the data. Once the HS volume is found, the method copies the data from the HS volume to the child volume and performs the write to the volume x. A method for performing a read is also disclosed herein. | 09-03-2015 |
20150261440 | ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records. | 09-17-2015 |
20150261441 | ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records. | 09-17-2015 |
20150261453 | GROUPING OF TRACKS FOR COPY SOURCE TO TARGET DESTAGE ON GLOBAL MIRROR SECONDARY - For performing efficient full-stride copy source-to-target operations in a computing storage environment by a processor device, pursuant to a destage operation, a determination is made whether to destage a full stride or one track of data on a target volume by comparing a counted number of modified tracks for the full stride against a predetermined threshold. | 09-17-2015 |
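The full-stride-versus-single-track decision above reduces to comparing a modified-track count against a threshold. The sketch below is illustrative only; the threshold value is an assumption.

def choose_destage_unit(modified_tracks_in_stride, stride_threshold=6):
    """Destage the full stride when enough of its tracks are modified that one
    full-stride write beats several single-track writes."""
    if modified_tracks_in_stride >= stride_threshold:
        return "full-stride"
    return "single-track"

print(choose_destage_unit(7))   # full-stride
print(choose_destage_unit(2))   # single-track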
20150261678 | MANAGING SEQUENTIALITY OF TRACKS FOR ASYNCHRONOUS PPRC TRACKS ON SECONDARY - For performing efficient management of tracks in an asynchronous Peer-to-Peer Remote Copy (PPRC) operation in a computing storage environment, a correct status of a sequential bit is determined by performing one of: (1) examining a primary cache, where, if data being transferred pursuant to the PPRC operation in a primary track remains in the primary cache, the sequential bit setting found therein is used, and (2) examining an Out-Of-Sync (OOS) bitmap to determine whether the sequential bit is set. | 09-17-2015 |
20150261685 | SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Storage tracks from at least one host are destaged from the write cache rank when it is determined that the at least one host is idle with respect to a first set of ranks, and storage tracks are refrained from being destaged from each rank when it is determined that the at least one host is not idle with respect to a second set of ranks such that storage tracks in the first set of ranks may be destaged while storage tracks in the second set of ranks are not being destaged. | 09-17-2015 |
20150261689 | SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Storage tracks from at least one server are destaged from the write cache rank when it is determined that the at least one server is idle with respect to a first set of ranks, and storage tracks are refrained from being destaged from each rank when it is determined that the at least one server is not idle with respect to a second set of ranks such that storage tracks in the first set of ranks may be destaged while storage tracks in the second set of ranks are not being destaged. | 09-17-2015 |
20150268887 | WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride. | 09-24-2015 |
20150268892 | WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride. | 09-24-2015 |
20150278115 | WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride. | 10-01-2015 |
20150286418 | TIERED CACHING AND MIGRATION IN DIFFERING GRANULARITIES - For data processing in a distributed computing storage environment by a processor device, the distributed computing environment incorporating at least high-speed and lower-speed caches and managed tiered levels of storage, groups of data segments and clumped hot ones of the data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage; uniformly hot groups of data segments are determined using a first, largest-granularity heat map for a selected one of the groups of data segments; and a second, smaller heat map of finer granularity than the first heat map is used to determine the clumped hot groups. | 10-08-2015 |
20150286572 | EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - Various embodiments for cache management in a distributed computing storage environment are provided. In one embodiment, a processor device, for a plurality of input/output (I/O) operations, initiates a process, separate from a process responsible for data segment assembly, for waking a predetermined number of waiting I/O operations. | 10-08-2015 |
20150286580 | MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS - For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache, the promotion taking into consideration an Input/Output Performance (IOP) metric, a bandwidth metric, and a garbage collection metric. | 10-08-2015 |
20150301855 | PREFERENTIAL CPU UTILIZATION FOR TASKS - In a distributed server storage environment, a set of like tasks to be performed is organized into a first group, and a last used processing group associated with the like tasks is stored. Upon a subsequent dispatch, the last used processing group is compared to other processing groups and the tasks are assigned to a processing group based upon a predetermined threshold. | 10-22-2015 |
20150301862 | PREFERENTIAL CPU UTILIZATION FOR TASKS - A set of like tasks to be performed is organized into a first group, and a last used processing group associated with the like tasks is stored. Upon a subsequent dispatch, the last used processing group is compared to other processing groups and the tasks are assigned to a processing group based upon a predetermined threshold. | 10-22-2015 |
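The preferential CPU utilization described in the two entries above amounts to reusing the last processing group for a set of like tasks unless that group is too busy. A hypothetical sketch (group names, utilization values, and the threshold are made up):

def assign_processing_group(task_group, group_utilization, last_used, threshold=0.75):
    """Prefer the processing group the task group ran on last time (for cache
    warmth), unless that group's utilization exceeds the threshold."""
    preferred = last_used.get(task_group)
    if preferred is not None and group_utilization[preferred] < threshold:
        return preferred                                           # reuse the warm group
    choice = min(group_utilization, key=group_utilization.get)     # least-loaded group
    last_used[task_group] = choice
    return choice

utilization = {"cpu-group-0": 0.9, "cpu-group-1": 0.3}
last_used = {"destage-tasks": "cpu-group-0"}
print(assign_processing_group("destage-tasks", utilization, last_used))  # cpu-group-1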
20150309747 | REPLICATING TRACKS FROM A FIRST STORAGE SITE TO A SECOND AND THIRD STORAGE SITES - Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage. | 10-29-2015 |
20150331614 | USING QUEUES CORRESPONDING TO ATTRIBUTE VALUES ASSOCIATED WITH UNITS OF WORK AND SUB-UNITS OF THE UNIT OF WORK TO SELECT THE UNITS OF WORK AND THEIR SUB-UNITS TO PROCESS - Provided are a computer program product, system, and method for using queues corresponding to attribute values associated with units of work and sub-units of the unit of work to select the units of work and their sub-units to process. There are a plurality of work unit queues, each associated with different work unit attribute values that are associated with units of work, wherein the work unit queues include records for units of work to process having work unit attribute values associated with the work unit attribute values of the work unit queues. There are a plurality of work sub-unit queues, wherein each are associated with different work sub-unit attribute values that are associated with sub-units of work. Records are added for work sub-units of a unit of work to the work sub-unit queues, and records are selected from the work sub-unit queues to process the sub-units of work. | 11-19-2015 |
20150331710 | USING QUEUES CORRESPONDING TO ATTRIBUTE VALUES ASSOCIATED WITH UNITS OF WORK TO SELECT THE UNITS OF WORK TO PROCESS - Provided are a computer program product, system, and method for using queues corresponding to attribute values associated with units of work to select the units of work to process. A plurality of queues for each of a plurality of attribute types of attributes are associated with the units of work to process, wherein there are queues for different possible attribute values for each of the attribute types. A unit of work to process is received. A determination is made for each of the attribute types at least one of the queues corresponding to at least one attribute value for the attribute type associated with the received unit of work. A record for the received unit of work is added to each of the determined queues. | 11-19-2015 |
20150331712 | CONCURRENTLY PROCESSING PARTS OF CELLS OF A DATA STRUCTURE WITH MULTIPLE PROCESSES - Provided are a computer program product, system, and method for concurrently processing parts of cells of a data structure with multiple processes. Information is provided to indicate a partitioning of the cells of the data structure into a plurality of parts, each part having a cursor pointing to a cell in the part. Processes concurrently process different parts of the data structure by performing: determining from the cursor for the part one of the cells in the part to process; processing the cells from the cursor to determine whether to process the unit of work corresponding to the cell; and setting the cursor to identify one of the cells from which processing is to continue in a subsequent iteration in response to processing the units of work for a plurality of the processed cells. | 11-19-2015 |
20150331716 | USING QUEUES CORRESPONDING TO ATTRIBUTE VALUES AND PRIORITIES ASSOCIATED WITH UNITS OF WORK AND SUB-UNITS OF THE UNIT OF WORK TO SELECT THE UNITS OF WORK AND THEIR SUB-UNITS TO PROCESS - Provided are a computer program product, system, and method for using queues corresponding to attribute values and priorities associated with units of work and sub-units of the unit of work to select the units of work and their sub-units to process. There are a plurality of work unit queues, wherein each of the work unit queues are associated with different work unit attribute values that are associated with units of work, wherein a plurality of the work unit queues include records for units of work to process having work unit attribute values associated with the work unit attribute values of the work unit queues, and wherein the work unit queues are each associated with a different priority. A record for a unit of work to perform is added to the work unit queue associated with a priority and work unit attribute value associated with the work unit. | 11-19-2015 |
20150339074 | ASSIGNING DEVICE ADAPTORS TO USE TO COPY SOURCE EXTENTS TO TARGET EXTENTS IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for assigning device adaptors to use to copy source extents in source ranks to target extents in target ranks in a copy relation. A determination is made of an order of the target ranks in the copy relation. Target ranks in the copy relation are selected according to the determined order. For each selected target rank, indication is made in a device adaptor assignment data structure of a source device adaptor and target device adaptor of the device adaptors to use to copy the source rank to the selected target rank indicated in the copy relation, wherein indication is made for the selected target ranks according to the determined order. The source ranks are copied to the selected target ranks using the source and target device adaptors indicated in the device adaptor assignment data structure. | 11-26-2015 |
20150339182 | FUZZY COUNTERS FOR NVS TO REDUCE LOCK CONTENTION - A system for data management in a computing storage environment includes a processor device, operable in the computing storage environment, that divides a plurality of counters tracking write and discard storage operations through Non Volatile Storage (NVS) space into a first, accurate, group and a second, fuzzy, group, where the first, accurate, group is updated on a per-operation basis, while the second, fuzzy, group is updated less frequently than the first, accurate, group. | 11-26-2015 |
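A toy illustration of splitting counters into an accurate group and a fuzzy group follows. The counter names, the refresh interval, and the single-threaded form are assumptions; this is a sketch of the general idea, not the patented mechanism.

class NvsCounters:
    """Two counter groups: 'accurate' counters updated on every operation,
    'fuzzy' counters refreshed only every refresh_every operations."""

    def __init__(self, refresh_every=100):
        self.accurate = {"writes_in_flight": 0}
        self.fuzzy = {"total_writes": 0, "total_discards": 0}
        self._pending = {"total_writes": 0, "total_discards": 0}
        self._ops = 0
        self._refresh_every = refresh_every

    def record(self, kind):                      # kind is "write" or "discard"
        self.accurate["writes_in_flight"] += 1 if kind == "write" else -1
        self._pending["total_" + kind + "s"] += 1
        self._ops += 1
        if self._ops >= self._refresh_every:     # infrequent, contended update
            for key, value in self._pending.items():
                self.fuzzy[key] += value
                self._pending[key] = 0
            self._ops = 0

c = NvsCounters(refresh_every=3)
for _ in range(4):
    c.record("write")
c.record("discard")
print(c.accurate, c.fuzzy)    # fuzzy totals lag by up to refresh_every operations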
20150347318 | THINLY PROVISIONED FLASH CACHE WITH SHARED STORAGE POOL - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches and managed tiered levels of storage, a Solid State Device (SSD) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments yet operates with data segments of a smaller size than, and within, the large data segments; if selected data segments cached in the lower-speed cache are determined to have become uniformly hot, they are migrated from the lower-speed cache to the SSD tier. | 12-03-2015 |
20150378909 | PERFORMING STAGING OR DESTAGING BASED ON THE NUMBER OF WAITING DISCARD SCANS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed. The controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache, in response to determining that more than the threshold number of discard scans are waiting to be performed. | 12-31-2015 |
20150378929 | SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY - A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from a host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache. | 12-31-2015 |
20160004456 | SELECTIVE SPACE RECLAMATION OF DATA STORAGE MEMORY EMPLOYING HEAT AND RELOCATION METRICS - Space of a data storage memory of a data storage memory system is reclaimed by determining heat metrics of data stored in the data storage memory; determining relocation metrics related to relocation of the data within the data storage memory; determining utility metrics of the data relating the heat metrics to the relocation metrics for the data; and making the data whose utility metric fails a utility metric threshold, available for space reclamation. | 01-07-2016 |
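The utility metric above relates heat to relocation cost. A minimal sketch follows, with an assumed ratio-style utility and an assumed threshold; the actual metric in the application may differ.

def reclaimable_extents(extents, utility_threshold=1.0):
    """Mark extents whose utility (heat relative to relocation cost) falls
    below a threshold as available for space reclamation."""
    victims = []
    for name, heat, relocations in extents:
        utility = heat / (relocations + 1)    # hot, rarely relocated data scores high
        if utility < utility_threshold:
            victims.append(name)
    return victims

extents = [("e1", 50, 4), ("e2", 2, 9), ("e3", 10, 1)]
print(reclaimable_extents(extents))           # only e2 falls below the threshold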
20160026578 | USE OF DIFFERING GRANULARITY HEAT MAPS FOR CACHING AND MIGRATION - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that, if a selected group is cached in the lower-speed cache and is determined to have become uniformly hot, the selected group is migrated from the lower-speed cache to a Solid State Drive (SSD) portion of the tiered levels of storage, while processing of data retained in the lower-speed cache is deferred until the selected group is fully migrated to the SSD portion. | 01-28-2016 |
20160055090 | ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records. | 02-25-2016 |
20160055091 | FUZZY COUNTERS FOR NVS TO REDUCE LOCK CONTENTION - A method for data management in a computing storage environment includes a processor device, operable in the computing storage environment, that divides a plurality of counters tracking write and discard storage operations through Non Volatile Storage (NVS) space into a first, accurate, group and a second, fuzzy, group, where the first, accurate, group is updated on a per-operation basis, while the second, fuzzy, group is updated less frequently than the first, accurate, group. | 02-25-2016 |
20160055092 | ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records. | 02-25-2016 |
20160085454 | PERFORMING ASYNCHRONOUS DISCARD SCANS WITH STAGING AND DESTAGING OPERATIONS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache. | 03-24-2016 |
20160085673 | CONCURRENT UPDATE OF DATA IN CACHE WITH DESTAGE TO DISK - Mechanisms for concurrent update of data in cache with destage of data from the cache to disk by a processor device. A second copy of the data is established in the cache and on a cache directory. A first copy of the data and the second copy of the data are adjacently ordered in the cache directory. One of the first and second copies is held for an update operation so as to include a latest data modification, while the remaining copy concurrently is used for a destage operation to disk. | 03-24-2016 |
20160098295 | INCREASED CACHE PERFORMANCE WITH MULTI-LEVEL QUEUES OF COMPLETE TRACKS - Exemplary method, system, and computer program product embodiments for increased cache performance using multi-level queues by a processor device are provided. The method includes distributing to each one of a plurality of central processing units (CPUs) workload operations for creating complete tracks from partial tracks, creating sub-queues of the complete tracks for distributing to each one of the CPUs, and creating demote scan tasks based on the workload of the CPUs. Additional system and computer program product embodiments are disclosed and provide related advantages. | 04-07-2016 |
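Per-CPU sub-queues of complete tracks, as summarized above, can be sketched as follows; the assemble helper is a hypothetical stand-in for track assembly, and the round-robin distribution is an assumption.

import multiprocessing
from collections import deque

def assemble(partial):
    """Hypothetical stand-in for combining partial tracks into a complete track."""
    return "complete-" + partial

def build_track_queues(partial_tracks, cpu_count=None):
    """Spread complete-track creation across CPUs and keep one sub-queue of
    complete tracks per CPU; demote scan tasks can then drain the sub-queues."""
    cpu_count = cpu_count or multiprocessing.cpu_count()
    sub_queues = [deque() for _ in range(cpu_count)]
    for i, partial in enumerate(partial_tracks):
        sub_queues[i % cpu_count].append(assemble(partial))
    return sub_queues

print(build_track_queues(["t0", "t1", "t2", "t3"], cpu_count=2))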
20160098296 | TASK POOLING AND WORK AFFINITY IN DATA PROCESSING - Mechanisms for improving computing system performance by a processor device. System resources are organized into a plurality of groups. Each of the plurality of groups is assigned one of a plurality of predetermined task pools. Each of the predetermined task pools has a plurality of tasks. Each of the plurality of groups corresponds to at least one physical boundary of the system resources such that a speed of an execution of those of the plurality of tasks for a particular one of the plurality of predetermined task pools is optimized by a placement of an association with the at least one physical boundary and the plurality of groups. | 04-07-2016 |