
Gupta, AZ

Amit K. Gupta, Scottsdale, AZ US

Patent application number | Description | Published
20160048442SYSTEMS AND METHODS FOR NEW PRODUCT INTEGRATION - The system integrates transaction account issuers, merchants, and consumers. A transaction account issuer may provide one or more APIs to merchants. The transaction account issuer may provide a sandbox environment for merchants to test applications with the APIs. The transaction account issuer may provide documentation to assist the merchants in integrating with the transaction account issuer. The transaction account issuer may notify the merchants of any changes to the documentation.02-18-2016

Anand Gupta, Phoenix, AZ US

Patent application number | Description | Published
20080210302 | Methods and apparatus for forming photovoltaic cells using electrospray - Methods of forming photovoltaic structures including nanoparticles are disclosed. The method includes electrospray deposition of nanoparticles. The nanoparticles can include TiO2. | 09-04-2008

Anand Gupta, Tucson, AZ US

Patent application number | Description | Published
20120200509 | HAPTICS EFFECT CONTROLLER ARCHITECTURE AND INSTRUCTION SET - A method for generating a desired haptics effect is provided. A haptics effect instruction is generated by a host processor responsive to a touch screen, where the haptics effect instruction corresponds to the desired haptics effect. This haptics effect instruction is received by a haptics driver, and a haptic profile is generated from the haptics effect instruction. The haptic profile includes at least one of a profile word, a move word, a wait/halt word, and a branch word, and a sine wave that corresponds to the desired haptics effect is generated from the haptic profile. | 08-09-2012

Ashish X. Gupta, Chandler, AZ US

Patent application number | Description | Published
20100164510LIQUID TIM DISPENSE AND REMOVAL METHOD AND ASSEMBLY - In some embodiments, a liquid TIM dispense and removal method and assembly is presented. In this regard, a method is introduced including loading an absorbent material of a thermal control unit with a liquid thermal interface material (TIM), pressing the absorbent material against an integrated circuit device causing the liquid TIM to be released, testing the integrated circuit device, and removing the absorbent material from against the integrated circuit device causing the liquid TIM to be reabsorbed. Other embodiments are also disclosed and claimed.07-01-2010
20110109335Direct liquid-contact micro-channel heat transfer devices, methods of temperature control for semiconductive devices, and processes of forming same - An apparatus to test a semiconductive device includes a base plane that holds at least one heat-transfer fluid unit cell. The at least one heat-transfer fluid unit cell includes a fluid supply structure including a supply-orifice cross section as well as a fluid return structure including a return-orifice cross section. The supply-orifice cross section is greater than the return-orifice cross section. A die interface is also included to be a liquid-impermeable material.05-12-2011

Debabrata Gupta, Scottsdale, AZ US

Patent application number | Description | Published
20120145442INTERCONNECT STRUCTURE - A microelectronic assembly includes a first surface and a first thin conductive element exposed at the first surface and having a face comprising first and second regions. A first conductive projection having a base connected to and covering the first region of the face extends to an end remote from the base. A first dielectric material layer covers the second region of the first thin element and contacts at least the base of the first conductive projection. The assembly further includes a second substrate having a second face and a second conductive projection extending away from the second face. A first fusible metal mass connects the first projection to the second projection and extends along an edge of the first projection towards the first dielectric material layer.06-14-2012
20150014850INTERCONNECT STRUCTURE - A microelectronic assembly includes first and second surfaces, a first thin conductive element, a first conductive projection, and a first fusible mass. The first thin conductive element includes a face that has first and second regions. The first conductive projection covers the first region of the first face. A barrier may be formed along a portion of the first region. The second face includes a second conductive projection that extends away therefrom. The first fusible metal mass connects the first conductive projection to the second conductive projection such that the first surface of the first face is oriented toward the second surface of the second substrate. The first mass extends along a portion of the first conductive projection to a location toward the first edge of the barrier. The barrier is disposed between the first thin element and the first metal mass.01-15-2015

Hoshin V. Gupta, Tucson, AZ US

Patent application number | Description | Published
20160055125METHODS AND SYSTEMS FOR DETERMINING GLOBAL SENSITIVITY OF A PROCESS - Systems and methods for determining the sensitivity of a model to a factor are disclosed. A directional variogram corresponding to a response surface of the model is determined. The variogram is then output as an indication of the sensitivity of the model.02-25-2016
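
To make the idea concrete, here is a minimal sketch, assuming a toy two-factor model and hand-picked lags, neither of which comes from the patent: the directional variogram gamma(h) = 0.5 * mean[(f(x + h*e_i) - f(x))^2] rises faster with lag h along a factor the model is more sensitive to, so comparing the curves ranks the factors.

```python
import numpy as np

def directional_variogram(f, base, factor_index, lags, n_samples=200, seed=0):
    """Estimate a directional variogram of model f along one factor.

    gamma(h) = 0.5 * mean[(f(x + h*e_i) - f(x))^2] over random base points x.
    A steeper variogram along factor i indicates higher sensitivity to it.
    """
    rng = np.random.default_rng(seed)
    dim = len(base)
    direction = np.zeros(dim)
    direction[factor_index] = 1.0
    # Random sample points around the nominal base point.
    xs = base + rng.uniform(-0.5, 0.5, size=(n_samples, dim))
    gamma = []
    for h in lags:
        diffs = np.array([f(x + h * direction) - f(x) for x in xs])
        gamma.append(0.5 * np.mean(diffs ** 2))
    return np.array(gamma)

if __name__ == "__main__":
    # Toy model: much more sensitive to x0 than to x1.
    model = lambda x: 10.0 * x[0] ** 2 + 0.1 * x[1]
    lags = [0.05, 0.1, 0.2, 0.4]
    for i in range(2):
        print(f"factor {i}:", directional_variogram(model, np.zeros(2), i, lags))
```
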

Lokesh M. Gupta, Tucson, AZ US

Patent application number | Description | Published
20100318744DIFFERENTIAL CACHING MECHANISM BASED ON MEDIA I/O SPEED - A method for allocating space in a cache based on media I/O speed is disclosed herein. In certain embodiments, such a method may include storing, in a read cache, cache entries associated with faster-responding storage devices and cache entries associated with slower-responding storage devices. The method may further include implementing an eviction policy in the read cache. This eviction policy may include demoting, from the read cache, the cache entries of faster-responding storage devices faster than the cache entries of slower-responding storage devices, all other variables being equal. In certain embodiments, the eviction policy may further include demoting, from the read cache, cache entries having a lower read-hit ratio faster than cache entries having a higher read-hit ratio, all other variables being equal. A corresponding computer program product and apparatus are also disclosed and claimed herein.12-16-2010
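
As a rough illustration of the eviction policy described above, the sketch below orders read-cache entries so that, all else equal, entries backed by faster-responding devices and entries with lower read-hit ratios are demoted first. The scoring and data layout are assumptions made for the example, not the patented mechanism.

```python
from dataclasses import dataclass

@dataclass
class CacheEntry:
    track_id: int
    backing_is_fast: bool   # e.g. SSD-backed vs. HDD-backed
    hits: int = 0
    lookups: int = 0

    @property
    def read_hit_ratio(self) -> float:
        return self.hits / self.lookups if self.lookups else 0.0

def demotion_order(entries):
    """Sort entries so the first items are demoted first.

    Entries on faster-responding devices are cheap to re-read, so they go
    first; among comparable entries, lower read-hit ratios go first.
    """
    return sorted(entries, key=lambda e: (not e.backing_is_fast, e.read_hit_ratio))

# Example: demote SSD-backed, rarely-hit tracks before HDD-backed, hot tracks.
entries = [
    CacheEntry(1, backing_is_fast=True,  hits=1, lookups=10),
    CacheEntry(2, backing_is_fast=False, hits=9, lookups=10),
    CacheEntry(3, backing_is_fast=True,  hits=8, lookups=10),
    CacheEntry(4, backing_is_fast=False, hits=2, lookups=10),
]
print([e.track_id for e in demotion_order(entries)])  # -> [1, 3, 4, 2]
```
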
20100325356NONVOLATILE STORAGE THRESHOLDING - Embodiments for facilitating data transfer between a nonvolatile storage (NVS) write cache and a pool of target storage devices are provided. Each target storage device in the pool of target storage devices is determined as one of a hard disk drive (HDD) and a solid-state drive (SSD) device, and classified into one of a SSD rank group and a HDD rank group. If no data is received in the NVS write cache for a predetermined time to be written to a target storage device classified in the SSD rank group, a threshold of available space in the NVS write cache is set to allocate at least a majority of the available space to the HDD rank group. Upon receipt of a write request for the SSD rank group, the threshold of the available space is reduced to allocate a greater portion of the available space to the SSD rank group.12-23-2010
20110087837SECONDARY CACHE FOR WRITE ACCUMULATION AND COALESCING - A method for efficiently using a large secondary cache is disclosed herein. In certain embodiments, such a method may include accumulating, in a secondary cache, a plurality of data tracks. These data tracks may include modified data and/or unmodified data. The method may determine if a subset of the plurality of data tracks makes up a full stride. In the event the subset makes up a full stride, the method may destage the subset from the secondary cache. By destaging full strides, the method reduces the number of disk operations that are required to destage data from the secondary cache. A corresponding computer program product and apparatus are also disclosed and claimed herein.04-14-2011
20110191534DYNAMIC MANAGEMENT OF DESTAGE TASKS IN A STORAGE CONTROLLER - Method, system, and computer program product embodiments for facilitating data transfer from a write cache and NVS via a device adapter to a pool of storage devices by a processor or processors are provided. The processor(s) adaptively varies the destage rate based on the current occupancy of the NVS for a particular storage device and stage activity related to that storage device. The stage activity includes one or more of the storage device stage activity, device adapter stage activity, device adapter utilized bandwidth and the read/write speed of the storage device. These factors are generally associated with read response time in the event of a cache miss and not ordinarily associated with dynamic management of the destage rate. This combination maintains the desired overall occupancy of the NVS while improving response time performance.08-04-2011
20110196987COMPRESSION ON THIN PROVISIONED VOLUMES USING EXTENT BASED MAPPING - Method, system, and computer program product embodiments for facilitating data compression are provided. A set of logical extents, each having compressed logical tracks of data, is mapped to a head physical extent and, if the head physical extent is determined to have been filled, to at least one overflow extent having spatial proximity to the head physical extent. Pursuant to at least one subsequent write operation and destage operation, the at least one subsequent write operation and destage operation determined to be associated with the head physical extent, the write operation is mapped to one of the head physical extent, the at least one overflow extent, and an additional extent having spatial proximity to the at least one overflow extent.08-11-2011
20110202708 | Integrating A Flash Cache Into Large Storage Systems - An I/O enclosure module is provided with one or more I/O enclosures having a plurality of slots for receiving electronic devices. A host adapter is connected to a first slot of the I/O enclosure module and is configured to connect a host to the I/O enclosure. A device adapter is connected to a second slot of the I/O enclosure module and is configured to connect a storage device to the I/O enclosure module. A flash cache is connected to a third slot of the I/O enclosure module and includes a flash-based memory configured to cache data associated with data requests handled through the I/O enclosure module. A primary processor complex manages data requests handled through the I/O enclosure module by communicating with the host adapter, device adapter, and flash cache to manage the data requests. | 08-18-2011
20120079187MANAGEMENT OF WRITE CACHE USING STRIDE OBJECTS - Method, system, and computer program product embodiments for, in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit, identifying working data on a stride basis by a processor device are provided. A multi-update bit is established for each stride in a modified cache. The multi-update bit is adapted to indicate at least one track in a working set. A schedule of destage scans is configured based on a plurality of levels of urgency. A destage operation is performed based on at least one of a number of strides examined by the destage scans, whether the multi-update bit is set, and whether an emergency level of the plurality of levels of urgency is active.03-29-2012
20120079199INTELLIGENT WRITE CACHING FOR SEQUENTIAL TRACKS - Method, system, and computer program product embodiments for, in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit, write caching for sequential tracks by a processor device are provided. If a first track is determined to be sequential, and an earlier track is also determined to be sequential, a temporal bit associated with the earlier track is cleared to allow for destage of data of the earlier track. If a temporal bit for one of a plurality of additional tracks in one of a plurality of strides in a modified cache is determined to be not set, a stride associated with the one of the plurality of additional tracks is selected for a destage operation. If the NVS exceeds a predetermined storage threshold, a predetermined one of the plurality of strides is selected for the destage operation.03-29-2012
20120089795MULTIPLE INCREMENTAL VIRTUAL COPIES - Provided are techniques for, in response to establishing each incremental virtual copy from a source to a target, creating a target change recording structure for the target. While performing destage to a source data block at the source, it is determined that there is at least one incremental virtual copy target for this source data block. For each incremental virtual copy relationship where the source data block is newer than the incremental virtual copy relationship and an indicator is set in a target inheritance structure on the target for a corresponding target data block, the source data block is copied to each corresponding target data block, and an indicator is set in each target change recording structure on each target for the target data block corresponding to the source data block being destaged.04-12-2012
20120151140 | SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE - Systems and methods for destaging storage tracks from cache are provided. One system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes writing data to the plurality of storage tracks and incrementing the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track. The method further includes scanning each of the storage tracks in each of multiple scan cycles, decrementing each multi-bit counter each scan cycle, and destaging each storage track including a zero count. Also provided are physical computer storage mediums including a computer program product for performing the above method. | 06-14-2012
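
The counter scheme in this abstract can be pictured as a small clock-style sweep; the sketch below is one possible reading: each write increments a per-track counter up to a cap, each scan cycle decrements it, and a track whose counter reaches zero is destaged. The counter width and class names are illustrative assumptions.

```python
COUNTER_MAX = 3  # assume a 2-bit counter per storage track

class WriteCache:
    def __init__(self):
        self.counters = {}   # track_id -> multi-bit counter
        self.dirty = {}      # track_id -> data

    def write(self, track_id, data):
        self.dirty[track_id] = data
        # Each write bumps the counter, saturating at the maximum.
        self.counters[track_id] = min(COUNTER_MAX, self.counters.get(track_id, 0) + 1)

    def scan_cycle(self, destage):
        """One scan: decrement every counter, destage tracks that reach zero."""
        for track_id in list(self.counters):
            self.counters[track_id] -= 1
            if self.counters[track_id] <= 0:
                destage(track_id, self.dirty.pop(track_id))
                del self.counters[track_id]

cache = WriteCache()
cache.write(7, b"a"); cache.write(7, b"b"); cache.write(9, b"c")
for _ in range(3):
    cache.scan_cycle(lambda t, d: print("destage track", t))
```
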
20120151147SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS - Systems and methods for managing destage conflicts in cache are provided. One system includes a cache partitioned into multiple ranks configured to store multiple storage tracks and a processor coupled to the cache. The processor is configured to perform the following method. One method includes allocating an amount of storage space in the cache to each rank and monitoring a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank. The method further includes destaging storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank. Also provided are physical computer storage mediums including code that, when executed by a processor, cause the processor to perform the above method.06-14-2012
20120151148SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Systems and methods for background destaging storage tracks from cache when one or more hosts are idle are provided. One system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. Also provided are physical computer storage mediums including a computer program product for performing the above method.06-14-2012
20120151151SYSTEMS AND METHODS FOR MANAGING CACHE DESTAGE SCAN TIMES - Systems and methods for managing destage scan times in a cache are provided. One system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. Physical computer storage mediums including a computer program product for performing the above method are also provided.06-14-2012
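
A minimal sketch of the two-thread split described above, assuming write-cache occupancy as the input that drives the target (the patent does not specify the formula): one thread keeps recomputing a desired scan time, while a second thread paces its sweeps of the tracks toward that target.

```python
import threading, time

desired_scan_time = 10.0      # seconds; target shared between the two threads
lock = threading.Lock()
stop = threading.Event()

def planner(occupancy_fn):
    """First thread: continually derive the desired scan time from occupancy."""
    global desired_scan_time
    while not stop.is_set():
        occ = occupancy_fn()                                   # 0.0 (empty) .. 1.0 (full)
        with lock:
            desired_scan_time = max(1.0, 10.0 * (1.0 - occ))   # fuller cache, faster scans
        time.sleep(0.05)

def scanner(tracks):
    """Second thread: pace each sweep of the tracks to the current target."""
    while not stop.is_set():
        with lock:
            target = desired_scan_time
        per_track_pause = target / max(1, len(tracks))
        for _track in tracks:
            # ...examine the track here...
            time.sleep(min(per_track_pause, 0.005))   # shortened so the demo ends quickly

threading.Thread(target=planner, args=(lambda: 0.8,), daemon=True).start()
threading.Thread(target=scanner, args=(list(range(50)),), daemon=True).start()
time.sleep(0.5)
stop.set()
```
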
20120185648STORAGE IN TIERED ENVIRONMENT FOR COLDER DATA SEGMENTS - Exemplary method, system, and computer program embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap.07-19-2012
20120191904SECONDARY CACHE FOR WRITE ACCUMULATION AND COALESCING - A method for efficiently using a large secondary cache is disclosed herein. In certain embodiments, such a method may include accumulating, in a secondary cache, a plurality of data tracks. These data tracks may include modified data and/or unmodified data. The method may determine if a subset of the plurality of data tracks makes up a full stride. In the event the subset makes up a full stride, the method may destage the subset from the secondary cache. By destaging full strides, the method reduces the number of disk operations that are required to destage data from the secondary cache. A corresponding computer program product and apparatus are also disclosed herein.07-26-2012
20120198148ADAPTIVE PRESTAGING IN A STORAGE CONTROLLER - In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount, may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Still further, a cache prestaging operation in accordance with further aspects may decrease one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from prestaged tracks being demoted before they are used. Conversely, a cache prestaging operation in accordance with another aspect may increase one or both of the prestage trigger and the prestage amount as a function of the drive speed in circumstances such as a cache miss which may have resulted from waiting for a stage to complete. In yet another aspect, the prestage trigger may not be limited by the prestage amount. Instead, the pre-stage trigger may be permitted to expand as conditions warrant it by prestaging additional tracks and thereby effectively increasing the potential range for the prestage trigger. Other features and aspects may be realized, depending upon the particular application.08-02-2012
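
The adjustment logic can be sketched as follows: after a cache miss, the prestage trigger and amount are nudged down if the miss suggests prestaged tracks were demoted before use, or nudged up if the read had to wait for a stage to complete, with the step scaled by drive speed. The constants and the miss classification below are assumptions for illustration only.

```python
def adjust_prestage(trigger, amount, miss_cause, drive_speed_factor):
    """Return updated (trigger, amount) after a cache miss.

    miss_cause: "demoted_before_use" -> we prestaged too aggressively
                "waited_for_stage"   -> we prestaged too conservatively
    drive_speed_factor: >1 for fast drives, <1 for slow drives (assumed scale).
    """
    step = max(1, round(2 * drive_speed_factor))
    if miss_cause == "demoted_before_use":
        trigger = max(1, trigger - step)
        amount = max(1, amount - step)
    elif miss_cause == "waited_for_stage":
        trigger += step          # the trigger is not capped by the amount
        amount += step
    return trigger, amount

trig, amt = 4, 8
trig, amt = adjust_prestage(trig, amt, "waited_for_stage", drive_speed_factor=1.5)
print(trig, amt)   # 7 11 with these assumed constants
```
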
20120198150ASSIGNING DEVICE ADAPTORS AND BACKGROUND TASKS TO USE TO COPY SOURCE EXTENTS TO TARGET EXTENTS IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for assigning device adaptors and background tasks to use to copy source extents to target extents in a copy relationship. A relation is provided of a plurality of source extents in source ranks to copy to a plurality of target extents in target ranks in the storage system. One target rank in the relation is used to determine an order in which the target ranks in the relation are selected to register for copying. For each selected target rank in the relation selected according to the determined order, an iteration of a registration operation is performed to register the selected target rank and a selected source rank copied to the selected target rank in the relation. The registration operation comprises indicating in a device adaptor assignment data structure a source device adaptor and target device adaptor to use to copy the selected rank to the selected target rank and adding an entry to a priority queue for the relation for the selected target rank. The selected source rank is copied to the selected target rank using as the source and target device adaptors indicated in the device adaptor assignment data structure for the selected target rank in response to processing the entry in the priority queue added to the priority queue for the selected target rank.08-02-2012
20120203983COMPRESSION ON THIN PROVISIONED VOLUMES USING EXTENT BASED MAPPING - A set of logical extents, each having compressed logical tracks of data, is mapped to a head physical extent and, if the head physical extent is determined to have been filled, to at least one overflow extent having spatial proximity to the head physical extent. Pursuant to at least one subsequent write operation and destage operation, the at least one subsequent write operation and destage operation determined to be associated with the head physical extent, the write operation is mapped to one of the head physical extent, the at least one overflow extent, and an additional extent having spatial proximity to the at least one overflow extent.08-09-2012
20120216009 | SOURCE-TARGET RELATIONS MAPPING - A data preservation function is provided which, in one embodiment, includes mapping in a plurality of maps for a target storage device, map extent ranges of each map, to corresponding target extent ranges of storage locations on the target storage device. Usage of a particular map extent range by a relationship between a source extent range of storage locations on a source storage device containing data to be preserved in the source extent range, and the target extent range mapped to the particular map extent range, may be indicated by the map. In another aspect, in response to receipt of a data preservation command, a data preservation operation is performed including determining whether a map indicates availability of a map extent range mapped to the identified target extent range. Upon determining that a particular map indicates availability of a map extent range mapped to the identified target extent range, a relationship between the identified source extent range and the identified target extent range is established. In yet another aspect, upon determining that no map indicates availability of a map extent range mapped to the identified target extent range, establishing a relationship between the identified source extent range and the identified target extent range may be delayed until it is determined that a particular map indicates availability of a map extent range mapped to the identified target extent range. Other features and aspects may be realized, depending upon the particular application. | 08-23-2012
20120221823MULTIPLE INCREMENTAL VIRTUAL COPIES - Provided are techniques for, in response to establishing each incremental virtual copy from a source to a target, creating a target change recording structure for the target. While performing destage to a source data block at the source, it is determined that there is at least one incremental virtual copy target for this source data block. For each incremental virtual copy relationship where the source data block is newer than the incremental virtual copy relationship and an indicator is set in a target inheritance structure on the target for a corresponding target data block, the source data block is copied to each corresponding target data block, and an indicator is set in each target change recording structure on each target for the target data block corresponding to the source data block being destaged.08-30-2012
20120233121DELETING RELATIONS BETWEEN SOURCES AND SPACE-EFFICIENT TARGETS IN MULTI-TARGET ARCHITECTURES - A method for deleting a relation between a source and a target in a multi-target architecture is described. The multi-target architecture includes a source and multiple space-efficient (SE) targets mapped thereto. In one embodiment, such a method includes initially identifying a relation for deletion from the multi-target architecture. A space-efficient (SE) target associated with the relation is then identified. A mapping structure maps data in logical tracks of the SE target to physical tracks of a repository. The method then identifies a sibling SE target that inherits data from the SE target. Once the SE target and the sibling SE target are identified, the method modifies the mapping structure to map the data in the physical tracks of the repository to the logical tracks of the sibling SE target. The relation is then deleted between the source and the SE target. A corresponding computer program product is also described herein.09-13-2012
20120233136DELETING RELATIONS BETWEEN SOURCES AND SPACE-EFFICIENT TARGETS IN MULTI-TARGET ARCHITECTURES - A method for deleting a relation between a source and a target in a multi-target architecture is described. The multi-target architecture includes a source and multiple space-efficient (SE) targets mapped thereto. In one embodiment, such a method includes initially identifying a relation for deletion from the multi-target architecture. A space-efficient (SE) target associated with the relation is then identified. A mapping structure maps data in logical tracks of the SE target to physical tracks of a repository. The method then identifies a sibling SE target that inherits data from the SE target. Once the SE target and the sibling SE target are identified, the method modifies the mapping structure to map the data in the physical tracks of the repository to the logical tracks of the sibling SE target. The relation is then deleted between the source and the SE target.09-13-2012
20120233404DELETING RELATIONS IN MULTI-TARGET, POINT-IN-TIME-COPY ARCHITECTURES WITH DATA DEDUPLICATION - A method for deleting a relation between a source and a target in a multi-target architecture is described. The multi-target architecture includes a source and multiple targets mapped thereto. In one embodiment, such a method includes initially identifying a relation for deletion from the multi-target architecture. A target associated with the relation is then identified. The method then identifies a sibling target that inherits data from the target. Once the target and the sibling target are identified, the method copies the data from the target to the sibling target. The relation between the source and the target is then deleted. A corresponding computer program product is also disclosed and claimed herein.09-13-2012
20120233408INTELLIGENT WRITE CACHING FOR SEQUENTIAL TRACKS - Write caching for sequential tracks is performed by a processor device in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit. If a first track is determined to be sequential, and an earlier track is also determined to be sequential, a temporal bit associated with the earlier track is cleared to allow for destage of data of the earlier track. If a temporal bit for one of a plurality of additional tracks in one of a plurality of strides in a modified cache is determined to be not set, a stride associated with the one of the plurality of additional tracks is selected for a destage operation. If the NVS exceeds a predetermined storage threshold, a predetermined one of the plurality of strides is selected for the destage operation.09-13-2012
20120233416MULTI-TARGET, POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a source volume in a multi-target architecture is described. The multi-target architecture includes a source volume and multiple target volumes mapped thereto. In one embodiment, such a method includes copying data in a track of the source volume to a corresponding track of a target volume (target x). The method enables one or more sibling target volumes (siblings) mapped to the source volume to inherit the data from the target x. When the data is successfully copied to the target x, the method performs a write to the track of the source volume. Other methods for reading and writing data to volumes in the multi-target architecture are also described.09-13-2012
20120233421CYCLIC POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a volume x in a cyclic point-in-time-copy architecture is described. In one embodiment, such a method includes determining whether the volume x has a child volume. The method then determines whether the target bit maps (TBMs) of both the volume x and the child volume are set. If the TBMs are set, the method finds a higher source (HS) volume from which to copy the desired data to the child volume. Once the HS volume is found, the method determines whether the HS volume and the child volume are the same volume. If the HS volume and the child volume are not the same volume, the method copies the data from the HS volume to the child volume. The method then performs the write to the volume x.09-13-2012
20120233429CASCADED, POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a volume x in a cascaded architecture is described. In one embodiment, such a method includes determining whether the volume x has a child volume, wherein each of the volume x and the child volume have a target bit map (TBM) associated therewith. The method then determines whether the TBMs of both the volume x and the child volume are set. If the TBMs are set, the method finds a higher source (HS) volume from which to copy the desired data to the child volume. Finding the HS volume includes travelling up the cascaded architecture until the source of the data is found. Once the HS volume is found, the method copies the data from the HS volume to the child volume and performs the write to the volume x. A method for performing a read is also disclosed herein.09-13-2012
20120233430CYCLIC POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a volume x in a cyclic point-in-time-copy architecture is described. In one embodiment, such a method includes determining whether the volume x has a child volume. The method then determines whether the target bit maps (TBMs) of both the volume x and the child volume are set. If the TBMs are set, the method finds a higher source (HS) volume from which to copy the desired data to the child volume. Once the HS volume is found, the method determines whether the HS volume and the child volume are the same volume. If the HS volume and the child volume are not the same volume, the method copies the data from the HS volume to the child volume. The method then performs the write to the volume x. A corresponding computer program product is also described.09-13-2012
20120254122NEAR CONTINUOUS SPACE-EFFICIENT DATA PROTECTION - A method for providing rolling continuous data protection of source data is disclosed. In one embodiment, such a method includes enabling a user to select source data and establish a first interval when point-in-time copies of the source data are generated. The method further enables the user to specify a first number of point-in-time copies to retain at the first interval. The method further enables the user to specify a second number of point-in-time copies to retain at a second interval, wherein the second interval is a (n≧2) multiple of the first interval. The method further enables the user to specify a third number of point-in-time copies to retain at a third interval, wherein the third interval is a (n≧2) multiple of the second interval. A corresponding apparatus and computer program product are also disclosed.10-04-2012
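
The nested retention policy reads naturally as a pruning rule over existing point-in-time copies. The routine below is one interpretation, with example timestamps and multipliers that are not from the patent: keep a given number of the newest copies at the base interval, a further number at an n-times-coarser interval, and a further number at a yet coarser interval.

```python
def copies_to_keep(copy_times, base, keep1, mult2, keep2, mult3, keep3):
    """Select point-in-time copies to retain under a rolling three-tier policy.

    copy_times: ascending creation times (same unit as `base`).
    Tier 1 keeps the newest `keep1` copies on the base interval,
    tier 2 keeps `keep2` copies on a (mult2 * base) interval,
    tier 3 keeps `keep3` copies on a (mult3 * mult2 * base) interval.
    """
    keep = set()
    for interval, count in ((base, keep1),
                            (base * mult2, keep2),
                            (base * mult2 * mult3, keep3)):
        kept, last_bucket = 0, None
        for t in reversed(copy_times):          # walk from newest to oldest
            bucket = t // interval
            if bucket != last_bucket:
                keep.add(t)
                kept, last_bucket = kept + 1, bucket
            if kept >= count:
                break
    return sorted(keep)

times = list(range(0, 48, 2))                   # a copy every 2 hours for 2 days
print(copies_to_keep(times, base=2, keep1=6, mult2=4, keep2=3, mult3=3, keep3=2))
```
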
20120254539SYSTEMS AND METHODS FOR MANAGING CACHE DESTAGE SCAN TIMES - A system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time.10-04-2012
20120254544SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS - A system includes a cache partitioned into multiple ranks configured to store multiple storage tracks and a processor coupled to the cache. The processor is configured to perform the following method. One method includes allocating an amount of storage space in the cache to each rank and monitoring a current amount of storage space used by each rank with respect to the amount of storage space allocated to each respective rank. The method further includes destaging storage tracks from each rank until the current amount of storage space used by each respective rank is equal to a predetermined minimum amount of storage space with respect to the amount of storage space allocated to each rank.10-04-2012
20120254545SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - A system includes a write cache configured to store a plurality of storage tracks and configured to be coupled to one or more hosts, and a processor coupled to the write cache. The processor includes code that, when executed by the processor, causes the processor to perform the method below. One method includes monitoring the write cache for write operations from the host(s) and determining if the host(s) is/are idle based on monitoring the write cache for write operations from the host(s). The storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle.10-04-2012
20120254547MANAGING METADATA FOR DATA IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for managing metadata for data in a copy relationship copied from a source storage to a target storage. Information is maintained on a copy relationship of source data in the source storage and target data in the target storage. The source data is copied from the source storage to the cache to copy to target data in the target storage indicated in the copy relationship. Target metadata is generated for the target data comprising the source data copied to the cache. An access request to requested target data comprising the target data in the cache is processed and access is provided to the requested target data in the cache. A determination is made as to whether the requested target data in the cache has been destaged to the target storage. The target metadata for the requested target data in the target storage is discarded in response to determining that the requested target data in the cache has not been destaged to the target storage.10-04-2012
20120260043FABRICATING KEY FIELDS - Exemplary methods, computer systems, and computer program products for fabricating key fields by a processor device in a computer environment are provided. In one embodiment, the computer environment is configured for, as an alternative to reading Count-Key-Data (CKD) data in order to change the key field, providing a hint to fabricate a new key field, thereby overwriting a previous key field and updating the CKD data.10-11-2012
20120260044 | SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE - A system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes writing data to the plurality of storage tracks and incrementing the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track. The method further includes scanning each of the storage tracks in each of multiple scan cycles, decrementing each multi-bit counter each scan cycle, and destaging each storage track including a zero count. | 10-11-2012
20120265766COMPRESSION ON THIN PROVISIONED VOLUMES USING EXTENT BASED MAPPING - For facilitating data compression, a set of logical extents, each having compressed logical tracks of data, is mapped to a head physical extent and, if the head physical extent is determined to have been filled, to at least one overflow extent having spatial proximity to the head physical extent. Pursuant to at least one subsequent write operation and destage operation, the at least one subsequent write operation and destage operation determined to be associated with the head physical extent, the write operation is mapped to one of the head physical extent, the at least one overflow extent, and an additional extent having spatial proximity to the at least one overflow extent.10-18-2012
20120265933STRIDE BASED FREE SPACE MANAGEMENT ON COMPRESSED VOLUMES - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks, and a determination is made as to whether all of the one or more tracks can be stored in one selected stride of the plurality of strides. In response to determining that all of the one or more tracks can be stored in the one selected stride, the one or more tracks are written in the one selected stride of the plurality of strides.10-18-2012
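
A toy version of the placement test described above, with made-up sizes and free-space figures: given the compressed sizes of the tracks in a write request, check whether any single stride still has room for all of them before falling back to splitting the write.

```python
def pick_stride(free_space_by_stride, compressed_track_sizes):
    """Return the id of a stride that can hold every track, or None."""
    need = sum(compressed_track_sizes)
    for stride_id, free in free_space_by_stride.items():
        if free >= need:
            return stride_id
    return None

free = {0: 300, 1: 700, 2: 950}          # remaining bytes per stride (example values)
tracks = [120, 250, 310]                 # compressed sizes of the tracks to write
stride = pick_stride(free, tracks)
if stride is not None:
    print(f"write all {len(tracks)} tracks into stride {stride}")
else:
    print("no single stride fits; split the write across strides")
```
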
20120265934WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.10-18-2012
20120300329MAGNETIC DISK DRIVE USING A NON-VOLATILE STORAGE DEVICE AS CACHE FOR MODIFIED TRACKS - Provided are a computer program product, system, and method for a magnetic disk drive. The disk drive has at least one disk platter having at least one recordable disk surface having an areal density of at least 200 gigabits per square inch. Either a diameter of the at least one disk platter is greater than 3.5 inches or the at least one disk platter rotates at less than 5400 RPMs. A read/write head reads and writes tracks of data with respect to the at least one disk surface. Modified tracks from write requests to write to the at least one disk surface on the at least one disk platter are cached in a non-volatile storage device for caching modified tracks. Modified tracks are cached in the non-volatile storage device to later destage to the at least one disk surface.11-29-2012
20120300336MAGNETIC DISK DRIVE USING A NON-VOLATILE STORAGE DEVICE AS CACHE FOR MODIFIED TRACKS - Provided are a computer program product, system, and method for a magnetic disk drive. The disk drive has at least one disk platter having at least one recordable disk surface having an areal density of at least 200 gigabits per square inch. Either a diameter of the at least one disk platter is greater than 3.5 inches or the at least one disk platter rotates at less than 5400 RPMs. A read/write head reads and writes tracks of data with respect to the at least one disk surface. Modified tracks from write requests to write to the at least one disk surface on the at least one disk platter are cached in a non-volatile storage device for caching modified tracks. Modified tracks are cached in the non-volatile storage device to later destage to the at least one disk surface.11-29-2012
20120303861POPULATING STRIDES OF TRACKS TO DEMOTE FROM A FIRST CACHE TO A SECOND CACHE - Provided are a computer program product, system, and method for populating strides of tracks to demote from a first cache to a second cache. A first cache maintains modified and unmodified tracks from a storage system subject to Input/Output (I/O) requests. A determination is made to demote tracks from the first cache. A determination is made as to whether there are enough tracks ready to demote to form a stride, wherein tracks are written to a second cache in strides defined for a Redundant Array of Independent Disk (RAID) configuration. A stride is populated with tracks ready to demote in response to determining that there are enough tracks ready to demote to form the stride. The stride of tracks, to demote from the first cache, are promoted to the second cache. The tracks in the second cache that are modified are destaged to the storage system.11-29-2012
20120303862CACHING DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE - Provided are a computer program product, system, and method for caching data in a storage system having multiple caches. A sequential access storage device includes a sequential access storage medium and a non-volatile storage device integrated in the sequential access storage device, received modified tracks are cached in the non-volatile storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A spatial index indicates the modified tracks in the non-volatile storage device in an ordering based on their physical location in the sequential access storage medium. The modified tracks are destaged from the non-volatile storage device by comparing a current position of a write head to physical locations of the modified tracks on the sequential access storage medium indicated in the spatial index to select a modified track to destage from the non-volatile storage device to the storage device.11-29-2012
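
The spatial-index selection can be sketched as picking, among the modified tracks, the one whose physical location is closest ahead of the current write-head position, wrapping around at the end of the medium. The layout below is an assumption for the example, not the claimed data structure.

```python
import bisect

def next_track_to_destage(spatial_index, head_position):
    """Pick the modified track closest ahead of the write head, wrapping around.

    spatial_index: list of (physical_location, track_id) tuples, sorted by location.
    """
    if not spatial_index:
        return None
    locations = [loc for loc, _ in spatial_index]
    i = bisect.bisect_left(locations, head_position)
    if i == len(locations):          # nothing ahead of the head: wrap to the start
        i = 0
    return spatial_index[i][1]

index = sorted([(120, "t3"), (40, "t1"), (300, "t7"), (210, "t5")])
print(next_track_to_destage(index, head_position=150))   # -> t5
print(next_track_to_destage(index, head_position=350))   # -> t1 (wrap-around)
```
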
20120303863USING AN ATTRIBUTE OF A WRITE REQUEST TO DETERMINE WHERE TO CACHE DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE - Provided are a computer program product, system, and method for using an attribute of a write request to determine where to cache data in a storage system having multiple caches including non-volatile storage cache in a sequential access storage device. Received modified tracks are cached in the non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks. A write request having modified tracks is received. A determination is made as to whether an attribute of the received write request satisfies a condition. The received modified tracks for the write request are cached in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition. A destage request is added to a request queue for the received write request having the determined attribute not satisfying the condition.11-29-2012
20120303864 | CACHE MANAGEMENT OF TRACKS IN A FIRST CACHE AND A SECOND CACHE FOR A STORAGE - Provided are a computer program product, system, and method for cache management of tracks in a first cache and a second cache for a storage. The first cache maintains modified and unmodified tracks in the storage subject to Input/Output (I/O) requests. Modified and unmodified tracks are demoted from the first cache. The modified and the unmodified tracks demoted from the first cache are promoted to the second cache. The unmodified tracks demoted from the second cache are discarded. The modified tracks in the second cache that are at proximate physical locations on the storage device are grouped and the grouped modified tracks are destaged from the second cache to the storage device. | 11-29-2012
20120303869 | HANDLING HIGH PRIORITY REQUESTS IN A SEQUENTIAL ACCESS STORAGE DEVICE HAVING A NON-VOLATILE STORAGE CACHE - Modified tracks for write requests to a sequential access storage medium in a sequential access storage device are cached in a non-volatile storage, which is a faster access device than the sequential access storage medium. A request queue includes destage requests to destage the modified tracks in the non-volatile storage device to the sequential access storage medium and read requests to access read requested tracks from the sequential access storage medium. A comparison is made of a current position of a read/write mechanism with respect to physical locations on the sequential access storage medium of the tracks subject to the destage requests indicated in the request queue. A determination is made of one of the destage requests to process based on the comparison. The modified track for the determined destage request is written from the non-volatile storage device to the sequential access storage medium. | 11-29-2012
20120303872 | CACHE MANAGEMENT OF TRACKS IN A FIRST CACHE AND A SECOND CACHE FOR A STORAGE - Provided are a computer program product, system, and method for cache management of tracks in a first cache and a second cache for a storage. The first cache maintains modified and unmodified tracks in the storage subject to Input/Output (I/O) requests. Modified and unmodified tracks are demoted from the first cache. The modified and the unmodified tracks demoted from the first cache are promoted to the second cache. The unmodified tracks demoted from the second cache are discarded. The modified tracks in the second cache that are at proximate physical locations on the storage device are grouped and the grouped modified tracks are destaged from the second cache to the storage device. | 11-29-2012
20120303875POPULATING STRIDES OF TRACKS TO DEMOTE FROM A FIRST CACHE TO A SECOND CACHE - Provided are a computer program product, system, and method for populating strides of tracks to demote from a first cache to a second cache. A first cache maintains modified and unmodified tracks from a storage system subject to Input/Output (I/O) requests. A determination is made to demote tracks from the first cache. A determination is made as to whether there are enough tracks ready to demote to form a stride, wherein tracks are written to a second cache in strides defined for a Redundant Array of Independent Disk (RAID) configuration. A stride is populated with tracks ready to demote in response to determining that there are enough tracks ready to demote to form the stride. The stride of tracks, to demote from the first cache, are promoted to the second cache. The tracks in the second cache that are modified are destaged to the storage system.11-29-2012
20120303876CACHING DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE - Provided are a computer program product, system, and method for caching data in a storage system having multiple caches. A sequential access storage device includes a sequential access storage medium and a non-volatile storage device integrated in the sequential access storage device, received modified tracks are cached in the non-volatile storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A spatial index indicates the modified tracks in the non-volatile storage device in an ordering based on their physical location in the sequential access storage medium. The modified tracks are destaged from the non-volatile storage device by comparing a current position of a write head to physical locations of the modified tracks on the sequential access storage medium indicated in the spatial index to select a modified track to destage from the non-volatile storage device to the storage device.11-29-2012
20120303877USING AN ATTRIBUTE OF A WRITE REQUEST TO DETERMINE WHERE TO CACHE DATA IN A STORAGE SYSTEM HAVING MULTIPLE CACHES INCLUDING NON-VOLATILE STORAGE CACHE IN A SEQUENTIAL ACCESS STORAGE DEVICE - Provided are a computer program product, system, and method for using an attribute of a write request to determine where to cache data in a storage system having multiple caches including non-volatile storage cache in a sequential access storage device. Received modified tracks are cached in the non-volatile storage device integrated with the sequential access storage device in response to determining to cache the modified tracks. A write request having modified tracks is received. A determination is made as to whether an attribute of the received write request satisfies a condition. The received modified tracks for the write request are cached in the non-volatile storage device in response to determining that the determined attribute does not satisfy the condition. A destage request is added to a request queue for the received write request having the determined attribute not satisfying the condition.11-29-2012
20120303888DESTAGING OF WRITE AHEAD DATA SET TRACKS - Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X).11-29-2012
20120303895 | HANDLING HIGH PRIORITY REQUESTS IN A SEQUENTIAL ACCESS STORAGE DEVICE HAVING A NON-VOLATILE STORAGE CACHE - Provided are a computer program product, system, and method for handling high priority requests in a sequential access storage device. Received modified tracks for write requests are cached in a non-volatile storage device integrated with the sequential access storage device. A destage request is added to a request queue for a received write request having modified tracks for the sequential access storage medium cached in the non-volatile storage device. A read request indicating a priority is received. A determination is made of a priority of the read request as having a first priority or a second priority. The read request is added to the request queue in response to determining that the determined priority is the first priority. The read request is processed at a higher priority than the read and destage requests in the request queue in response to determining that the determined priority is the second priority. | 11-29-2012
20120303898MANAGING UNMODIFIED TRACKS MAINTAINED IN BOTH A FIRST CACHE AND A SECOND CACHE - Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.11-29-2012
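
One way to read the two-list bookkeeping above, purely as an illustration rather than the claimed logic: the inclusive list names unmodified tracks resident in both caches, the exclusive list names tracks held only in the second cache, and a track demoted from the first cache is copied into the second cache only when neither list already covers it.

```python
class TwoLevelCacheDirectory:
    def __init__(self):
        self.inclusive = set()   # unmodified tracks in both the first and second cache
        self.exclusive = set()   # unmodified tracks only in the second cache

    def on_demote_from_first(self, track_id):
        """Decide whether a track demoted from the first cache should be promoted."""
        if track_id in self.inclusive:
            # Already in the second cache; it now lives only there.
            self.inclusive.discard(track_id)
            self.exclusive.add(track_id)
            return False
        if track_id in self.exclusive:
            return False           # the second cache already holds it
        self.exclusive.add(track_id)
        return True                # promote: copy the track into the second cache

d = TwoLevelCacheDirectory()
d.inclusive.add("t1")
print(d.on_demote_from_first("t1"))   # False: already cached in the second level
print(d.on_demote_from_first("t2"))   # True: promote into the second cache
```
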
20120303899MANAGING TRACK DISCARD REQUESTS TO INCLUDE IN DISCARD TRACK MESSAGES - Provided are a computer program product, system, and method for managing track discard requests to include in discard track messages. A backup copy of a track in a cache is maintained in the cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. In response to detecting that a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent indicating the tracks indicated in the queued predetermined number of track discard requests to the cache backup device instructing the cache backup device to discard the tracks indicated in the discard multiple tracks message. In response to determining a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode.11-29-2012
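
The batching behaviour can be illustrated with a small queue: discard requests accumulate, a single multi-track message is sent once a threshold is reached, and after a run of idle periods the queue falls back to single-track messages. The threshold values and message format below are assumptions.

```python
from collections import deque

class DiscardBatcher:
    """Queue track discard requests and send them as multi-track messages."""

    def __init__(self, send, batch_size=8, idle_limit=3):
        self.send = send              # callable that delivers a message to the backup device
        self.queue = deque()
        self.batch_size = batch_size
        self.idle_limit = idle_limit
        self.idle_periods = 0
        self.multi_track_mode = True

    def add_discard(self, track_id):
        self.idle_periods = 0
        if not self.multi_track_mode:
            self.send({"type": "discard_single_track", "track": track_id})
            return
        self.queue.append(track_id)
        if len(self.queue) >= self.batch_size:
            batch = [self.queue.popleft() for _ in range(self.batch_size)]
            self.send({"type": "discard_multiple_tracks", "tracks": batch})

    def tick(self):
        """Call once per period; counts inactivity while in multi-track mode."""
        if not self.multi_track_mode:
            return
        self.idle_periods += 1
        if self.idle_periods >= self.idle_limit:
            self.multi_track_mode = False        # fall back to single-track mode
            while self.queue:                    # flush anything still queued
                self.send({"type": "discard_single_track", "track": self.queue.popleft()})

b = DiscardBatcher(print, batch_size=3)
for t in range(7):
    b.add_discard(t)               # sends two 3-track messages; track 6 stays queued
b.tick(); b.tick(); b.tick()       # three idle periods: switch modes and flush track 6
b.add_discard(99)                  # now sent immediately as a single-track message
```
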
20120303904MANAGING UNMODIFIED TRACKS MAINTAINED IN BOTH A FIRST CACHE AND A SECOND CACHE - Provided are a computer program product, system, and method for managing unmodified tracks maintained in both a first cache and a second cache. The first cache has unmodified tracks in the storage subject to Input/Output (I/O) requests. Unmodified tracks are demoted from the first cache to a second cache. An inclusive list indicates unmodified tracks maintained in both the first cache and a second cache. An exclusive list indicates unmodified tracks maintained in the second cache but not the first cache. The inclusive list and the exclusive list are used to determine whether to promote to the second cache an unmodified track demoted from the first cache.11-29-2012
20120324171Apparatus and Method to Copy Data - An apparatus and method for copying data are disclosed. A data track to be replicated using a peer-to-peer remote copy (PPRC) operation is identified. The data track is encoded in a non-transitory computer readable medium disposed in a first data storage system. At a first time, a determination of whether the data track is stored in a data cache is made. At a second time, the data track is replicated to a non-transitory computer readable medium disposed in a second data storage system. The second time is later than the first time. If the data track was stored in the data cache at the first time, a cache manager is instructed to not demote the data track from the data cache. If the data track was not stored in the data cache at the first time, the cache manager is instructed that the data track may be demoted.12-20-2012
20120324173EFFICIENT DISCARD SCANS - Exemplary method, system, and computer program product embodiments for performing a discard scan operation are provided. In one embodiment, by way of example only, a plurality of tracks is examined for meeting criteria for a discard scan. In lieu of waiting for a completion of a track access operation, at least one of the plurality of tracks is marked for demotion. An additional discard scan may be subsequently performed for tracks not previously demoted. The discard and additional discard scans may proceed in two phases. Additional system and computer program product embodiments are disclosed and provide related advantages.12-20-2012
20130007372MANAGEMENT OF WRITE CACHE USING STRIDE OBJECTS - Method, system, and computer program product embodiments for, in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit, identifying working data on a stride basis by a processor device are provided. A multi-update bit is established for each of a plurality of strides in a modified cache, wherein the multi-update bit is adapted to indicate a corresponding stride is part of at least one track in a working set that refers to a group of frequently updated tracks. The plurality of strides are scanned based on a schedule to identify tracks for destaging. An operation to destage is performed on a selected track identified during the scanning, if the multi-update bit of a selected stride on the selected track is set to indicate the selected track is part of the working set and if the NVS is about 90% full or greater.01-03-2013
20130024613PREFETCHING DATA TRACKS AND PARITY DATA TO USE FOR DESTAGING UPDATED TRACKS - Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks.01-24-2013
20130024624PREFETCHING TRACKS USING MULTIPLE CACHES - Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request.01-24-2013
20130024625PREFETCHING TRACKS USING MULTIPLE CACHES - Provided are a computer program product, sequential access storage device, and method for managing data in a sequential access storage device receiving read requests and write requests from a system with respect to tracks stored in a sequential access storage medium. A prefetch request indicates prefetch tracks in the sequential access storage medium to read from the sequential access storage medium. The accessed prefetch tracks are cached in a non-volatile storage device integrated with the sequential access storage device, wherein the non-volatile storage device is a faster access device than the sequential access storage medium. A read request is received for the prefetch tracks following the caching of the prefetch tracks, wherein the prefetch request is designated to be processed at a lower priority than the read request with respect to the sequential access storage medium. The prefetch tracks are returned from the non-volatile storage device to the read request.01-24-2013
20130024626PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP - A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage.01-24-2013
20130024627PREFETCHING DATA TRACKS AND PARITY DATA TO USE FOR DESTAGING UPDATED TRACKS - Provided are a computer program product, system, and method for prefetching data tracks and parity data to use for destaging updated tracks. A write request is received including at least one updated track to the group of tracks. The at least one updated track is stored in a first cache device. A prefetch request is sent to the at least one sequential access storage device to prefetch tracks in the group of tracks to a second cache device. A read request is generated to read the prefetch tracks following the sending of the prefetch request. The read prefetch tracks returned to the read request from the second cache device are stored in the first cache device. New parity data is calculated from the at least one updated track and the read prefetch tracks.01-24-2013
20130024628EFFICIENT TRACK DESTAGE IN SECONDARY STORAGE - Exemplary method, system, and computer program product embodiments for efficient track destage in secondary storage are provided. In one embodiment, by way of example only, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage. Additional system and computer program product embodiments are disclosed and provide related advantages.01-24-2013
20130031295ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records.01-31-2013
20130031297ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records.01-31-2013
20130080704MANAGEMENT OF POINT-IN-TIME COPY RELATIONSHIP FOR EXTENT SPACE EFFICIENT VOLUMES - A storage controller receives a request to establish a point-in-time copy operation by placing a space efficient source volume in a point-in-time copy relationship with a space efficient target volume, wherein subsequent to being established the point-in-time copy operation is configurable to consistently copy the space efficient source volume to the space efficient target volume at a point in time. A determination is made as to whether any track of an extent is staging into a cache from the space efficient target volume or destaging from the cache to the space efficient target volume. In response to a determination that at least one track of the extent is staging into the cache from the space efficient target volume or destaging from the cache to the space efficient target volume, release of the extent from the space efficient target volume is avoided.03-28-2013
20130111106PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE05-02-2013
20130111131DYNAMICALLY ADJUSTED THRESHOLD FOR POPULATION OF SECONDARY CACHE05-02-2013
20130111133DYNAMICALLY ADJUSTED THRESHOLD FOR POPULATION OF SECONDARY CACHE05-02-2013
20130111134MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS05-02-2013
20130111146SELECTIVE POPULATION OF SECONDARY CACHE EMPLOYING HEAT METRICS05-02-2013
20130111160SELECTIVE SPACE RECLAMATION OF DATA STORAGE MEMORY EMPLOYING HEAT AND RELOCATION METRICS05-02-2013
20130124803PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP - A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship. The point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a first cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request to a second cache device. A read request is generated to read the source track in the source storage following the sending of the prefetch request. The read source track is copied to a corresponding target track in the target storage.05-16-2013
20130132664PERIODIC DESTAGES FROM INSIDE AND OUTSIDE DIAMETERS OF DISKS TO IMPROVE READ RESPONSE TIMES - A storage controller that includes a cache, receives a command from a host, wherein a set of criteria corresponding to read response times for executing the command have to be satisfied. A destage application that destages tracks based at least on recency of usage and spatial location of the tracks is executed, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks. Tracks are destaged from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command are satisfied.05-23-2013
20130132667ADJUSTMENT OF DESTAGE RATE BASED ON READ AND WRITE RESPONSE TIME REQUIREMENTS - A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command have to be satisfied. The storage controller determines ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type. Destage rate corresponding to the ranks of the first type are adjusted to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command are satisfied.05-23-2013
20130145100MANAGING METADATA FOR DATA IN A COPY RELATIONSHIP - Provided is a method for managing metadata for data in a copy relationship copied from a source storage to a target storage. Information is maintained on a copy relationship of source data in the source storage and target data in the target storage. The source data is copied from the source storage to the cache to copy to target data in the target storage indicated in the copy relationship. Target metadata is generated for the target data comprising the source data copied to the cache. An access request to requested target data comprising the target data in the cache is processed and access is provided to the requested target data in the cache. The target metadata for the requested target data in the target storage is discarded in response to determining that the requested target data in the cache has not been destaged to the target storage.06-06-2013
20130166837DESTAGING OF WRITE AHEAD DATA SET TRACKS - Exemplary methods, computer systems, and computer program products for efficient destaging of a write ahead data set (WADS) track in a volume of a computing storage environment are provided. In one embodiment, the computer environment is configured for preventing destage of a plurality of tracks in cache selected for writing to a storage device. For a track N in a stride Z of the selected plurality of tracks, if the track N is a first WADS track in the stride Z, clearing at least one temporal bit for each track in the cache for the stride Z minus 2 (Z−2), and if the track N is a sequential track, clearing the at least one temporal bit for the track N minus a variable X (N−X).06-27-2013
20130166844STORAGE IN TIERED ENVIRONMENT FOR COLDER DATA SEGMENTS - Exemplary embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap.06-27-2013
20130173878SOURCE-TARGET RELATIONS MAPPING - A data preservation function is provided which, in one embodiment, includes indicating, by a map, usage of a particular map extent range by a relationship between a source extent range of storage locations on a source storage device containing data to be preserved in the source extent range, and a target extent range mapped to the particular map extent range. In another aspect, in response to receipt of a data preservation command, a data preservation operation is performed including determining whether a map indicates availability of a map extent range mapped to the identified target extent range. Upon determining that a particular map indicates availability of a map extent range mapped to the identified target extent range, a relationship between the identified source extent range and the identified target extent range is established. Other features and aspects may be realized, depending upon the particular application.07-04-2013
20130185476DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING AN OCCUPANCY OF VALID TRACKS IN STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides.07-18-2013
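A toy Python version of the consolidation idea above is shown next: occupancy counts pick a target stride and at least two partially valid source strides, and the sources' valid tracks are packed into the target. The selection heuristic (fullest partial stride becomes the target) and all names are assumptions made for illustration only.

```python
class SecondCache:
    """Each stride holds up to stride_size valid tracks; the occupancy count
    is simply how many valid tracks a stride currently holds."""
    def __init__(self, num_strides, stride_size):
        self.stride_size = stride_size
        self.strides = [set() for _ in range(num_strides)]

    def occupancy(self, i):
        return len(self.strides[i])

    def add_demoted_stride(self, tracks):
        """Place a stride of tracks demoted from the first cache into an empty stride."""
        empty = next(i for i, s in enumerate(self.strides) if not s)
        self.strides[empty] = set(tracks)
        return empty

    def invalidate(self, track):
        for s in self.strides:
            s.discard(track)

    def consolidate(self):
        """Pick a target stride and at least two partially valid source strides
        (by occupancy count) and pack the sources' valid tracks into the target."""
        partial = sorted(
            (i for i in range(len(self.strides))
             if 0 < self.occupancy(i) < self.stride_size),
            key=self.occupancy, reverse=True)
        if len(partial) < 3:
            return None
        target, sources = partial[0], partial[1:3]
        for src in sources:
            while self.strides[src] and self.occupancy(target) < self.stride_size:
                self.strides[target].add(self.strides[src].pop())
        return target

if __name__ == "__main__":
    cache = SecondCache(num_strides=4, stride_size=4)
    cache.add_demoted_stride(["a", "b", "c", "d"])
    cache.add_demoted_stride(["e", "f", "g", "h"])
    cache.add_demoted_stride(["i", "j", "k", "l"])
    for t in ("b", "c", "f", "g", "j", "k", "l"):
        cache.invalidate(t)              # leave three partially valid strides
    target = cache.consolidate()
    print(target, [sorted(s) for s in cache.strides])
```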
20130185478POPULATING A FIRST STRIDE OF TRACKS FROM A FIRST CACHE TO WRITE TO A SECOND STRIDE IN A SECOND CACHE - Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted.07-18-2013
20130185489DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride.07-18-2013
20130185493MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.07-18-2013
20130185494POPULATING A FIRST STRIDE OF TRACKS FROM A FIRST CACHE TO WRITE TO A SECOND STRIDE IN A SECOND CACHE - Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted.07-18-2013
20130185495DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride.07-18-2013
20130185497MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.07-18-2013
20130185501CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA - Provided are a computer program product, system, and method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved into a cache. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted.07-18-2013
20130185502DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE - A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache.07-18-2013
20130185504DEMOTING PARTIAL TRACKS FROM A FIRST CACHE TO A SECOND CACHE - A determination is made of a track to demote from the first cache to the second cache, wherein the track in the first cache corresponds to a track in the storage system and is comprised of a plurality of sectors. In response to determining that the second cache includes a stale version of the track being demoted from the first cache, a determination is made as to whether the stale version of the track includes track sectors not included in the track being demoted from the first cache. The sectors from the track demoted from the first cache are combined with sectors from the stale version of the track not included in the track being demoted from the first cache into a new version of the track. The new version of the track is written to the second cache.07-18-2013
20130185507WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.07-18-2013
20130185510CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA - Provided is a method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved into a cache. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted.07-18-2013
20130185512MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS - For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache. Unrequested data of the whole data segment is split and positioned at a Least Recently Used (LRU) portion of the demotion queue of the higher level of cache. The unrequested data is pinned in place until a write of the whole data segment to the lower level of cache completes.07-18-2013
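The MRU/LRU split of a promoted whole segment described above can be sketched with an ordered demotion queue in which unrequested data enters at the LRU end, pinned until the lower-level write completes, while requested data enters at the MRU end; the class and method names are illustrative assumptions.

```python
from collections import OrderedDict

class DemotionQueue:
    """Higher-level-cache demotion queue: leftmost entries are demoted first (LRU end)."""
    def __init__(self):
        self.queue = OrderedDict()   # ordered set of cached items
        self.pinned = set()

    def insert_whole_segment(self, requested, unrequested):
        for item in unrequested:
            self.queue[item] = None
            self.queue.move_to_end(item, last=False)   # LRU end
            self.pinned.add(item)                      # pinned until written below
        for item in requested:
            self.queue[item] = None
            self.queue.move_to_end(item, last=True)    # MRU end

    def lower_cache_write_complete(self, items):
        for item in items:
            self.pinned.discard(item)

    def demote_one(self):
        """Demote the least recently used unpinned entry, if any."""
        for item in list(self.queue):                  # iterates LRU -> MRU
            if item not in self.pinned:
                del self.queue[item]
                return item
        return None

if __name__ == "__main__":
    q = DemotionQueue()
    q.insert_whole_segment(requested=["seg1:requested"], unrequested=["seg1:unrequested"])
    q.lower_cache_write_complete(["seg1:unrequested"])   # whole segment now written below
    print(q.demote_one())   # seg1:unrequested -- sits at the LRU end, goes first
    print(q.demote_one())   # seg1:requested   -- survives longer at the MRU end
```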
20130185513CACHE MANAGEMENT OF TRACK REMOVAL IN A CACHE FOR STORAGE - In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application.07-18-2013
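The lock-release pattern in the entry above (drop the list lock while an identified track is processed for removal, then reacquire it and resume from a saved cursor) might look roughly like the Python sketch below; the list representation, the cursor handling, and all names are assumptions.

```python
import threading

class CacheList:
    """LRU-style list protected by a list lock. During a scan the lock is
    released while an identified track is processed for removal, and the
    scan resumes from the saved cursor position afterwards."""
    def __init__(self, tracks):
        self.tracks = list(tracks)
        self.list_lock = threading.Lock()

    def _remove_from_cache(self, track):
        # Placeholder for the (potentially slow) demotion work done
        # while the list lock is NOT held.
        print(f"demoting {track} without holding the list lock")

    def scan_and_remove(self, should_remove):
        cursor = 0
        self.list_lock.acquire()
        try:
            while cursor < len(self.tracks):
                track = self.tracks[cursor]
                if should_remove(track):
                    # Remove the entry, remember the position, drop the lock.
                    self.tracks.pop(cursor)
                    self.list_lock.release()
                    self._remove_from_cache(track)
                    self.list_lock.acquire()   # reacquire and resume at the cursor
                else:
                    cursor += 1
        finally:
            self.list_lock.release()

if __name__ == "__main__":
    lst = CacheList([f"track{i}" for i in range(6)])
    lst.scan_and_remove(lambda t: t.endswith(("1", "3")))
    print(lst.tracks)
```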
20130185514CACHE MANAGEMENT OF TRACK REMOVAL IN A CACHE FOR STORAGE - In one embodiment, a cache manager releases a list lock during a scan when a track has been identified as a track for cache removal processing such as demoting the track, for example. By releasing the list lock, other processors have access to the list while the identified track is processed for cache removal. In one aspect, the position of the previous entry in the list may be stored in a cursor or pointer so that the pointer value points to the prior entry in the list. Once the cache removal processing of the identified track is completed, the list lock may be reacquired and the scan may be resumed at the list entry identified by the pointer. Other features and aspects may be realized, depending upon the particular application.07-18-2013
20130191596ADJUSTMENT OF DESTAGE RATE BASED ON READ AND WRITE RESPONSE TIME REQUIREMENTS - A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command have to be satisfied. The storage controller determines ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type. Destage rate corresponding to the ranks of the first type are adjusted to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command are satisfied.07-25-2013
20130198461MANAGING TRACK DISCARD REQUESTS TO INCLUDE IN DISCARD TRACK MESSAGES - Provided is a method for managing track discard requests. A backup copy of a track in a cache is maintained in a cache backup device. A track discard request is generated to discard tracks in the cache backup device removed from the cache. Track discard requests are queued in a discard track queue. If a predetermined number of track discard requests are queued in the discard track queue while processing in a discard multi-track mode, one discard multiple tracks message is sent to the cache backup device indicating the tracks indicated in the queued predetermined number of track discard requests to instruct the cache backup device to discard the tracks indicated in the discard multiple tracks message. If a predetermined number of periods of inactivity while processing in the discard multi-track mode, processing the track discard requests is switched to a discard single track mode.08-01-2013
20130198751INCREASED DESTAGING EFFICIENCY - For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, destaging tasks are calculated according to one of a standard time interval and a variable recomputed destaging task interval. The destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks is smoothed according to the calculating.08-01-2013
20130198752INCREASED DESTAGING EFFICIENCY - For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, destaging tasks are calculated according to one of a standard time interval and a variable recomputed destaging task interval. The destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks is smoothed according to the calculating.08-01-2013
20130205077PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE - For efficient track destage in secondary storage, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage.08-08-2013
20130205084STRIDE BASED FREE SPACE MANAGEMENT ON COMPRESSED VOLUMES - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks, and a determination is made as to whether all of the one or more tracks can be stored in one selected stride of the plurality of strides. In response to determining that all of the one or more tracks can be stored in the one selected stride, the one or more tracks are written in the one selected stride of the plurality of strides.08-08-2013
20130205088MULTI-STAGE CACHE DIRECTORY AND VARIABLE CACHE-LINE SIZE FOR TIERED STORAGE ARCHITECTURES - A method in accordance with the invention includes providing first, second, and third storage tiers, wherein the first storage tier acts as a cache for the second storage tier, and the second storage tier acts as a cache for the third storage tier. The first storage tier uses a first cache line size corresponding to an extent size of the second storage tier. The second storage tier uses a second cache line size corresponding to an extent size of the third storage tier. The second cache line size is significantly larger than the first cache line size. The method further maintains, in the first storage tier, a first cache directory indicating which extents from the second storage tier are cached in the first storage tier, and a second cache directory indicating which extents from the third storage tier are cached in the second storage tier.08-08-2013
20130205093MANAGEMENT OF POINT-IN-TIME COPY RELATIONSHIP FOR EXTENT SPACE EFFICIENT VOLUMES - A storage controller receives a request to establish a point-in-time copy operation by placing a space efficient source volume in a point-in-time copy relationship with a space efficient target volume, wherein subsequent to being established the point-in-time copy operation is configurable to consistently copy the space efficient source volume to the space efficient target volume at a point in time. A determination is made as to whether any track of an extent is staging into a cache from the space efficient target volume or destaging from the cache to the space efficient target volume. In response to a determination that at least one track of the extent is staging into the cache from the space efficient target volume or destaging from the cache to the space efficient target volume, release of the extent from the space efficient target volume is avoided.08-08-2013
20130205094EFFICIENT TRACK DESTAGE IN SECONDARY STORAGE - For efficient track destage in secondary storage, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, the temporal bits and sequential bits are transferred from the primary storage to the secondary storage. The temporal bits are allowed to age on the secondary storage.08-08-2013
20130205109DATA ARCHIVING USING DATA COMPRESSION OF A FLASH COPY - Embodiments of the disclosure relate to archiving data in a storage system. An exemplary embodiment comprises making a flash copy of data in a source volume, compressing data in the flash copy wherein each track of data is compressed into a set of data pages, and storing the compressed data pages in a target volume. Data extents for the target volume may be allocated from a pool of compressed data extents. After each stride worth of data is compressed and stored in the target volume, data may be destaged to avoid destage penalties. Data from the target volume may be decompressed from a flash copy of the target volume in a reverse process to restore each data track, when the archived data is needed. Data may be compressed and decompressed using a Lempel-Ziv-Welch process.08-08-2013
20130219124EFFICIENT DISCARD SCANS - A plurality of tracks is examined for meeting criteria for a discard scan. In lieu of waiting for a completion of a track access operation, at least one of the plurality of tracks is marked for demotion. An additional discard scan may be subsequently performed for tracks not previously demoted. The discard and additional discard scans may proceed in two phases.08-22-2013
20130232294ADAPTIVE CACHE PROMOTIONS IN A TWO LEVEL CACHING SYSTEM - Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for the page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (MRU) end of the LRU list in the first cache.09-05-2013
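The promotion rule in this entry (a page whose second-cache reference count is greater than zero goes to the LRU end of the first cache's LRU list, otherwise to the MRU end) is simple enough to show directly; the deque-based list and the names below are assumptions.

```python
from collections import deque

class FirstCacheLRUList:
    """LRU list for the first cache: leftmost = LRU end, rightmost = MRU end."""
    def __init__(self):
        self.lru = deque()

    def promote_from_second_cache(self, page, reference_count):
        if reference_count > 0:
            self.lru.appendleft(page)   # reference count > 0: add at the LRU end
        else:
            self.lru.append(page)       # reference count <= 0: add at the MRU end

if __name__ == "__main__":
    cache = FirstCacheLRUList()
    cache.promote_from_second_cache("page-a", reference_count=3)
    cache.promote_from_second_cache("page-b", reference_count=0)
    print(list(cache.lru))   # ['page-a', 'page-b'] -- page-a will be demoted first
```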
20130232295ADAPTIVE CACHE PROMOTIONS IN A TWO LEVEL CACHING SYSTEM - Provided are a computer program product, system, and method for managing data in a first cache and a second cache. A reference count is maintained in the second cache for the page when the page is stored in the second cache. It is determined that the page is to be promoted from the second cache to the first cache. In response to determining that the reference count is greater than zero, the page is added to a Least Recently Used (LRU) end of an LRU list in the first cache. In response to determining that the reference count is less than or equal to zero, the page is added to a Most Recently Used (MRU) end of the LRU list in the first cache.09-05-2013
20130235709PERIODIC DESTAGES FROM INSIDE AND OUTSIDE DIAMETERS OF DISKS TO IMPROVE READ RESPONSE TIMES - A storage controller that includes a cache, receives a command from a host, wherein a set of criteria corresponding to read response times for executing the command have to be satisfied. A destage application that destages tracks based at least on recency of usage and spatial location of the tracks is executed, wherein a spatial ordering of the tracks is maintained in a data structure, and the destage application traverses the spatial ordering of the tracks. Tracks are destaged from at least inside or outside diameters of disks at periodic intervals, while traversing the spatial ordering of the tracks, wherein the set of criteria corresponding to the read response times for executing the command are satisfied.09-12-2013
20130246691ADAPTIVE PRESTAGING IN A STORAGE CONTROLLER - In one aspect of the present description, at least one of the value of a prestage trigger and the value of the prestage amount may be modified as a function of the drive speed of the storage drive from which the units of read data are prestaged into a cache memory. Thus, cache prestaging operations in accordance with another aspect of the present description may take into account storage devices of varying speeds and bandwidths for purposes of modifying a prestage trigger and the prestage amount. Other features and aspects may be realized, depending upon the particular application.09-19-2013
20130304968DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING AN OCCUPANCY OF VALID TRACKS IN STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information is maintained on strides configured in a second cache and occupancy counts for the strides indicating an extent to which the strides are populated with valid tracks and invalid tracks. A determination is made of tracks to demote from a first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache having an occupancy count indicating the stride is empty. A determination is made of a target stride in the second cache based on the occupancy counts of the strides in the second cache. A determination is made of at least two source strides in the second cache having valid tracks based on the occupancy counts of the strides in the second cache. The target stride is populated with the valid tracks from the source strides.11-14-2013
20130332645SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY - A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache.12-12-2013
20130332646PERFORMING ASYNCHRONOUS DISCARD SCANS WITH STAGING AND DESTAGING OPERATIONS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations with respect to the area of the cache.12-12-2013
20140047187ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS - A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.02-13-2014
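One plausible shape for the TCB-allocation decision above is sketched below: the number of TCBs granted to a new discard scan shrinks as more TCBs are already committed to in-progress scans. The pool size, per-scan cap, and fair-share policy are invented for the example and are not taken from the patent.

```python
class DiscardScanScheduler:
    """Allocates Task Control Blocks (TCBs) to new discard scans based on
    how many TCBs the in-progress scans already hold."""
    def __init__(self, total_tcbs=16, max_per_scan=10):
        self.total_tcbs = total_tcbs
        self.max_per_scan = max_per_scan
        self.allocated = {}   # scan id -> TCB count

    def start_discard_scan(self, scan_id):
        in_use = sum(self.allocated.values())
        remaining = max(self.total_tcbs - in_use, 0)
        # Assumed policy: give the new scan a fair share of what is left,
        # capped per scan and never less than one TCB.
        share = remaining // (len(self.allocated) + 1)
        share = max(1, min(self.max_per_scan, share))
        self.allocated[scan_id] = share
        return share

    def finish_discard_scan(self, scan_id):
        self.allocated.pop(scan_id, None)

if __name__ == "__main__":
    scheduler = DiscardScanScheduler()
    print(scheduler.start_discard_scan("scan-1"))   # 10: nothing else is running
    print(scheduler.start_discard_scan("scan-2"))   # 3: scan-1 already holds 10 of 16
```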
20140068163PERFORMING ASYNCHRONOUS DISCARD SCANS WITH STAGING AND DESTAGING OPERATIONS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache.03-06-2014
20140068189ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS - A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.03-06-2014
20140068191SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY - A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache.03-06-2014
20140075110REPLICATING TRACKS FROM A FIRST STORAGE SITE TO A SECOND AND THIRD STORAGE SITES - Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage.03-13-2014
20140075114REPLICATING TRACKS FROM A FIRST STORAGE SITE TO A SECOND AND THIRD STORAGE SITES - Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage.03-13-2014
20140082256RECOVERY FROM CACHE AND NVS OUT OF SYNC - For cache/data management in a computing storage environment, incoming data segments into a Non Volatile Storage (NVS) device of the computing storage environment are validated against a bitmap to determine if the incoming data segments are currently in use. Those of the incoming data segments determined to be currently in use are designated to the computing storage environment to protect data integrity.03-20-2014
20140082277EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations. A total number of I/O operations to be awoken at each of an iterated instance of the waking is limited.03-20-2014
20140082283EFFICIENT CACHE VOLUME SIT SCANS - A processor, operable in a computing storage environment, allocates portions of a Scatter Index Table (SIT) disproportionately between a larger portion dedicated for metadata tracks, and a smaller portion dedicated for user data tracks, and processes a storage operation through the disproportionately allocated portions of the SIT using an allocated number of Task Control Blocks (TCBs).03-20-2014
20140082294MANAGEMENT OF DESTAGE TASKS WITH LARGE NUMBER OF RANKS - A processor, operable in a computing storage environment, for each rank in a storage management device in the computing storage environment, allocates a lower maximum count, and a higher maximum count, of Task Control Blocks (TCBs) to be implemented for performing a storage operation, and performs the storage operation using up to the lower maximum count of TCBs, yet only allows those TCBs above the lower maximum count to be allocated for performing the storage operation satisfying at least one criterion.03-20-2014
20140082303MANAGEMENT OF DESTAGE TASKS WITH LARGE NUMBER OF RANKS - A processor, operable in a computing storage environment, for each rank in a storage management device in the computing storage environment, allocates a lower maximum count, and a higher maximum count, of Task Control Blocks (TCBs) to be implemented for performing a storage operation, and performs the storage operation using up to the lower maximum count of TCBs, yet only allows those TCBs above the lower maximum count to be allocated for performing the storage operation satisfying at least one criterion.03-20-2014
20140082631PREFERENTIAL CPU UTILIZATION FOR TASKS - A set of like tasks to be performed is organized into a first group. Upon a determined imbalance between dispatch queue depths greater than a predetermined threshold, the set of like tasks is reassigned to an additional group.03-20-2014
20140095762FUZZY COUNTERS FOR NVS TO REDUCE LOCK CONTENTION - A system for data management in a computing storage environment includes a processor device, operable in the computing storage environment, that divides a plurality of counters tracking write and discard storage operations through Non Volatile Storage (NVS) space into first, accurate, and second, fuzzy, groups where the first, accurate, group is one of incremented and decremented per each write and discard storage operation, while the second, fuzzy, group is one of incremented and decremented on a more infrequent basis as compared to the first, accurate group.04-03-2014
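The entry above splits counters into an accurate group (updated on every operation) and a fuzzy group (updated infrequently) to cut lock contention. The sketch below shows one generic way a fuzzy counter can defer updates to a lock-protected total; the per-thread delta design, the flush threshold, and the names are assumptions about the general technique rather than the claimed arrangement.

```python
import threading

class FuzzyCounter:
    """A lock-protected total that is only updated occasionally; each thread
    accumulates a local delta and folds it in once it grows large enough."""
    def __init__(self, flush_every=4):
        self.total = 0
        self.lock = threading.Lock()
        self.flush_every = flush_every
        self.local = threading.local()

    def add(self, amount):
        delta = getattr(self.local, "delta", 0) + amount
        if abs(delta) >= self.flush_every:
            with self.lock:              # contended far less than once per operation
                self.total += delta
            delta = 0
        self.local.delta = delta

    def approximate_value(self):
        return self.total                # may lag by the unflushed per-thread deltas

if __name__ == "__main__":
    counter = FuzzyCounter()
    for _ in range(10):
        counter.add(1)                   # e.g. NVS space consumed by write operations
    print(counter.approximate_value())   # 8: two flushes of 4, two increments still local
```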
20140095763NVS THRESHOLDING FOR EFFICIENT DATA MANAGEMENT - For data management by a processor device in a computing storage environment, a threshold for an amount of Non Volatile Storage (NVS) space to be consumed by any particular logically contiguous storage space in the computing storage environment is established based on at least one of a Redundant Array of Independent Disks (RAID) type, a number of point-in-time copy source data segments in the logically contiguous storage space, and a storage classification.04-03-2014
20140095787NVS THRESHOLDING FOR EFFICIENT DATA MANAGEMENT - For data management by a processor device in a computing storage environment, a threshold for an amount of Non Volatile Storage (NVS) space to be consumed by any particular logically contiguous storage space in the computing storage environment is established based on at least one of a Redundant Array of Independent Disks (RAID) type, a number of point-in-time copy source data segments in the logically contiguous storage space, and a storage classification.04-03-2014
20140095811FUZZY COUNTERS FOR NVS TO REDUCE LOCK CONTENTION - A system for data management in a computing storage environment includes a processor device, operable in the computing storage environment, that divides a plurality of counters tracking write and discard storage operations through Non Volatile Storage (NVS) space into first, accurate, and second, fuzzy, groups where the first, accurate, group is one of incremented and decremented per each write and discard storage operation, while the second, fuzzy, group is one of incremented and decremented on a more infrequent basis as compared to the first, accurate group.04-03-2014
20140122808PREFETCHING SOURCE TRACKS FOR DESTAGING UPDATED TRACKS IN A COPY RELATIONSHIP - A point-in-time copy relationship associates tracks in a source storage with tracks in a target storage. The target storage stores the tracks in the source storage as of a point-in-time. A point-in-time copy relationship associates tracks in the source storage with tracks in the target storage, wherein the target storage stores the tracks in the source storage as of a point-in-time. A write request is received including an updated source track for a point-in-time source track in the source storage in the point-in-time copy relationship, wherein the point-in-time source track was in the source storage at the point-in-time the copy relationship was established. The updated source track is stored in a cache device. A prefetch request is sent to the source storage to prefetch the point-in-time source track in the source storage subject to the write request before destaging the updated source track to the source storage.05-01-2014
20140136790SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE - A system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes incrementing the multi-bit counter on each respective storage track a predetermined amount each time the processor writes to a respective storage track. The method further includes decrementing each multi-bit counter each scan cycle, and destaging each storage track including a zero count.05-15-2014
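The per-track multi-bit counter described above (incremented on each write, decremented every scan cycle, with zero-count tracks destaged) can be prototyped in a few lines; the saturation limit and the names are assumptions.

```python
class WriteCacheCounters:
    """Each cached track carries a small saturating counter: writes increment it,
    every scan cycle decrements it, and tracks that reach zero are destaged."""
    MAX_COUNT = 3                        # assumed 2-bit counter saturating at 3

    def __init__(self):
        self.counters = {}               # track -> counter value

    def write(self, track):
        self.counters[track] = min(self.counters.get(track, 0) + 1, self.MAX_COUNT)

    def scan_cycle(self):
        """Decrement every counter and destage (drop) tracks that hit zero."""
        destaged = []
        for track in list(self.counters):
            self.counters[track] -= 1
            if self.counters[track] <= 0:
                destaged.append(track)
                del self.counters[track]
        return destaged

if __name__ == "__main__":
    cache = WriteCacheCounters()
    for _ in range(3):
        cache.write("busy-track")
    cache.write("quiet-track")
    print(cache.scan_cycle())   # ['quiet-track'] -- written once, decays first
    print(cache.scan_cycle())   # []
    print(cache.scan_cycle())   # ['busy-track']
```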
20140156936SYSTEMS AND METHODS FOR MANAGING DESTAGE CONFLICTS - Storage tracks are destaged from each rank that uses greater than a predetermined percentage of a predetermined amount of storage space, with respect to the current amount of storage space allocated to each rank, until the current amount of storage space used by each respective rank equals the predetermined percentage of the predetermined amount of storage space. Storage tracks are not destaged from each rank that uses less than or equal to the predetermined percentage of the predetermined amount of storage space.06-05-2014
20140156937SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Storage tracks are destaged from the write cache if the host(s) is/are idle and are not destaged from the write cache if one or more of the hosts is/are not idle. The storage tracks are refrained from being destaged from the write cache if the at least one host is not idle. Each rank is monitored for write operations from the at least one host, and a determination is made if the at least one host is idle with respect to each respective rank based on monitoring each rank for write operations from the at least one host such that the at least one host may be determined to be idle with respect to a first rank and not idle with respect to a second rank.06-05-2014
20140201448MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS - For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache. Requested data of the whole data segment is split and positioned at a Most Recently Used (MRU) portion of a demotion queue of the higher level of cache.07-17-2014
20140207995USE OF DIFFERING GRANULARITY HEAT MAPS FOR CACHING AND MIGRATION - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to utilize a Solid State Drive (SSD) portion of the tiered levels of storage, while sparsely hot ones of the groups of data segments are migrated to utilize the lower-speed cache.07-24-2014
20140207999PERFORMING STAGING OR DESTAGING BASED ON THE NUMBER OF WAITING DISCARD SCANS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed. The controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache, in response to determining that more than the threshold number of discard scans are waiting to be performed.07-24-2014
20140208017THINLY PROVISIONED FLASH CACHE WITH SHARED STORAGE POOL - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a Solid State Device (SSD) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments.07-24-2014
20140208018TIERED CACHING AND MIGRATION IN DIFFERING GRANULARITIES - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage, clumped hot ones of the groups of data segments are migrated to use the SSD portion while using the lower-speed cache for a remaining portion of the clumped hot ones, and sparsely hot ones of the groups of data segments are migrated to use the lower-speed cache while using a lower one of the tiered levels of storage for a remaining portion of the sparsely hot ones.07-24-2014
20140208020USE OF DIFFERING GRANULARITY HEAT MAPS FOR CACHING AND MIGRATION - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to utilize a Solid State Drive (SSD) portion of the tiered levels of storage, while sparsely hot ones of the groups of data segments are migrated to utilize the lower-speed cache.07-24-2014
20140208021THINLY PROVISIONED FLASH CACHE WITH SHARED STORAGE POOL - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a Solid State Device (SSD) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments.07-24-2014
20140208029USE OF FLASH CACHE TO IMPROVE TIERED MIGRATION PERFORMANCE - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the another level, and reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target.07-24-2014
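The migration path described above (serve as much of the segment as possible from the lower-speed cache, read only the remainder from the source tier) is illustrated by the short function below; the dictionary-backed tiers and all names are stand-ins invented for the example.

```python
def migrate_segment(segment_tracks, flash_cache, source_tier, target_tier):
    """Copy a segment to the target tier, preferring data already resident
    in the lower-speed (flash) cache over reads from the slower source tier."""
    copied_from_cache = set()
    for track in segment_tracks:
        if track in flash_cache:
            target_tier[track] = flash_cache[track]
            copied_from_cache.add(track)
    for track in segment_tracks:
        if track not in copied_from_cache:
            target_tier[track] = source_tier[track]   # remaining data from the source
    return copied_from_cache

if __name__ == "__main__":
    flash_cache = {"t1": "data1", "t3": "data3"}
    source_tier = {"t1": "data1", "t2": "data2", "t3": "data3", "t4": "data4"}
    target_tier = {}
    served = migrate_segment(["t1", "t2", "t3", "t4"], flash_cache, source_tier, target_tier)
    print(sorted(served), sorted(target_tier))   # ['t1', 't3'] ['t1', 't2', 't3', 't4']
```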
20140208032USE OF FLASH CACHE TO IMPROVE TIERED MIGRATION PERFORMANCE - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the another level, and reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target.07-24-2014
20140208036PERFORMING STAGING OR DESTAGING BASED ON THE NUMBER OF WAITING DISCARD SCANS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed. The controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache, in response to determining that more than the threshold number of discard scans are waiting to be performed.07-24-2014
20140304479GROUPING TRACKS FOR DESTAGING - Tracks are selected for destaging from a least recently used (LRU) list and the selected tracks are moved to a destaging wait list. The selected tracks are grouped and destaged from the destaging wait list.10-09-2014
20140351532MINIMIZING DESTAGING CONFLICTS - Destage grouping of tracks is restricted to a bottom portion of a least recently used (LRU) list without grouping the tracks at a most recently used end of the LRU list to avoid the destaging conflicts. The destage grouping of tracks is destaged from the bottom portion of the LRU list.11-27-2014
20140365718DEMOTING TRACKS FROM A FIRST CACHE TO A SECOND CACHE BY USING A STRIDE NUMBER ORDERING OF STRIDES IN THE SECOND CACHE TO CONSOLIDATE STRIDES IN THE SECOND CACHE - Information on strides configured in the second cache includes information indicating a number of valid tracks in the strides, wherein a stride has at least one of valid tracks and free tracks not including valid data. A determination is made of tracks to demote from the first cache. A first stride is formed including the determined tracks to demote. The tracks from the first stride are added to a second stride in the second cache that has no valid tracks. A target stride in the second cache is selected based on a stride most recently used to consolidate strides from at least two strides into one stride. Data from the valid tracks is copied from at least two source strides in the second cache to the target stride.12-11-2014
20150019810WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.01-15-2015
20150026409DEFERRED RE-MRU OPERATIONS TO REDUCE LOCK CONTENTION - Data operations, requiring a lock, are batched into a set of operations to be performed on a per-core basis. A global lock for the set of operations is periodically acquired, the set of operations is performed, and the global lock is freed so as to avoid excessive duty cycling of lock and unlock operations in the computing storage environment.01-22-2015
20150032957WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.01-29-2015
20150040135THRESHOLDING TASK CONTROL BLOCKS FOR STAGING AND DESTAGING - For thresholding task control blocks (TCBs) for staging and destaging, a first tier of TCBs is reserved for guaranteeing a minimum number of TCBs for staging and destaging for storage ranks. An additional number of requested TCBs is apportioned from a second tier of TCBs to each of the storage ranks based on a scaling factor that is calculated at predefined time intervals.02-05-2015
20150046649MANAGING CACHING OF EXTENTS OF TRACKS IN A FIRST CACHE, SECOND CACHE AND STORAGE - Provided are a computer program product, system, and method for managing caching of extents of tracks in a first cache, second cache and storage device. A determination is made of an eligible track in a first cache eligible for demotion to a second cache, wherein the tracks are stored in extents configured in a storage device, wherein each extent is comprised of a plurality of tracks. A determination is made of an extent including the eligible track and whether second cache caching for the determined extent is enabled or disabled. The eligible track is demoted from the first cache to the second cache in response to determining that the second cache caching for the determined extent is enabled. Selection is made not to demote the eligible track in response to determining that the second cache caching for the determined extent is disabled.02-12-2015
20150052529EFFICIENT TASK SCHEDULING USING A LOCKING MECHANISM - For efficient task scheduling using a locking mechanism, a new task is allowed to spin on the locking mechanism if a number of tasks spinning on the locking mechanism is less than a predetermined threshold for parallel operations requiring locks between the multiple threads.02-19-2015
20150058560WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.02-26-2015
20150058561INCREASED DESTAGING EFFICIENCY FOR SMOOTHING OF DESTAGE TASKS BASED ON SPEED OF DISK DRIVES - For increased destaging efficiency by smoothing destaging tasks to reduce long input/output (I/O) read operations in a computing environment, the ramp up of the destaging tasks is adjusted based on speed of disk drives when smoothing the destaging of storage tracks between a desired number of destaging tasks and a current number of destaging tasks by calculating destaging tasks according to one of a standard time interval and a variable recomputed destaging task interval.02-26-2015
20150095561PROMOTION OF PARTIAL DATA SEGMENTS IN FLASH CACHE - For efficient track destage in secondary storage, for temporal bits employed with sequential bits for controlling the timing for destaging the track in a primary storage, a preference of movement to a lower speed cache level is implemented based on at least one of an amount of holes and a data heat metric. If a first bit has at least one of a lower amount of holes and a hotter data heat metric, it is moved to the lower speed cache level ahead of a second bit that has at least one of a higher amount of holes and a cooler data heat. If the first bit has a hotter data heat and greater than a predetermined number of holes, the first bit is discarded.04-02-2015
20150121007ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS - A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress.04-30-2015
20150127904ASSIGNING DEVICE ADAPTORS TO USE TO COPY SOURCE EXTENTS TO TARGET EXTENTS IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for assigning device adaptors to use to copy source extents in source ranks to target extents in target ranks in a copy relation. A determination is made of an order of the target ranks in the copy relation. Target ranks in the copy relation are selected according to the determined order. For each selected target rank, indication is made in a device adaptor assignment data structure of a source device adaptor and target device adaptor of the device adaptors to use to copy the source rank to the selected target rank indicated in the copy relation, wherein indication is made for the selected target ranks according to the determined order. The source ranks are copied to the selected target ranks using the source and target device adaptors indicated in the device adaptor assignment data structure.05-07-2015
20150127913EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations.05-07-2015
20150134878NONVOLATILE STORAGE THRESHOLDING FOR ULTRA-SSD, SSD, AND HDD DRIVE INTERMIX - Embodiments for efficient thresholding of nonvolatile storage (NVS) for a plurality of types of storage rank groups by a processor. Target storage devices are determined in a pool of target storage devices as one of a hard disk drive (HDD) and a solid-state drive (SSD) device. Each target storage device is classified into an SSD rank group, a Nearline rank group, an Enterprise rank group, and an Ultra-SSD rank group in the pool of target storage devices. The Nearline rank group and the Enterprise rank group comprise an HDD rank group, and the Nearline rank group, the Enterprise rank group, and the SSD rank group comprise the Non-Ultra-SSD rank group. Thresholds are adjusted for preventing space allocation in the NVS for at least one of the classified target storage devices based on one of the presence and absence of identified types of the classified target storage devices.05-14-2015
20150134914DESTAGE GROUPING FOR SEQUENTIAL FAST WRITE TRACKS - An amount of sequential fast write (SFW) tracks is metered by providing an adjustable threshold for performing a destage scan that moves the SFW tracks from an SFW least recently used (LRU) list to a destaging wait list (DWL). Priorities are set for the destaging of the SFW tracks from the DWL.05-14-2015
20150227323USE OF FLASH CACHE TO IMPROVE TIERED MIGRATION PERFORMANCE - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, and at a time in which at least one data segment is to be migrated from one level to another level of the tiered levels of storage, a data migration mechanism is initiated by copying data resident in the lower-speed cache corresponding to the at least one data segment to be migrated to a target on the another level, reading remaining data, not previously copied from the lower-speed cache, from a source on the one level, and writing the remaining data to the target, and changing a logical address of the at least one data segment to point to the target.08-13-2015
20150227455MANAGEMENT OF POINT-IN-TIME COPY RELATIONSHIP FOR EXTENT SPACE EFFICIENT VOLUMES - A storage controller receives a request to establish a point-in-time copy operation by placing a space efficient source volume in a point-in-time copy relationship with a space efficient target volume, wherein subsequent to being established the point-in-time copy operation is configurable to consistently copy the space efficient source volume to the space efficient target volume at a point in time. A determination is made as to whether any track of an extent is staging into a cache from the space efficient target volume or destaging from the cache to the space efficient target volume. In response to a determination that at least one track of the extent is staging into the cache from the space efficient target volume or destaging from the cache to the space efficient target volume, release of the extent from the space efficient target volume is avoided.08-13-2015
20150227467TIERED CACHING AND MIGRATION IN DIFFERING GRANULARITIES - For data processing in a computing storage environment by a processor device, the environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that clumped uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage; uniformly hot groups of data segments are determined using a first, largest-granularity, heat map for a selected one of the group of the data segments; a second heat map, which is smaller than the first and has the largest granularity of the first heat map, is used to determine the clumped hot groups; and sparsely hot groups are determined when neither the first heat map nor the second heat map is hotter than the first and second predetermined thresholds, respectively.08-13-2015
20150227487EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated, and at least one complete data track is removed from a free list by a first I/O waiter.08-13-2015
20150242125EFFICIENT FREE-SPACE MANAGEMENT OF MULTI-TARGET PEER-TO-PEER REMOTE COPY (PPRC) MODIFIED SECTORS BITMAP IN BIND SEGMENTS - For efficient free-space management of multi-target peer-to-peer remote copy (PPRC) modified sectors bitmap in bind segments, a list of bind segments having free slots is maintained for each storage volume. Each one of the bind segments includes a bitmap of the free slots. Those of the bind segments having more than a predetermined number of the free slots are freed.08-27-2015
20150242126EFFICIENT CACHE MANAGEMENT OF MULTI-TARGET PEER-TO-PEER REMOTE COPY (PPRC) MODIFIED SECTORS BITMAP - For efficient cache management of multi-target peer-to-peer remote copy (PPRC) modified sectors bitmap in a computing storage environment a multiplicity of PPRC modified sectors bitmaps are dynamically managed by placing the multiplicity of PPRC modified sectors bitmaps into slots of bind segments.08-27-2015
20150242127OPTIMIZING PEER-TO-PEER REMOTE COPY (PPRC) TRANSFERS FOR PARTIAL WRITE OPERATIONS - For optimizing peer-to-peer remote copy (PPRC) transfers for partial write operations in a computing storage environment by a processor device, a PPRC modified sectors bitmap is maintained in bind segments upon demoting a track out of a cache for transferring a partial track after demoting the track, wherein a hash table is used for locating the PPRC modified sectors bitmap.08-27-2015
20150242316ASYNCHRONOUS CLEANUP AFTER A PEER-TO-PEER REMOTE COPY (PPRC) TERMINATE RELATIONSHIP OPERATION - For asynchronous cleanup after a peer-to-peer remote copy (PPRC) terminate relationship operation in a computing storage environment by a processor device, asynchronously cleaning up a plurality of PPRC modified sectors bitmaps using a PPRC terminate-relationship cleanup operation by throttling a number of tasks performing the PPRC terminate-relationship cleanup operation while releasing a plurality of bind segments until completion of the PPRC terminate-relationship cleanup operation.08-27-2015
20150248239CASCADED, POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a volume x in a cascaded architecture is described. In one embodiment, such a method includes determining whether the volume x has a child volume, wherein each of the volume x and the child volume have a target bit map (TBM) associated therewith. The method then determines whether the TBMs of both the volume x and the child volume are set. If the TBMs are set, the method finds a higher source (HS) volume from which to copy the desired data to the child volume. Finding the HS volume includes comparing ages of mapping relationships upstream from the volume x in order to determine a source of the data. Once the HS volume is found, the method copies the data from the HS volume to the child volume and performs the write to the volume x. A method for performing a read is also disclosed herein.09-03-2015
20150261440ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records.09-17-2015
20150261441ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records.09-17-2015
20150261453GROUPING OF TRACKS FOR COPY SOURCE TO TARGET DESTAGE ON GLOBAL MIRROR SECONDARY - For performing efficient full-stride copy source-to-target operations in a computing storage environment by a processor device, pursuant to a destage operation, a determination is made whether to destage a full stride or one track of data on a target volume by comparing a counted number of modified tracks for the full stride against a predetermined threshold.09-17-2015
20150261678MANAGING SEQUENTIALITY OF TRACKS FOR ASYNCHRONOUS PPRC TRACKS ON SECONDARY - For performing efficient management of tracks in an asynchronous Peer-to-Peer Remote Copy (PPRC) operation in a computing storage environment, a correct status of a sequential bit is determined by performing one of: (1) examining a primary cache, where if data being transferred pursuant to the PPRC operation in a primary track remains in the primary cache, the sequential bit setting found therein is used, and (2) examining an Out-Of-Sync (OOS) bitmap to determine if the sequential bit is set.09-17-2015
20150261685SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Storage tracks from at least one host are destaged from the write cache rank when it is determined that the at least one host is idle with respect to a first set of ranks, and storage tracks are refrained from being destaged from each rank when it is determined that the at least one host is not idle with respect to a second set of ranks such that storage tracks in the first set of ranks may be destaged while storage tracks in the second set of ranks are not being destaged.09-17-2015
20150261689SYSTEMS AND METHODS FOR BACKGROUND DESTAGING STORAGE TRACKS - Storage tracks from at least one server are destaged from the write cache rank when it is determined that the at least one server is idle with respect to a first set of ranks, and storage tracks are refrained from being destaged from each rank when it is determined that the at least one server is not idle with respect to a second set of ranks such that storage tracks in the first set of ranks may be destaged while storage tracks in the second set of ranks are not being destaged.09-17-2015
20150268887WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.09-24-2015
20150268892WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.09-24-2015
20150278115WRITING ADJACENT TRACKS TO A STRIDE, BASED ON A COMPARISON OF A DESTAGING OF TRACKS TO A DEFRAGMENTATION OF THE STRIDE - Compressed data is maintained in a plurality of strides of a redundant array of independent disks, wherein a stride is configurable to store a plurality of tracks. A request is received to write one or more tracks. The one or more tracks are written to a selected stride of the plurality of strides, based on comparing the number of operations required to destage selected tracks from the selected stride to the number of operations required to defragment the compressed data in the selected stride.10-01-2015
20150286418TIERED CACHING AND MIGRATION IN DIFFERING GRANULARITIES - For data processing in a distributed computing storage environment by a processor device, the distributed computing environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, groups of data segments and clumped hot ones of the data segments are migrated between the tiered levels of storage such that uniformly hot ones of the groups of data segments are migrated to use a Solid State Drive (SSD) portion of the tiered levels of storage; uniformly hot groups of data segments are determined using a first, largest-granularity, heat map for a selected one of the group of the data segments; and a second heat map, which is smaller than the first and has the largest granularity of the first heat map, is used to determine the clumped hot groups.10-08-2015
20150286572EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - Various embodiments for cache management in a distributed computing storage environment are provided. In one embodiment, a processor device, for a plurality of input/output (I/O) operations, initiates a process, separate from a process responsible for data segment assembly, for waking a predetermined number of waiting I/O operations.10-08-2015
20150286580MANAGEMENT OF PARTIAL DATA SEGMENTS IN DUAL CACHE SYSTEMS - For movement of partial data segments within a computing storage environment having lower and higher levels of cache by a processor, a whole data segment containing one of the partial data segments is promoted to both the lower and higher levels of cache, the promotion taking into consideration an Input/Output Performance (IOP) metric, a bandwidth metric, and a garbage collection metric.10-08-2015
20150301855PREFERENTIAL CPU UTILIZATION FOR TASKS - In a distributed server storage environment, a set of like tasks to be performed is organized into a first group, and a last used processing group associated with the like tasks is stored. Upon a subsequent dispatch, the last used processing group is compared to other processing groups and the tasks are assigned to a processing group based upon a predetermined threshold.10-22-2015
20150301862PREFERENTIAL CPU UTILIZATION FOR TASKS - A set of like tasks to be performed is organized into a first group, and a last used processing group associated with the like tasks is stored. Upon a subsequent dispatch, the last used processing group is compared to other processing groups and the tasks are assigned to a processing group based upon a predetermined threshold.10-22-2015
20150309747REPLICATING TRACKS FROM A FIRST STORAGE SITE TO A SECOND AND THIRD STORAGE SITES - Provided are a computer program product, system, and method for replicating tracks from a first storage to a second and third storages. A determination is made of a track in the first storage to transfer to the second storage as part of a point-in-time copy relationship and of a stride of tracks including the target track. The stride of tracks including the target track is staged from the first storage to a cache according to the point-in-time copy relationship. The staged stride is destaged from the cache to the second storage. The stride in the cache is transferred to the third storage as part of a mirror copy relationship. The stride of tracks in the cache is demoted in response to destaging the stride of the tracks in the cache to the second storage and transferring the stride of tracks in the cache to the third storage.10-29-2015
20150331614USING QUEUES CORRESPONDING TO ATTRIBUTE VALUES ASSOCIATED WITH UNITS OF WORK AND SUB-UNITS OF THE UNIT OF WORK TO SELECT THE UNITS OF WORK AND THEIR SUB-UNITS TO PROCESS - Provided are a computer program product, system, and method for using queues corresponding to attribute values associated with units of work and sub-units of the unit of work to select the units of work and their sub-units to process. There are a plurality of work unit queues, each associated with different work unit attribute values that are associated with units of work, wherein the work unit queues include records for units of work to process having work unit attribute values associated with the work unit attribute values of the work unit queues. There are a plurality of work sub-unit queues, wherein each is associated with different work sub-unit attribute values that are associated with sub-units of work. Records are added for work sub-units of a unit of work to the work sub-unit queues, and records are selected from the work sub-unit queues to process the sub-units of work.11-19-2015
20150331710USING QUEUES CORRESPONDING TO ATTRIBUTE VALUES ASSOCIATED WITH UNITS OF WORK TO SELECT THE UNITS OF WORK TO PROCESS - Provided are a computer program product, system, and method for using queues corresponding to attribute values associated with units of work to select the units of work to process. A plurality of queues for each of a plurality of attribute types of attributes are associated with the units of work to process, wherein there are queues for different possible attribute values for each of the attribute types. A unit of work to process is received. A determination is made, for each of the attribute types, of at least one of the queues corresponding to at least one attribute value for the attribute type associated with the received unit of work. A record for the received unit of work is added to each of the determined queues.11-19-2015
20150331712CONCURRENTLY PROCESSING PARTS OF CELLS OF A DATA STRUCTURE WITH MULTIPLE PROCESSES - Provided are a computer program product, system, and method for concurrently processing parts of cells of a data structure with multiple processes. Information is provided to indicate a partitioning of the cells of the data structure into a plurality of parts, each part having a cursor pointing to a cell in the part. Processes concurrently process different parts of the data structure by performing: determining from the cursor for the part one of the cells in the part to process; processing the cells from the cursor to determine whether to process the unit of work corresponding to the cell; and setting the cursor to identify one of the cells from which processing is to continue in a subsequent iteration in response to processing the units of work for a plurality of the processed cells.11-19-2015
20150331716USING QUEUES CORRESPONDING TO ATTRIBUTE VALUES AND PRIORITIES ASSOCIATED WITH UNITS OF WORK AND SUB-UNITS OF THE UNIT OF WORK TO SELECT THE UNITS OF WORK AND THEIR SUB-UNITS TO PROCESS - Provided are a computer program product, system, and method for using queues corresponding to attribute values and priorities associated with units of work and sub-units of the unit of work to select the units of work and their sub-units to process. There are a plurality of work unit queues, wherein each of the work unit queues are associated with different work unit attribute values that are associated with units of work, wherein a plurality of the work unit queues include records for units of work to process having work unit attribute values associated with the work unit attribute values of the work unit queues, and wherein the work unit queues are each associated with a different priority. A record for a unit of work to perform is added to the work unit queue associated with a priority and work unit attribute value associated with the work unit.11-19-2015
20150339074ASSIGNING DEVICE ADAPTORS TO USE TO COPY SOURCE EXTENTS TO TARGET EXTENTS IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for assigning device adaptors to use to copy source extents in source ranks to target extents in target ranks in a copy relation. A determination is made of an order of the target ranks in the copy relation. Target ranks in the copy relation are selected according to the determined order. For each selected target rank, indication is made in a device adaptor assignment data structure of a source device adaptor and target device adaptor of the device adaptors to use to copy the source rank to the selected target rank indicated in the copy relation, wherein indication is made for the selected target ranks according to the determined order. The source ranks are copied to the selected target ranks using the source and target device adaptors indicated in the device adaptor assignment data structure.11-26-2015
20150339182FUZZY COUNTERS FOR NVS TO REDUCE LOCK CONTENTION - A system for data management in a computing storage environment includes a processor device, operable in the computing storage environment, that divides a plurality of counters tracking write and discard storage operations through Non Volatile Storage (NVS) space into a first, accurate, group and a second, fuzzy, group, where the first, accurate, group is updated on a per-operation basis, while the second, fuzzy, group is updated on a more infrequent basis as compared to the first, accurate, group.11-26-2015
20150347318THINLY PROVISIONED FLASH CACHE WITH SHARED STORAGE POOL - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and managed tiered levels of storage, a Solid State Device (SSD) tier is variably shared between the lower-speed cache and the managed tiered levels of storage such that the managed tiered levels of storage are operational on large data segments, and the lower-speed cache is allocated with the large data segments, yet operates with data segments of a smaller size than the large data segments and within the large data segments, where if selected data segments are cached in the lower-speed cache and are determined to become uniformly hot, the selected data segments are migrated from the lower-speed cache to the SSD tier.12-03-2015
20150378909PERFORMING STAGING OR DESTAGING BASED ON THE NUMBER OF WAITING DISCARD SCANS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether more than a threshold number of discard scans are waiting to be performed. The controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache, in response to determining that more than the threshold number of discard scans are waiting to be performed.12-31-2015
20150378929SYNCHRONOUS AND ASYNCHRONOUS DISCARD SCANS BASED ON THE TYPE OF CACHE MEMORY - A computational device maintains a first type of cache and a second type of cache. The computational device receives a command from the host to release space. The computational device synchronously discards tracks from the first type of cache, and asynchronously discards tracks from the second type of cache.12-31-2015
20160004456SELECTIVE SPACE RECLAMATION OF DATA STORAGE MEMORY EMPLOYING HEAT AND RELOCATION METRICS - Space of a data storage memory of a data storage memory system is reclaimed by determining heat metrics of data stored in the data storage memory; determining relocation metrics related to relocation of the data within the data storage memory; determining utility metrics of the data relating the heat metrics to the relocation metrics for the data; and making the data whose utility metric fails a utility metric threshold available for space reclamation.01-07-2016
20160026578USE OF DIFFERING GRANULARITY HEAT MAPS FOR CACHING AND MIGRATION - For data processing in a computing storage environment by a processor device, the computing storage environment incorporating at least high-speed and lower-speed caches, and tiered levels of storage, groups of data segments are migrated between the tiered levels of storage such that if a selected group is cached in the lower-speed cache and is determined to become uniformly hot, the selected group is migrated from the lower-speed cache to a Solid State Drive (SSD) portion of the tiered levels of storage while refraining from processing data retained in the lower-speed cache until the selected group is fully migrated to the SSD portion.01-28-2016
20160055090ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records.02-25-2016
20160055091FUZZY COUNTERS FOR NVS TO REDUCE LOCK CONTENTION - A method for data management in a computing storage environment includes a processor device, operable in the computing storage environment, that divides a plurality of counters tracking write and discard storage operations through Non Volatile Storage (NVS) space into a first, accurate, group and a second, fuzzy, group, where the first, accurate, group is updated on a per-operation basis, while the second, fuzzy, group is updated on a more infrequent basis as compared to the first, accurate, group.02-25-2016
20160055092ADAPTIVE RECORD CACHING FOR SOLID STATE DISKS - A storage controller receives a request that corresponds to an access of a track. A determination is made as to whether the track corresponds to data stored in a solid state disk. Record staging to a cache from the solid state disk is performed, in response to determining that the track corresponds to data stored in the solid state disk, wherein each track is comprised of a plurality of records.02-25-2016
20160085454PERFORMING ASYNCHRONOUS DISCARD SCANS WITH STAGING AND DESTAGING OPERATIONS - A controller receives a request to perform staging or destaging operations with respect to an area of a cache. A determination is made as to whether one or more discard scans are being performed or queued for the area of the cache. In response to determining that one or more discard scans are being performed or queued for the area of the cache, the controller avoids satisfying the request to perform the staging or the destaging operations or a read hit with respect to the area of the cache.03-24-2016
20160085673CONCURRENT UPDATE OF DATA IN CACHE WITH DESTAGE TO DISK - Mechanisms for concurrent update of data in cache with destage of data from the cache to disk by a processor device. A second copy of the data is established in the cache and on a cache directory. A first copy of the data and the second copy of the data are adjacently ordered in the cache directory. One of the first and second copies is held for an update operation so as to include a latest data modification, while the remaining copy concurrently is used for a destage operation to disk.03-24-2016
20160098295INCREASED CACHE PERFORMANCE WITH MULTI-LEVEL QUEUES OF COMPLETE TRACKS - Exemplary method, system, and computer program product embodiments for increased cache performance using multi-level queues by a processor device. The method includes distributing to each one of a plurality of central processing units (CPUs) workload operations for creating complete tracks from partial tracks, creating sub-queues of the complete tracks for distributing to each one of the CPUs, and creating demote scan tasks based on workload of the CPUs. Additional system and computer program product embodiments are disclosed and provide related advantages.04-07-2016
20160098296TASK POOLING AND WORK AFFINITY IN DATA PROCESSING - Mechanisms for improving computing system performance by a processor device. System resources are organized into a plurality of groups. Each of the plurality of groups is assigned one of a plurality of predetermined task pools. Each of the predetermined task pools has a plurality of tasks. Each of the plurality of groups corresponds to at least one physical boundary of the system resources such that a speed of an execution of those of the plurality of tasks for a particular one of the plurality of predetermined task pools is optimized by a placement of an association with the at least one physical boundary and the plurality of groups.04-07-2016
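Several of the entries above describe concrete caching mechanisms, so a few simplified Python sketches follow; none of them is code from the filings. First, applications 20140208036, 20150378909, and 20160085454 defer staging and destaging for a cache area while too many discard scans are waiting. A minimal sketch of that check, with the queue, the class name, and the fixed threshold all assumed for illustration:

    from collections import deque

    DISCARD_SCAN_THRESHOLD = 4          # assumed value; the filings leave this configurable

    class CacheArea:
        """A cache region that staging, destaging, and discard scans contend for."""

        def __init__(self, name):
            self.name = name
            self.waiting_discard_scans = deque()   # discard scans queued against this area

        def request_stage_or_destage(self):
            # Defer the stage/destage (or read hit) while too many discard scans wait.
            if len(self.waiting_discard_scans) > DISCARD_SCAN_THRESHOLD:
                return "deferred"
            return "performed"

    area = CacheArea("rank-0")
    for i in range(6):
        area.waiting_discard_scans.append("scan-%d" % i)
    print(area.request_stage_or_destage())          # deferred, because 6 > 4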
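Applications 20150026409 and 20140082296 batch lock-requiring operations per core and take one global lock per batch to avoid excessive lock duty cycling. A rough sketch of that pattern, with a threading.Lock standing in for the global cache lock and all names invented:

    import threading
    from collections import defaultdict

    global_lock = threading.Lock()      # stands in for the global cache/LRU-list lock
    pending = defaultdict(list)         # per-core batches of deferred operations

    def defer_operation(core_id, op):
        """Queue an operation (e.g. a re-MRU of a track) instead of locking immediately."""
        pending[core_id].append(op)

    def flush_core(core_id):
        """Acquire the global lock once and drain this core's whole batch under it."""
        batch, pending[core_id] = pending[core_id], []
        with global_lock:
            for op in batch:
                op()

    moved = []
    for i in range(100):
        defer_operation(core_id=0, op=lambda i=i: moved.append(i))
    flush_core(0)
    print(len(moved))                   # 100 operations performed with one lock acquisition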
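Application 20150040135 reserves a first tier of task control blocks (TCBs) guaranteeing each rank a minimum, and apportions a second tier by a periodically recomputed scaling factor. A sketch under assumed numbers; the tier sizes, the per-rank minimum, and the use of recent activity as the scaling input are illustrative guesses:

    TIER1_PER_RANK = 4     # guaranteed minimum TCBs per storage rank (assumed)
    TIER2_POOL = 40        # shared second-tier pool of TCBs (assumed)

    def apportion_tcbs(requests, activity):
        """requests and activity are dicts keyed by rank id.

        Every rank keeps its tier-1 minimum; the tier-2 pool is split in
        proportion to each rank's share of recent activity (the scaling factor),
        capped at what the rank actually asked for.
        """
        total_activity = sum(activity.values()) or 1
        grants = {}
        for rank, wanted in requests.items():
            share = int(TIER2_POOL * activity.get(rank, 0) / total_activity)
            grants[rank] = TIER1_PER_RANK + min(share, max(wanted - TIER1_PER_RANK, 0))
        return grants

    print(apportion_tcbs(requests={"R1": 20, "R2": 6},
                         activity={"R1": 300, "R2": 100}))
    # {'R1': 20, 'R2': 6} - R1 receives 16 second-tier TCBs, R2 only 2 on top of its minimum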
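Applications 20150339182 and 20160055091 split NVS counters into an accurate group updated on every operation and a fuzzy group updated less often, trading a small lag for fewer lock acquisitions. A single-structure sketch, with the fold interval and member names assumed:

    import threading

    class NvsCounters:
        """Accurate counters update under the lock on every operation; fuzzy
        counters accumulate thread-locally and are folded in once per FOLD_EVERY."""

        FOLD_EVERY = 64                     # assumed batching interval

        def __init__(self):
            self.lock = threading.Lock()
            self.accurate = {"writes": 0}   # always exact
            self.fuzzy = {"discards": 0}    # may lag by up to FOLD_EVERY - 1 per thread
            self._local = threading.local()

        def record_write(self):
            with self.lock:
                self.accurate["writes"] += 1

        def record_discard(self):
            pending = getattr(self._local, "pending", 0) + 1
            if pending >= self.FOLD_EVERY:
                with self.lock:
                    self.fuzzy["discards"] += pending
                pending = 0
            self._local.pending = pending

    counters = NvsCounters()
    for _ in range(200):
        counters.record_discard()
    print(counters.fuzzy["discards"])       # 192: three folds of 64; 8 remain pending locally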

Patent applications by Lokesh M. Gupta, Tucson, AZ US

Lokesh Mohan Gupta, Tucson, AZ US

Patent application numberDescriptionPublished
20080201523PRESERVATION OF CACHE DATA FOLLOWING FAILOVER - In a data storage subsystem with disk storage and a pair of clusters, one set of DASD fast write data is in cache of one cluster and in non-volatile data storage of the other. In response to a failover of one of the pair of clusters to a local cluster, the local cluster converts the DASD fast write data in local cache to converted fast write data to prioritize the converted data for destaging to disk storage. In response to failure to destage, the local cluster allocates local non-volatile storage tracks and emulates a host adapter to store the converted fast write data by the local non-volatile storage, reconverting the converted fast write data of the non-volatile storage to local DASD fast write data stored in the local non-volatile storage and stored in the local cache storage.08-21-2008
20080250210COPYING DATA FROM A FIRST CLUSTER TO A SECOND CLUSTER TO REASSIGN STORAGE AREAS FROM THE FIRST CLUSTER TO THE SECOND CLUSTER - Provided are a method, system, and article of manufacture for copying data from a first cluster to a second cluster to reassign storage areas from the first cluster to the second cluster. An operation is initiated to reassign storage areas from a first cluster to a second cluster, wherein the first cluster includes a first cache and a first storage unit and the second cluster includes a second cache and a second storage unit. Data in the first cache for the storage areas to reassign to the second cluster is copied to the second cache. Data in the first storage unit for storage areas remaining assigned to the first cluster is copied to the second storage unit.10-09-2008
20090300298MEMORY PRESERVED CACHE TO PREVENT DATA LOSS - A method, system, and computer program product for preserving data in a storage subsystem having dual cache and dual nonvolatile storage (NVS) through a failover from a failed cluster to a surviving cluster is provided. A memory preserved indicator is initiated to mark tracks on a cache of the surviving cluster to be preserved, the tracks having an image in an NVS of the failed cluster. A destage operation is performed to destage the marked tracks. Subsequent to a determination that each of the marked tracks has been destaged, the memory preserved indicator is disabled to remove the mark from the tracks. If the surviving cluster reboots previous to each of the marked tracks having been destaged, the cache is verified as a memory preserved cache, the marked tracks are retained for processing while all unmarked tracks are removed, and the marked tracks are processed.12-03-2009
20090300408MEMORY PRESERVED CACHE FAILSAFE REBOOT MECHANISM - A method, system and computer program product for preserving data in a storage subsystem having dual cache and dual nonvolatile storage (NVS) through a failover from a failed cluster to a surviving cluster, the surviving cluster undergoing a rebooting process, is provided. A memory preserved indicator associated with a cache of the surviving cluster is detected. The memory preserved indicator designates marked tracks having an image in an NVS of the failed cluster to be preserved through the rebooting process. A counter in a data structure of the surviving cache is incremented. If a value of the counter exceeds a predetermined value, a cache memory is initialized, and the marked tracks are removed from the cache to prevent an instance of repetitive reboots caused by a corrupted structure in the cache memory.12-03-2009
20100037226GROUPING AND DISPATCHING SCANS IN CACHE - A method, system, and computer program product for grouping and dispatching scans in a cache directory of a processing environment is provided. A plurality of scan tasks is aggregated from a scan wait queue into a scan task queue. The plurality of scan tasks is determined by selecting one of (1) each of the plurality of scan tasks on the scan wait queue, (2) a predetermined number of the plurality of scan tasks on the scan wait queue, and (3) a set of scan tasks of a similar type on the scan wait queue. A first scan task from the plurality of scan tasks is selected from the scan task queue. The scan task is performed.02-11-2010
20100191925DEFERRED VOLUME METADATA INVALIDATION - A method, system, and computer program product for managing modified metadata in a storage controller cache pursuant to a recovery action by a processor in communication with a memory device is provided. A count of modified metadata tracks for a storage rank is compared against a predetermined criterion. If the predetermined criterion is met, a storage volume having the storage rank is designated with a metadata invalidation flag to defer metadata invalidation of the modified metadata tracks until after the recovery action is performed.07-29-2010
20120131293DATA ARCHIVING USING DATA COMPRESSION OF A FLASH COPY - Embodiments of the disclosure relate to archiving data in a storage system. An exemplary embodiment comprises making a flash copy of data in a source volume, compressing data in the flash copy wherein each track of data is compressed into a set of data pages, and storing the compressed data pages in a target volume. Data extents for the target volume may be allocated from a pool of compressed data extents. After each stride worth of data is compressed and stored in the target volume, data may be destaged to avoid destage penalties. Data from the target volume may be decompressed from a flash copy of the target volume in a reverse process to restore each data track, when the archived data is needed. Data may be compressed and decompressed using a Lempel-Ziv-Welch process.05-24-2012
20130212347MULTI-TARGET, POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a source volume in a multi-target architecture is described. The multi-target architecture includes a source volume and multiple target volumes mapped thereto. In one embodiment, such a method includes copying data in a track of the source volume to a corresponding track of a target volume (target x). The method enables one or more sibling target volumes (siblings) mapped to the source volume to inherit the data from the target x. When the data is successfully copied to the target x, the method performs a write to the track of the source volume. Other methods for reading and writing data to volumes in the multi-target architecture are also described.08-15-2013
20130219122MULTI-STAGE CACHE DIRECTORY AND VARIABLE CACHE-LINE SIZE FOR TIERED STORAGE ARCHITECTURES - A method in accordance with the invention includes providing first, second, and third storage tiers, wherein the first storage tier acts as a cache for the second storage tier, and the second storage tier acts as a cache for the third storage tier. The first storage tier uses a first cache line size corresponding to an extent size of the second storage tier. The second storage tier uses a second cache line size corresponding to an extent size of the third storage tier. The second cache line size is significantly larger than the first cache line size. The method further maintains, in the first storage tier, a first cache directory indicating which extents from the second storage tier are cached in the first storage tier, and a second cache directory indicating which extents from the third storage tier are cached in the second storage tier.08-22-2013
20130219141CASCADED, POINT-IN-TIME-COPY ARCHITECTURE WITH DATA DEDUPLICATION - A method for performing a write to a volume x in a cascaded architecture is described. In one embodiment, such a method includes determining whether the volume x has a child volume, wherein each of the volume x and the child volume have a target bit map (TBM) associated therewith. The method then determines whether the TBMs of both the volume x and the child volume are set. If the TBMs are set, the method finds a higher source (HS) volume from which to copy the desired data to the child volume. Finding the HS volume includes travelling up the cascaded architecture until the source of the data is found. Once the HS volume is found, the method copies the data from the HS volume to the child volume and performs the write to the volume x. A method for performing a read is also disclosed herein.08-22-2013
20130219142DELETING RELATIONS IN MULTI-TARGET, POINT-IN-TIME-COPY ARCHITECTURES WITH DATA DEDUPLICATION - A method for deleting a relation between a source and a target in a multi-target architecture is described. The multi-target architecture includes a source and multiple targets mapped thereto. In one embodiment, such a method includes initially identifying a relation for deletion from the multi-target architecture. A target associated with the relation is then identified. The method then identifies a sibling target that inherits data from the target. Once the target and the sibling target are identified, the method copies the data from the target to the sibling target. The relation between the source and the target is then deleted. A corresponding computer program product is also disclosed and claimed herein.08-22-2013
20140082231EFFICIENT PROCESSING OF CACHE SEGMENT WAITERS - For a plurality of input/output (I/O) operations waiting to assemble complete data tracks from data segments, a process, separate from a process responsible for the data assembly into the complete data tracks, is initiated for waking a predetermined number of the waiting I/O operations. A total number of I/O operations to be awoken at each of an iterated instance of the waking is limited.03-20-2014
20140082254RECOVERY FROM CACHE AND NVS OUT OF SYNC - For cache/data management in a computing storage environment, incoming data segments into a Non Volatile Storage (NVS) device of the computing storage environment are validated against a bitmap to determine if the incoming data segments are currently in use. Those of the incoming data segments determined to be currently in use are designated to the computing storage environment to protect data integrity.03-20-2014
20140082292EFFICIENT CACHE VOLUME SIT SCANS - A processor, operable in a computing storage environment, allocates portions of a Scatter Index Table (SIT) disproportionately between a larger portion dedicated to metadata tracks and a smaller portion dedicated to user data tracks, and processes a storage operation through the disproportionately allocated portions of the SIT using an allocated number of Task Control Blocks (TCBs).03-20-2014
20140082296DEFERRED RE-MRU OPERATIONS TO REDUCE LOCK CONTENTION - Data operations, requiring a lock, are batched into a set of operations to be performed on a per-core basis. A global lock for the set of operations is periodically acquired, the set of operations is performed, and the global lock is freed so as to avoid excessive duty cycling of lock and unlock operations in the computing storage environment.03-20-2014
20140082629PREFERENTIAL CPU UTILIZATION FOR TASKS - A set of like tasks to be performed is organized into a first group. Upon a determined imbalance between dispatch queue depths greater than a predetermined threshold, the set of like tasks is reassigned to an additional group.03-20-2014
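Application 20130219122 above keeps two cache directories: one recording which second-tier extents are cached in the first tier (small cache lines) and one recording which third-tier extents are cached in the second tier (much larger cache lines). A toy directory sketch with made-up extent sizes, not details from the filing:

    TIER2_EXTENT = 1 * 1024 * 1024     # assumed: tier-1 cache-line size = one tier-2 extent (1 MiB)
    TIER3_EXTENT = 64 * 1024 * 1024    # assumed: tier-2 cache-line size = one tier-3 extent (64 MiB)

    tier1_directory = set()   # tier-2 extent numbers currently cached in tier 1
    tier2_directory = set()   # tier-3 extent numbers currently cached in tier 2

    def lookup(byte_offset):
        """Walk the two directories from the fastest tier to the slowest."""
        if byte_offset // TIER2_EXTENT in tier1_directory:
            return "tier1"
        if byte_offset // TIER3_EXTENT in tier2_directory:
            return "tier2"
        return "tier3"

    tier2_directory.add(0)                  # the first 64 MiB of tier 3 is cached in tier 2
    tier1_directory.add(5)                  # 1 MiB extent number 5 is also cached in tier 1
    print(lookup(5 * TIER2_EXTENT + 100))   # tier1
    print(lookup(30 * TIER2_EXTENT))        # tier2 (still inside tier-3 extent 0)
    print(lookup(100 * TIER2_EXTENT))       # tier3 (outside any cached extent)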

Patent applications by Lokesh Mohan Gupta, Tucson, AZ US

Mohit Gupta, Chandler, AZ US

Patent application numberDescriptionPublished
20140120293ELECTROSTATIC DISCHARGE COMPATIBLE DICING TAPE WITH LASER SCRIBE CAPABILITY - The present disclosure relates to the field of fabricating microelectronic devices, wherein a microelectronic device substrate, such as a microelectronic wafer, may be diced into individual microelectronic dice using an adhesive tape which reduces the potential of electrostatic discharge damage by the incorporation of anti-static agents, and may be compatible with a laser scribing process by the incorporation of ultraviolet light absorbing agents into an adhesive layer of the adhesive tape.05-01-2014

Nidhi Gupta, Phoenix, AZ US

Patent application numberDescriptionPublished
20120021967SYNTHETIC ANTIBODIES - The present invention provides methods for synthetic antibodies, methods for making synthetic antibodies, methods for identifying ligands, and related methods and reagents.01-26-2012
20120065123Synthetic Antibodies - Methods for synthetic antibodies, methods for making synthetic antibodies, methods for identifying ligands, and related methods and reagents.03-15-2012
20120220540SYNBODIES TO AKT1 - The present application provides synbodies against AKT1 differing in amino acid sequence, conjugation chemistry, linker/scaffold, or adjunct moiety. The synbodies are useful for diagnosis and treatment of cancer and as research reagents.08-30-2012
20140128280Synthetic Antibodies - The present invention provides methods for synthetic antibodies, methods for making synthetic antibodies, methods for identifying ligands, and related methods and reagents.05-08-2014

Nidhi Gupta, Tempe, AZ US

Patent application numberDescriptionPublished
20150141296METHODS FOR PERFORMING PATTERNED CHEMISTRY - Provided are methods for performing patterned chemistry and arrays prepared thereby.05-21-2015

Rajiv Gupta, Tucson, AZ US

Patent application numberDescriptionPublished
20090172644SOFTWARE FLOW TRACKING USING MULTIPLE THREADS - Methods, systems and machine readable media are disclosed for performing dynamic information flow tracking. One method includes executing operations of a program with a main thread, and tracking the main thread's execution of the operations of the program with a tracking thread. The method further includes updating, with the tracking thread, a taint value associated with the value of the main thread to reflect whether the value is tainted, and determining, with the tracking thread based upon the taint value, whether use of the value by the main thread violates a specific security policy.07-02-2009
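The dynamic information flow tracking in application 20090172644 above pairs a main thread with a tracking thread that mirrors its operations and propagates taint. A much-simplified, single-threaded sketch in which the tracking state is just a shadow map; the propagation rule, the policy, and all names are assumptions for illustration only:

    taint = {}   # shadow map: variable name -> True if derived from untrusted input

    def mark_tainted(name):
        taint[name] = True

    def assign(dst, *srcs):
        """Propagate taint on every assignment: the destination is tainted if any source is."""
        taint[dst] = any(taint.get(s, False) for s in srcs)

    def check_policy(name, sink):
        """Example policy: tainted values must not reach a sensitive sink."""
        if taint.get(name, False):
            raise RuntimeError("policy violation: tainted %r used at %s" % (name, sink))

    mark_tainted("user_input")
    assign("query", "user_input", "constant_prefix")   # query inherits the taint
    assign("log_line", "constant_prefix")              # untainted
    check_policy("log_line", "logger")                 # passes
    try:
        check_policy("query", "sql_execute")           # flagged: tainted data reaches a sink
    except RuntimeError as err:
        print(err)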

Sandeep K. S. Gupta, Phoenix, AZ US

Patent application numberDescriptionPublished
20150087931SYSTEMS AND METHODS FOR MODEL-BASED NON-CONTACT PHYSIOLOGICAL DATA ACQUISITION - System and method for non-contact acquisition of current physiological data representing a subject. A first electromagnetic wave representing current physiological status of a first subject is modified by a second electromagnetic wave representing current physiological status of a second subject in proximity to the first subject. A parameter of the first electromagnetic wave representing a first physiological status of a first subject is measured with electronic circuitry to extract a parameter of the second electromagnetic wave. Historical physiological data associated with the second subject is acquired. The current physiological data representing current physiological status of the second subject is then derived based on historical physiological data of the second subject and a comparison between the first and second parameters.03-26-2015

Sheetal Gupta, Tucson, AZ US

Patent application numberDescriptionPublished
20100109740Clamp networks to insure operation of integrated circuit chips - Clamp networks are provided to insure successful operation of a variety of electronic circuits that are realized in the form of integrated circuit chips. These networks are especially suited for use in chips in which on-chip circuits generate a voltage to bias the chip substrate relative to the chip ground. The clamp networks are configured to drive a current between the chip ground and the chip substrate whenever the chip substrate begins to rise above the chip ground during turn on of the chip input voltage. The clamp networks thus insure that the chip substrate is properly biased when the input voltage has been established and that the chip, therefore, functions as intended.05-06-2010

Sudhir Gupta, Chandler, AZ US

Patent application numberDescriptionPublished
20130163417Application level admission overload control - Generally described, the present disclosure relates to communications. More specifically, this disclosure relates to application level admission overload control. In one illustrative embodiment, intelligence can be embedded into a communication system so that it can detect and prevent network attacks without the need of costly network and firewall appliances. The communication system can control the in-flow of network packets to help prevent system overload situations through a packet-oriented admission policy, a connection-oriented admission policy, or both. By doing so, it not only makes the communication system more robust, secure and cost effective, but also can prevent service interruptions. This can reduce support calls and prove cost-effective to the customer as well as a solution provider. The communication system can protect network applications from internal network traffic and/or attacks.06-27-2013
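The packet-oriented admission policy mentioned in application 20130163417 above can be approximated at the application layer with a token-bucket check; the rate, burst size, and class below are assumptions rather than details from the filing:

    import time

    class AdmissionPolicy:
        """Token bucket: admit a request/packet only if a token is available."""

        def __init__(self, rate_per_sec, burst):
            self.rate = rate_per_sec
            self.burst = burst
            self.tokens = float(burst)
            self.last = time.monotonic()

        def admit(self):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False        # reject instead of letting the application overload

    policy = AdmissionPolicy(rate_per_sec=100, burst=10)
    admitted = sum(policy.admit() for _ in range(1000))
    print(admitted)             # roughly the burst size, since the loop finishes almost instantly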