Patent application number | Description | Published |
20080201725 | ADDRESS REDUCTION FOR DATA STORAGE ENCLOSURES - A data storage enclosure management system of a plurality of service processors is configured to communicate externally via a pair of FC-AL loops. Lead and subsidiary service processors are defined and lead service processors connect to ones of the FC-AL loops with an FC-AL address, and the lead and subsidiary service processors are connected by a secondary communication link. The lead service processor(s) employ an identifier unassociated with the FC-AL address to differentiate communications of the lead service processor from communications of an associated subsidiary service processor, the lead service processor serving as a proxy for the associated subsidiary service processor with respect to the FC-AL address and communicating with the associated subsidiary service processor via the secondary communication link. | 08-21-2008 |
20080208930 | MANAGEMENT OF REDUNDANCY IN DATA ARRAYS - Provided are a method, system, and article of manufacture, wherein a determination is made that a first data array in a plurality of data arrays has to be repaired to replace a failed storage device within the first data array. A storage device is selected from a selected data array of the plurality of data arrays to replace the failed storage device, wherein a data value corresponding to the selected data array is less than the data value corresponding to the first data array. | 08-28-2008 |
20080209253 | SELECTION OF DATA ARRAYS - Provided are a method, system, and article of manufacture, wherein a plurality of data arrays coupled to a storage controller is maintained. Data arrays are selected from the plurality of data arrays based on predetermined selection rules. Data is stored redundantly in the selected data arrays by writing the data to the selected data arrays. | 08-28-2008 |
20080244283 | System and Method for Thresholding System Power Loss Notifications in a Data Processing System - A system and method for thresholding system power loss notifications in a data processing system are provided. Power loss detection modules are provided in a data processing system having one or more data processing devices, such as blades in an IBM BladeCenter® chassis. The power loss detection modules detect the type of infrastructure of the data processing system, a position of a corresponding data processing device within the data processing system, and a capability of the data processing system to provide power during a power loss scenario. The detection module detects various inputs identifying these types of data processing system and power system characteristics and provides logic for defining a set of behaviors during a power loss scenario, e.g., behaviors for sending system notifications of imminent power loss. The detection of the various inputs and the defining of a set of behaviors may be performed statically and/or dynamically. | 10-02-2008 |
20080244311 | System and Method for Thresholding System Power Loss Notifications in a Data Processing System Based on Vital Product Data - A system and method for thresholding system power loss notifications in a data processing system are provided. Power loss detection modules are provided in a data processing system having one or more data processing devices, such as blades in an IBM BladeCenter® chassis. The power loss detection modules detect the type of infrastructure of the data processing system, a position of a corresponding data processing device within the data processing system, and a capability of the data processing system to provide power during a power loss scenario. The detection module detects various inputs identifying these types of data processing system and power system characteristics and provides logic for defining a set of behaviors during a power loss scenario, e.g., behaviors for sending system notifications of imminent power loss. The detection of the various inputs and the defining of a set of behaviors may be performed statically and/or dynamically. | 10-02-2008 |
20080256420 | ERROR CHECKING ADDRESSABLE BLOCKS IN STORAGE - Provided are a method, system, and article of manufacture for error checking addressable blocks in storage. Addressable blocks of data are stored in a storage in stripes, wherein each stripe includes a plurality of data blocks for one of the addressable blocks and at least one checksum block including checksum data derived from the data blocks for the addressable block. A write request is received to modify data in one of the addressable blocks. The write and updating the checksum are performed in the stripe having the modified addressable block. An indication is made to perform an error checking operation on the stripe for the modified addressable block in response to the write request, wherein the error checking operation reads the data blocks and the checksum in the stripe to determine if the checksum data is accurate. An error handling operation is initiated in response to determining that the checksum data is not accurate. | 10-16-2008 |
20080320058 | APPARATUS, SYSTEM, AND METHOD FOR MAINTAINING DYNAMIC PERSISTENT DATA - An apparatus, system, and method are disclosed for maintaining dynamic persistent data. A selection module selects the most recent metadata. A verification module verifies that the metadata has been successfully updated. A validation module validates that the metadata is accurate. A communication module communicates the Persistent Storage Device data to a system processor if the metadata is validated. A storage module may store primary and secondary information of data, metadata, and data state variables. | 12-25-2008 |
20090049239 | CONSISTENT DATA STORAGE SUBSYSTEM CONFIGURATION REPLICATION IN ACCORDANCE WITH PORT ENABLEMENT SEQUENCING OF A ZONEABLE SWITCH - Consistency for replicating data storage subsystem configurations in accordance with a “golden” configuration file. A data storage subsystem comprises a blade system configured to support a plurality of blades and a storage system, each arranged in a predetermined slot of the blade system, and at least one zoneable switch whose zoning is disabled at power on. A management module operates the blade system to power on all slots. The storage system, in accordance with a “golden” configuration file, transfers port enablement sequencing to the switch, and the switch enables and zones ports in sequence to allow the server blades to see the storage system in accordance with the port enablement sequence. The storage system is configured with the “golden” configuration file to log on the server blades in accordance with the port enablement sequence to logically configure the server blades in accordance with the “golden” configuration file. | 02-19-2009 |
20090049290 | CONSISTENT DATA STORAGE SUBSYSTEM CONFIGURATION REPLICATION - Consistency for replicating data storage subsystem configurations in accordance with a “golden” configuration file. A data storage subsystem comprises a blade system with a plurality of slots, the blade system configured to support a plurality of blades and a storage system, each arranged in a predetermined slot of the blade system. A management module operates the blade system to first power on the storage system, and subsequently to power on the plurality of server blades in a sequential order that matches a blade system natural boot sequence order, skipping the storage system, and the storage system is configured with the “golden” configuration file to log on the server blades in accordance with the power on sequence to logically configure the server blades in accordance with the “golden” configuration file. | 02-19-2009 |
20090049291 | CONSISTENT DATA STORAGE SUBSYSTEM CONFIGURATION REPLICATION IN ACCORDANCE WITH SEQUENCE INFORMATION - Consistency for replicating data storage subsystem configurations in accordance with a “golden” configuration file. A data storage subsystem comprises a blade system with a plurality of slots, the blade system configured to support a plurality of blades and a storage system, each arranged in a predetermined slot of the blade system. A management module operates the blade system to first power on the storage system. In accordance with a “golden” configuration file, the storage system passes sequence information to the management module. The management module powers on the plurality of server blades in accordance with the sequence information. The storage system is configured with the “golden” configuration file to log on the server blades in accordance with the power on sequence to logically configure the server blades in accordance with the “golden” configuration file. | 02-19-2009 |
20090049334 | Method and Apparatus to Harden an Internal Non-Volatile Data Function for Sector Size Conversion - A sector conversion device includes a non-volatile memory area that is used to save two sectors' worth of data when power is lost during the sector conversion process. These two sectors of data are stored in the non-volatile memory area within the sector conversion device itself. The non-volatile memory within the sector conversion device is connected to the main internal memory of the device by a special link that is wider than the normal word size of the buffer. When power is lost to the storage enclosure during a scenario where data is being written to the hard disk drives, which involves sector conversion, the internal processor of the sector conversion device immediately initiates a transfer from the volatile buffer queue memory to the non-volatile memory. The information that is transferred (hardened) is the two sectors of data that were involved in the sector conversion process. | 02-19-2009 |
20090055599 | CONSISTENT DATA STORAGE SUBSYSTEM CONFIGURATION REPLICATION - Consistency for replicating data storage subsystem configurations in accordance with a “golden” configuration file. A data storage subsystem comprises a blade system with a plurality of slots, the blade system configured to support a plurality of blades and a storage system, each arranged in a predetermined slot of the blade system. The storage system arranges a logical configuration of the server blades in accordance with a “golden” configuration file. The server blade slot versus WWN information is collected and provided to the storage system. The storage system converts the “golden” configuration file slot information to WWNs. The server blades are enabled for access to said storage system as they log on with WWNs in accordance with the “golden” configuration file. | 02-26-2009 |
20090063768 | ALLOCATION OF HETEROGENEOUS STORAGE DEVICES TO SPARES AND STORAGE ARRAYS - A plurality of storage devices of a plurality of types is provided. A plurality of criteria is associated with each of the plurality of storage devices, based on characteristics of the plurality of storage devices, wherein the plurality of criteria can be used to determine whether a selected storage device is a compatibility spare for a storage device in a storage device array, and whether the selected storage device is an availability spare for the storage device in the storage device array. A determination is made by a spare management application, based on at least the plurality of criteria and at least one optimality condition, of a first set of storage devices selected from the plurality of storage devices to be allocated to a plurality of storage device arrays, and of a second set of storage devices selected from the plurality of storage devices to be allocated as spares for the plurality of storage device arrays. An allocation is made of the first set of storage devices to the plurality of storage device arrays. An allocation is made of the second set of storage devices as spares for the plurality of storage device arrays. | 03-05-2009 |
20090187786 | PARITY DATA MANAGEMENT SYSTEM APPARATUS AND METHOD - An apparatus for parity data management receives a write command and write data from a computing device. The apparatus also builds a parity control structure corresponding to updating a redundant disk array with the write data and stores the parity control structure in a persistent memory buffer of the computing device. The apparatus also updates the redundant disk array with the write data in accordance with a parity control map and restores the RAID controller parity map from the parity control structure as part of a data recovery operation if updating the redundant disk array with the write data is interrupted by a RAID controller failure resulting in a loss of the RAID controller parity map. In certain embodiments, the parity control structure is a RAID controller parity map. | 07-23-2009 |
20090293063 | MINIMIZATION OF READ RESPONSE TIME - A method, system and computer program product for minimizing read response time in a storage subsystem including a plurality of resources is provided. A middle logical block address (LBA) is calculated for a read request. A preferred resource of the plurality of resources is determined by calculating a minimum seek time based on a closest position to a last position of a head at each resource of the plurality of resources, estimated from the middle LBA. The read request is directed to at least one of the preferred resource or an alternative resource. | 11-26-2009 |
20100064102 | COMPONENT DISCOVERY IN MULTI-BLADE SERVER CHASSIS - A method for discovering components on a multi-blade server chassis having an input/output (I/O) module in communication with a plurality of components managed by an advanced management module (AMM) is provided. The I/O module includes a switch module, a redundant array of independent disks (RAID) controller and a baseboard management controller (BMC). A first address for a first component of the plurality of components is received. The first address is provided by a user. The switch module is queried for additional addresses for additional components of the plurality of components. The switch module obtains the additional addresses for the additional components from a first persistent storage location associated with the switch module. The first and additional addresses for the first and additional components are stored in a second persistent storage location accessible by the BMC, the switch module, and the RAID controller. | 03-11-2010 |
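The stripe-level error checking described in application 20080256420 (a write updates both data and checksum within a stripe, the stripe is flagged for a later scrub, and the scrub re-reads the blocks to verify the checksum) can be sketched as follows. This is an illustrative approximation only, not the patented implementation: the `Stripe` class, the tiny block size, and the use of XOR parity as the checksum are all assumptions made for the example, since the abstract does not specify them.

```python
# Illustrative sketch of stripe-level checksum scrubbing, loosely modeled on
# application 20080256420. Block size, XOR parity, and the Stripe class are
# assumptions for illustration; the abstract does not specify them.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks, used here as the checksum."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

class Stripe:
    def __init__(self, data_blocks):
        self.data = [bytearray(b) for b in data_blocks]
        self.checksum = bytearray(xor_blocks(self.data))
        self.needs_check = False  # set on each write, cleared by a scrub

    def write(self, index, new_block):
        """Modify one data block and update the checksum in the same stripe."""
        old = bytes(self.data[index])
        self.data[index] = bytearray(new_block)
        # Incremental parity update: checksum' = checksum XOR old XOR new
        self.checksum = bytearray(xor_blocks([self.checksum, old, new_block]))
        self.needs_check = True  # indicate the stripe needs error checking

    def scrub(self):
        """Re-read the data blocks and checksum; return True if accurate."""
        ok = bytes(self.checksum) == xor_blocks(self.data)
        self.needs_check = False
        if not ok:
            pass  # an error handling operation would be initiated here
        return ok
```

A corruption that bypasses `write` (e.g. a flipped byte on the medium) leaves the stored checksum stale, so a subsequent `scrub` detects the mismatch and would trigger error handling.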