Class / Patent application number | Description | Number of patent applications / Date published |
714020000 | Plural recovery data sets containing set interrelation data (e.g., time values or log record numbers) | 65 |
20080301496 | Data processor, data processing program, and data processing system - A related data storing unit stores a plurality of sets of related data related to a plurality of controlling units. An operation storing unit stores operation details of each of the plurality of controlling units as an operation log. An identification data recording unit records a plurality of sets of identification data in the operation log. An abnormality data recording unit records abnormality data in the operation log. A data acquiring unit acquires the abnormality data and one of the identification data. A related data acquiring unit identifies one of the related data corresponding to the one of the identification data acquired by the data acquiring unit and acquires the one of the related data from the related data storing unit. A resolution data storing unit stores first resolution data to resolve the abnormality occurring in the one of the control targets in association with the one of the related data acquired by the related data acquiring unit and the abnormality data acquired by the data acquiring unit. A resolution data acquiring unit acquires the first resolution data corresponding to the one of the control targets in which the abnormality occurred using the one of the related data acquired by the related data acquiring unit and the abnormality data acquired by the data acquiring unit. | 12-04-2008
20080307258 | Distributed Job Manager Recovery - A method is provided for the recovery of an instance of a job manager running on one of a plurality of nodes used to execute the processing elements associated with jobs that are executed within a cooperative data stream processing system. The states of the processing elements are checkpointed to a persistence mechanism in communication with the job manager. From the checkpointed processing element states, the state of each distributed job is determined and checkpointed. Processing element states are also checkpointed locally to the nodes on which the processing elements are running. Following a failure of the job manager, the job manager is reinstantiated on one of the nodes. The recovery instance of the job manager obtains state data for processing elements and jobs from the persistence mechanism and constructs an initial state for jobs and processing elements. These initial states are reconciled against the current states of the processing elements and adjustments are made accordingly. Once the job and processing element states are reconciled, the system is returned to normal operation. | 12-11-2008
20090013213 | SYSTEMS AND METHODS FOR INTELLIGENT DISK REBUILD AND LOGICAL GROUPING OF SAN STORAGE ZONES - A method of rebuilding a replacement drive used in a RAID group of drives is disclosed. The rebuilding method includes tracking data modification operations continuously during use of the drives. The method also includes saving the tracked data modifications to a log in a persistent storage, where the tracked data modifications are associated with stripe data present on the drives. Then, rebuilding a failed one of the drives with a replacement drive. The rebuilding is facilitated by referencing the log from the persistent storage, and the log facilitating reading only portions of stripe data from surviving drives and omitting reading of portions from the drives where no data was written. Thus, the rebuilding only rebuilds the stripe data to the replacement drive. Also provided is a zoning method, which enables logical zone creation from storage area networks. | 01-08-2009 |
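The rebuild-from-log idea in 20090013213 can be sketched in a few lines of Python. This is an illustrative simplification, not the patented method: a plain set stands in for the persistent log of tracked write operations, and XOR of surviving stripes stands in for RAID parity reconstruction. Only stripe indices that appear in the log are read and rebuilt; untouched stripes are skipped entirely.

```python
class StripeLog:
    """Stand-in for the persistent log: records which stripe indices were written."""
    def __init__(self):
        self.written = set()

    def record_write(self, stripe_index):
        self.written.add(stripe_index)

def rebuild(replacement, surviving_drives, log, total_stripes):
    """Rebuild only logged stripes onto the replacement drive; never-written
    stripes generate no reads against the surviving drives."""
    for i in range(total_stripes):
        if i in log.written:
            value = 0
            for drive in surviving_drives:  # XOR the survivors, RAID-parity style
                value ^= drive[i]
            replacement[i] = value
    return replacement
```

The payoff is proportional to how sparsely the array was written: a mostly-empty RAID group rebuilds in a fraction of the time of a full-scan rebuild.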
20090063900 | LOG COLLECTING SYSTEM, COMPUTER APPARATUS AND LOG COLLECTING PROGRAM - A log collecting system includes a computer apparatus and at least one peripheral apparatus connected to the computer apparatus, the computer apparatus collecting a log that records operation of the at least one peripheral apparatus. The peripheral apparatus includes, a first log memory controlling section that stores a first log relating to all operation of the at least one peripheral apparatus in a first log memory region, and a second log memory controlling section that stores, in a second log memory region, a second log indicative of any influence on the operation of the at least one peripheral apparatus among the first logs. The computer apparatus includes, a third log memory controlling section that stores, in a third log memory region, a third log relating to the operation of the computer apparatus concerning the at least one peripheral apparatus, a fourth log memory controlling section that continuously or discontinuously acquires the second log stored in the second log memory region, and stores the second log in a fourth log memory region, a first log acquiring section that acquires, at a predetermined timing, the first log stored in the first log memory region and a log information creating section that creates one log information with the acquired first log, the third log stored in the third log memory region, and the second log stored in the fourth log memory region when the first log acquiring section acquires the first log. | 03-05-2009 |
20090210744 | ENHANCED RAID LEVEL 3 - A method and system of enhanced RAID level 3 is disclosed. In one embodiment, a method includes allocating three times a physical storage capacity of a data drive to a dedicated parity drive of ‘n’ physical drives of a redundant array of independent disks, recovering n−1 physical drive failures of the ‘n’ physical drives through a parity-in-parity technique in which a certain number of parities generated during an initial write of data may be physically stored, and using an XOR function applied to the stored parities to recreate un-stored parities which enable recovery of the n−1 physical drive failures. The method may include creating a superior read/write access capability and/or a superior parity data redundancy through mirroring. The method may also include recreating the un-stored parities after a time interval that may be specified by a user. | 08-20-2009
20100162044 | METHOD FOR ERASURE CODING DATA ACROSS A PLURALITY OF DATA STORES IN A NETWORK - An efficient method to apply an erasure encoding and decoding scheme across dispersed data stores that receive constant updates. A data store is a persistent memory for storing a data block. Such data stores include, without limitation, a group of disks, a group of disk arrays, or the like. An encoding process applies a sequencing method to assign a sequence number to each data and checksum block as they are modified and updated onto their data stores. The method preferably uses the sequence number to identify data set consistency. The sequencing method allows for self-healing of each individual data store, and it maintains data consistency and correctness within a data block and among a group of data blocks. The inventive technique can be applied on many forms of distributed persistent data stores to provide failure resiliency and to maintain data consistency and correctness. | 06-24-2010 |
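The sequencing method in 20100162044 can be illustrated with a small sketch (an assumption-laden toy, not the patented scheme): every data and checksum block carries the sequence number it was last updated under, a block group is consistent only when all members agree, and self-healing targets the blocks that lag behind the newest sequence number.

```python
def is_consistent(blocks):
    """A data/checksum group is consistent when every block carries the
    same sequence number (i.e., all were updated by the same write)."""
    return len({b["seq"] for b in blocks}) == 1

def blocks_to_heal(blocks):
    """Self-healing candidates: blocks lagging behind the newest sequence
    number must be re-derived from the up-to-date members of the group."""
    newest = max(b["seq"] for b in blocks)
    return [b["id"] for b in blocks if b["seq"] < newest]
```

In a real deployment the sequence number would be written atomically with each block update so a torn update is detectable on the next read.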
20110060944 | COMPUTER SYSTEM HAVING AN EXPANSION DEVICE FOR VIRTUALIZING A MIGRATION SOURCE LOGICAL UNIT - A migration destination storage creates an expansion device for virtualizing a migration source logical unit. A host computer accesses an external volume by way of an access path of a migration destination logical unit, a migration destination storage, a migration source storage, and an external volume. After destaging all dirty data accumulated in the disk cache of the migration source storage to the external volume, an expansion device for virtualizing the external volume is mapped to the migration destination logical unit. | 03-10-2011 |
20110078503 | METHOD AND APPARATUS FOR SELECTIVELY ACTIVE DISPERSED STORAGE MEMORY DEVICE UTILIZATION - The method begins with a processing unit receiving an encoded slice for storage. The method continues with the processing unit determining whether to store the encoded slice in one of a first set of memory devices or in one of a second set of memory devices based on metadata associated with the encoded slice, wherein the first set of memory devices are continually active and the second set of memory devices are selectively active. The method continues with the processing unit storing the encoded slice in the one of the second set of memory devices when the encoded slice is to be stored in the one of the second set of memory devices. The method continues with the processing unit de-activating the one of the second set of memory devices, in accordance with a deactivation protocol, after storing the encoded slice. | 03-31-2011
20110078504 | INFORMATION PROCESSING APPARATUS HAVING FILE SYSTEM CONSISTENCY RECOVERY FUNCTION, AND CONTROL METHOD AND STORAGE MEDIUM THEREFOR - An information processing apparatus able to recover consistency between file entity data and file management information when detecting an inconsistency therebetween at start-up of the apparatus, while reducing unavailable time of the apparatus as much as possible. A CPU of the information processing apparatus executes a base program stored in a storage unit to check for an abnormality in consistency between file entity data and file management information which are stored in another storage unit. If an abnormality is detected, the CPU executes a program for degeneracy operation stored in still another storage unit to perform a degeneracy operation, and recovers the consistency. | 03-31-2011 |
20110119527 | STORAGE CONTROL SYSTEM AND STORAGE CONTROL METHOD - Unique information including a logical type name is stored in a user data area of a management area as a media of the alternative disk drive to become an alternative of the storage device. Upon using the alternative disk drive, a disk controller reads the unique information of the alternative disk drive, and determines that copy back is unnecessary when the rotating speed and capacity belonging to the unique information of the alternative disk drive are the same as the rotating speed and capacity of the failed disk drive belonging to RAID, and otherwise determines that copy back is necessary. | 05-19-2011 |
20110145638 | DISTRIBUTED STORAGE AND COMMUNICATION - Storing, retrieving, transmitting and receiving data. | 06-16-2011
20110191629 | STORAGE APPARATUS, CONTROLLER, AND METHOD FOR ALLOCATING STORAGE AREA IN STORAGE APPARATUS - A storage apparatus for storing data includes a plurality of physical media provided with storage areas to store data, a storage group determining unit configured to determine, upon detecting a request to write new data to a virtual volume to be accessed, a storage group from which to allocate storage area by selecting a storage group from among a plurality of storage groups made up of the plurality of physical media, wherein the selected storage group is other than any storage groups that include a physical medium where a failure has occurred, and a storage area allocator configured to allocate storage area on the physical media existing within the storage group that was determined by the storage group determining unit to the virtual volume, the size of the storage area corresponds to the data size of the new data. | 08-04-2011 |
20110197094 | SYSTEMS AND METHODS FOR VISUAL CORRELATION OF LOG EVENTS, CONFIGURATION CHANGES AND CONDITIONS PRODUCING ALERTS IN A VIRTUAL INFRASTRUCTURE - Embodiments of the present disclosure provide methods and systems for detecting and correlating log events, configuration changes and conditions producing alerts within a virtual infrastructure. Other embodiments may be described and claimed. | 08-11-2011
20110246826 | COLLECTING AND AGGREGATING LOG DATA WITH FAULT TOLERANCE - Systems and methods of collecting and aggregating log data with fault tolerance are disclosed. One embodiment includes, one or more devices that generate log data, the one or more machines each associated with an agent node to collect the log data, wherein, the agent node generates a batch comprising multiple messages from the log data and assigns a tag to the batch. In one embodiment, the agent node further computes a checksum for the batch of multiple messages. The system may further include a collector device, the collector device being associated with a collector tier having a collector node to which the agent sends the log data; wherein, the collector determines the checksum for the batch of multiple messages received from the agent node. | 10-06-2011 |
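The agent/collector checksum handshake in 20110246826 is easy to sketch. The following is a hedged illustration, not the patented system: the message encoding, tag format, and choice of SHA-256 are all assumptions; the point is only that the agent checksums the batch it tags, and the collector recomputes and compares.

```python
import hashlib
import json

def make_batch(messages, tag):
    """Agent side: bundle log messages into a batch, tag it, and checksum
    the batch contents so corruption in transit is detectable."""
    payload = json.dumps(messages).encode("utf-8")
    return {"tag": tag,
            "messages": messages,
            "checksum": hashlib.sha256(payload).hexdigest()}

def verify_batch(batch):
    """Collector side: recompute the checksum over the received messages
    and compare it with the agent's checksum."""
    payload = json.dumps(batch["messages"]).encode("utf-8")
    return hashlib.sha256(payload).hexdigest() == batch["checksum"]
```

A mismatch at the collector tier would trigger a re-send of the tagged batch, which is what makes the tag (rather than individual message IDs) the unit of fault tolerance.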
20110264956 | MANAGEMENT SYSTEM FOR OUTPUTTING INFORMATION DENOTING RECOVERY METHOD CORRESPONDING TO ROOT CAUSE OF FAILURE - A management server includes a meta rule for identifying an event to be a root cause and a failure recovery method that corresponds to the meta rule for an event capable of occurring in a plurality of node apparatuses, and also displays a cause event to be a root cause of an event detected by the management server, and a method for recovering from this cause event. | 10-27-2011 |
20110289352 | METHOD FOR DATA RECOVERY FOR FLASH DEVICES - The invention provides a method for data recovery. In one embodiment, a memory comprises a plurality of pages for data storage. First, first data is obtained from a host. A first page for storing the first data is then selected from the pages of the memory. A start page link indicating the first page is then stored in the memory. The first data, a first page link indicating a next page, and first FTL fragment data corresponding to the first page are then written into the first page. Next data is then obtained from the host. The next data, a next page link indicating a subsequent page, and FTL fragment data corresponding to the next page are written into the next page. | 11-24-2011 |
20120124419 | NETWORKED RECOVERY SYSTEM - A method and apparatus for networked recovery system is described herein. In one embodiment, a process is provided to obtain a type of recovery selected by a user. A non-volatile partition of a storage volume containing a recovery disk image is accessed. The recovery disk image does not include an installation package. If the obtained type of recovery is a predetermined type of recovery, a network connection is established using the recovery disk image and data is downloaded over the network connection for the obtained type of recovery. The obtained type of recovery of the system is performed. | 05-17-2012 |
20120166872 | CONDENSED FOTA BACKUP - A method and apparatus update an image stored in a memory of a device. A next block writing index n for updating a first target memory block of the memory is determined. Backup data is written to a backup block of the memory when n is an even number. The first target memory block is updated with the new data. The backup data is calculated based on a binary operation between new data corresponding to n and old data stored in a second target memory block corresponding to n+1, and the binary operation has reversibility. If n is the last block writing index, then the binary operation is not used and the backup data is the same as the new data. | 06-28-2012 |
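The abstract of 20120166872 only says the binary operation "has reversibility"; XOR is the classic choice for such condensed backups, so the sketch below uses it as an assumption. Backing up new(n) XOR old(n+1) lets one backup block protect two target blocks: if the update is interrupted, old(n+1) is recovered by XORing the backup with the already-written new(n).

```python
def make_backup(new_block_n, old_block_n1, is_last):
    """Backup for writing index n: new(n) XOR old(n+1). The last block has
    no successor, so its backup is simply the new data itself."""
    if is_last:
        return bytes(new_block_n)
    return bytes(a ^ b for a, b in zip(new_block_n, old_block_n1))

def recover_old(backup, new_block_n):
    """XOR is its own inverse: old(n+1) = backup XOR new(n)."""
    return bytes(a ^ b for a, b in zip(backup, new_block_n))
```

The space saving is the whole point of "condensed" FOTA: one backup block per pair of updated blocks instead of one per block.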
20120198276 | Integrating Content-Laden Storage Media with Storage System - Integrating content into a storage system with substantially immediate access to that content. Providing high reliability and relatively easy operation with a storage system using redundant information for error correction. Having the storage system perform a “virtual write,” including substantially all steps associated with writing to the media to be integrated, except for the step of actually writing data to that media, including rewriting information relating to used disk blocks, and including rewriting any redundant information maintained by the storage system. Integrating the new physical media into the storage system, including accessing content already present on that media, free space already present on that media, and reading and writing that media. Recovering from errors during integration. | 08-02-2012 |
20120290878 | ESTABLISHING TRUST IN A MAINTENANCE FREE STORAGE CONTAINER - A maintenance free storage container includes a plurality of storage servers, wherein the maintenance free storage container allows for multiple storage servers of the plurality of storage servers to be in a failure mode without replacement. The maintenance free storage container further includes a container controller operable to manage failure mode information of the plurality of storage servers, manage mapping of a plurality of virtual storage servers to at least some of the plurality of storage servers based on the failure mode information, communicate storage server access requests with a device external to the maintenance free storage container using addressing of the plurality of virtual storage servers, and communicate the storage server access requests within the maintenance free storage container using addressing of the plurality of storage servers. | 11-15-2012 |
20120304006 | LOW TRAFFIC FAILBACK REMOTE COPY - The local storage performs remote copy to the remote storage. For low traffic failback remote copy, the remote storage performs a delta copy to the local storage, the delta being the difference between the remote storage and local storage. The local storage backs up snapshot data. The remote storage resolves the difference of the snapshot of the local storage and the remote storage. The difference resolution method can take one of several approaches. First, the system informs the timing of snapshot of the local storage to the remote storage and records the accessed area of the data. Second, the system informs the timing of snapshot of the local storage to the remote storage, and the remote storage makes a snapshot and compares the snapshot and remote copied data. Third, the system compares the local data and remote copy data with hashed data. | 11-29-2012 |
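The third difference-resolution approach in 20120304006 (comparing local and remote copies via hashed data) can be sketched as follows. The block size, hash function, and data layout are illustrative assumptions; the idea is that only blocks whose digests differ need to travel during failback.

```python
import hashlib

def block_digests(blocks):
    """Digest each block so comparison ships hashes, not data."""
    return [hashlib.sha256(b).digest() for b in blocks]

def delta_indices(local_blocks, remote_blocks):
    """Indices whose hashes differ: only these blocks are copied on failback,
    keeping failback traffic proportional to the actual divergence."""
    pairs = zip(block_digests(local_blocks), block_digests(remote_blocks))
    return [i for i, (lo, re) in enumerate(pairs) if lo != re]
```

With, say, 4 KB blocks and 32-byte digests, the comparison traffic is roughly 1% of a full copy, which is where the "low traffic" in the title comes from.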
20120324284 | TRIPLE PARITY TECHNIQUE FOR ENABLING EFFICIENT RECOVERY FROM TRIPLE FAILURES IN A STORAGE ARRAY - A triple parity (TP) technique reduces overhead of computing diagonal and anti-diagonal parity for a storage array adapted to enable efficient recovery from the concurrent failure of three storage devices in the array. The diagonal parity is computed along diagonal parity sets that collectively span all data disks and a row parity disk of the array. The parity for all of the diagonal parity sets except one is stored on the diagonal parity disk. Similarly, the anti-diagonal parity is computed along anti-diagonal parity sets that collectively span all data disks and a row parity disk of the array. The parity for all of the anti-diagonal parity sets except one is stored on the anti-diagonal parity disk. The TP technique provides a uniform stripe depth and an optimal amount of parity information. | 12-20-2012 |
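The diagonal-parity computation at the heart of 20120324284 can be illustrated in miniature. This is a simplified RDP-style diagonal assignment, not the patented TP construction (which also computes anti-diagonals and stores all but one parity set per direction): element (r, c) of the data array joins diagonal (r + c) mod p, and each diagonal's parity is the XOR of its members.

```python
def diagonal_parity(data, p):
    """XOR along diagonals: element (r, c) belongs to diagonal (r + c) % p.
    `data` is a list of rows (one per disk stripe); returns p parity values."""
    parity = [0] * p
    for r, row in enumerate(data):
        for c, value in enumerate(row):
            parity[(r + c) % p] ^= value
    return parity
```

Anti-diagonal parity is the same loop with (r − c) mod p, and the recovery procedure alternates between the two directions to peel off the three lost disks one symbol at a time.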
20120324285 | METHOD, APPARATUS AND SYSTEM FOR DATA DISASTER TOLERANCE - A method, apparatus and system for data disaster tolerance are provided in embodiments of this disclosure, the method comprising: receiving node failure information from a node; detecting along a predecessor direction and a successor direction of a failure node indicated in the node failure information according to a pre-stored node sequence to determine a first effective predecessor node and a first effective successor node, and all failure nodes between the first effective predecessor node and the first effective successor node; instructing those of all effective nodes that have local content registration index stored on the failure nodes and the first effective successor node to perform a primary index recovery process, respectively, so as to recover primary indexes of all of the failure nodes into the primary index of the first effective successor node. | 12-20-2012 |
20130024728 | Reinstatement of database systems in an automatic failover configuration - Techniques used in an automatic failover configuration having a primary database system, a standby database system, and an observer. In the automatic failover configuration, the primary database system remains available even in the absence of both the standby and the observer as long as the standby and the observer become absent sequentially. The failover configuration may use asynchronous transfer modes to transfer redo to the standby and permits automatic failover only when the observer is present and the failover will not result in data loss due to the asynchronous transfer mode beyond a specified maximum. The database systems and the observer have copies of failover configuration state and the techniques include techniques for propagating the most recent version of the state among the databases and the observer and techniques for using carefully-ordered writes to ensure that state changes are propagated in a fashion which prevents divergence. | 01-24-2013 |
20130103982 | LOG FILE COMPRESSION - A compression system identifies one or more fields in a log file based on at least one field rule from among multiple field rules specified in a log file framework. The compression system extracts contents of the log file associated with the one or more fields. The compression system passes the contents associated with the one or more fields to corresponding compression engines from among a multiple compression engines each specified for performing a separate type of compression from among multiple types of compression for each of the one or more fields, wherein each of the one or more fields corresponds to one or more compression engines from among the multiple compression engines. | 04-25-2013 |
20130111265 | METHOD AND SYSTEM FOR RECOVERING AN IMAGE ERROR USING DATA HIDING | 05-02-2013 |
20130159769 | RECTIFYING CORRUPT SEQUENCE VALUES IN DISTRIBUTED SYSTEMS - Embodiments of the present invention relate to detecting and rectifying corruption in a distributed clock in a distributed system. Aspects may include receiving a sequence number used as part of the distributed clock at a node and determining if the sequence number is corrupt. In order to provide an effective mechanism for determining a sequence number is corrupt and taking corrective actions, a valid sequence number range may be determined, a propagation count associated with the sequence number may be evaluated, an estimated sequence number may be calculated, and an epoch number associated with the sequence number may be evaluated. Additionally, in exemplary aspects node with a corrupt trusted sequence values may self diagnosis and terminate associated processes to prevent further propagation of the corrupt sequence number. | 06-20-2013 |
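The corruption checks in 20130159769 combine several signals: a valid range around the last trusted value and an epoch check. A hedged sketch of that gatekeeping (field names and the window policy are assumptions, not the patented logic):

```python
def is_plausible(seq, epoch, last_trusted_seq, max_step, current_epoch):
    """Reject sequence numbers from a stale epoch or outside the window
    [last_trusted, last_trusted + max_step] implied by propagation limits."""
    if epoch != current_epoch:
        return False          # stale or corrupted epoch: do not propagate
    return last_trusted_seq <= seq <= last_trusted_seq + max_step
```

A node whose own trusted value fails this check would, per the abstract, self-diagnose and terminate its processes rather than gossip the corrupt value onward.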
20130173959 | HOME/BUILDING FAULT ANALYSIS SYSTEM USING RESOURCE CONNECTION MAP LOG AND METHOD THEREOF - Provided are a home/building fault analysis system and method using a resource connection map log which compares and analyzes a previous integrated resource state and a current resource state using resource connection map logging information based on a standard resource management model when a fault is generated, provides state information of the resource in which information having high association with a fault resource is mainly changed, and performs an effective fault analysis and process by restoring to the previous resource state, as necessary. According to the present invention, when the fault is generated, a synthetic state of resources within a home/building as well as a state of an individual resource may be known from the resource connection map. | 07-04-2013
20130179730 | APPARATUS AND METHOD FOR FAULT RECOVERY - An apparatus and a method for fault recovery are provided. The fault recovery apparatus includes a log manager configured to record system resource allocation information about a thread. The fault recovery apparatus further includes a recovery manager configured to create a recovery thread that substitutes for a target thread where a fault has occurred. The fault recovery apparatus further includes a resource manager configured to map a system resource that the target thread has used to the recovery thread based on referencing to the system resource allocation information. | 07-11-2013 |
20130232379 | RESTORING DISTRIBUTED SHARED MEMORY DATA CONSISTENCY WITHIN A RECOVERY PROCESS FROM A CLUSTER NODE FAILURE - A set of data structures is stored in a distributed shared memory (DSM) component and in persistent storage. The DSM component is organized as a matrix of pages. Each data structure of the set of data structures occupies a column in the matrix of pages. A recovery file is maintained in the persistent storage. The recovery file consists of entries, and each one of the entries corresponds to a column in the matrix of pages by the location of each one of the entries. | 09-05-2013
20130238932 | REBUILDING SLICES OF A SET OF ENCODED DATA SLICES - A method begins with a processing module initiating a rebuilding process for an encoded data slice of a set of encoded data slices and generating rebuilding information from one or more other encoded data slices of the set of encoded data slices. The method continues with the processing module creating a rebuilt encoded data slice for the encoded data slice based on the rebuilding information. The method continues with the processing module determining whether another encoded data slice of the set of encoded data slices requires rebuilding and when the other encoded data slice requires rebuilding, the method continues with the processing module creating another rebuilt encoded data slice for the other encoded data slice based on the rebuilding information without initiating another rebuilding process for the other encoded data slice. | 09-12-2013 |
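The efficiency claim in 20130238932 is that rebuilding information generated once can produce several rebuilt slices. A toy erasure code makes the idea concrete; this is an assumption-laden stand-in (a tiny (n=4, k=2) code over a prime field), not the dispersed-storage coding actually used. Decoding two available slices recovers the source pair (a, b), and from that pair any missing slice can be re-encoded without a second decode.

```python
P = 257  # small prime modulus for the toy code
COEFF = {0: (1, 0), 1: (0, 1), 2: (1, 1), 3: (1, 2)}  # slice i = c0*a + c1*b (mod P)

def encode_slice(a, b, i):
    """Generate slice i of the (n=4, k=2) toy code."""
    c0, c1 = COEFF[i]
    return (c0 * a + c1 * b) % P

def rebuilding_info(available):
    """Decode (a, b) once from any two (index, value) pairs. This pair is the
    'rebuilding information' reused to rebuild every missing slice."""
    (i, x), (j, y) = available
    a1, b1 = COEFF[i]
    a2, b2 = COEFF[j]
    det = (a1 * b2 - a2 * b1) % P
    inv = pow(det, P - 2, P)  # modular inverse via Fermat (P is prime)
    a = ((x * b2 - y * b1) * inv) % P
    b = ((a1 * y - a2 * x) * inv) % P
    return a, b
```

The expensive step (solving for the source data) runs once; each additional rebuilt slice is a cheap re-encode, which is exactly the saving the abstract describes.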
20130275808 | Techniques for Virtual Machine Management - A technique for operating a group of virtual machines (VMs) includes utilizing a checkpoint procedure to maintain secondary VMs to assume tasks of primary VMs within a cluster in the event of failover. On failover of a first one of the primary VMs, a first one of the secondary VMs assumes the tasks from the checkpoint immediately preceding a failover event. Each of the primary VMs is connected to receive data from remaining ones of the primary VMs via an internal bus and process the data on receipt. Checkpoints for the primary VMs are synchronized. For each of the primary VMs, release to the external bus of data generated on the basis of received internal bus data is prevented until a subsequent checkpoint has occurred. On failover of one of the primary VMs, all of the primary VMs are directed to initiate failover to an associated one of the secondary VMs. | 10-17-2013 |
20140095929 | INTERFACE FOR RESOLVING SYNCHRONIZATION CONFLICTS OF APPLICATION STATES - Technology is disclosed herein for resolving synchronization conflicts when synchronizing application state data between computing devices. According to at least one embodiment, a server detects a first set of application state data at a first computing device conflicting with a second set of application state data at a second computing device. The first and second sets of application state data represent application states of the same computer application running at the first and second computing devices, respectively. Accordingly, the first computing device presents a user interface prompting a user to choose a preferred set of application state data between the first and second sets of application state data. If the user chooses the second set of application state data as the preferred set, the first computing device uses the second set of application state data to overwrite the first set of application state data at the device. | 04-03-2014 |
20140129875 | METHOD FOR READING KERNEL LOG UPON KERNEL PANIC IN OPERATING SYSTEM - A method for reading a kernel log upon a kernel panic in an operating system is applicable to a computing device including a processing unit and a storage unit, coupled to the processing unit, for storing the kernel and including a log backup partition and a user data partition. The method includes the computing device performing the operating system by the kernel; the computing device generating a kernel log upon performing the operating system, and writing the kernel log into the log backup partition; and upon a kernel panic occurring and then the processing unit being reset, the computing device performing a kernel initialization procedure including reading and then writing the kernel log in the log backup partition into the user data partition, wherein the kernel log in the log backup partition includes information of a process of operating the kernel before the processing unit is reset. | 05-08-2014
20140149794 | SYSTEM AND METHOD OF IMPLEMENTING AN OBJECT STORAGE INFRASTRUCTURE FOR CLOUD-BASED SERVICES - A method for storing objects in an object storage system includes the steps of establishing a network connection with a client over an inter-network, receiving an upload request indicating an object to be uploaded by the client, selecting at least two storage nodes on which the object will be stored, receiving the object from the client via the network connection, and streaming the object to each of the selected storage nodes such that the object is stored on each of the selected storage nodes. The method can also include writing an object record associating the object and the selected storage nodes to a shard of an object database and generating a Universally Unique Identifier (UUID). The UUID indicates the shard and the object ID of the object record, such that the object record can be quickly retrieved. Object storage infrastructures are also disclosed. | 05-29-2014 |
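The abstract of 20140149794 says the UUID "indicates the shard and the object ID" so a record can be fetched directly; the bit layout is not given, so the 64-bit packing below (16 shard bits, 48 record bits) is purely an illustrative assumption.

```python
SHARD_BITS = 16
RECORD_BITS = 48

def make_uuid(shard_id, record_id):
    """Pack the shard in the high bits and the object record id in the low
    bits, so the locator alone routes a lookup to the right database shard."""
    assert shard_id < (1 << SHARD_BITS) and record_id < (1 << RECORD_BITS)
    return (shard_id << RECORD_BITS) | record_id

def parse_uuid(uuid):
    """Recover (shard_id, record_id) for a direct single-shard fetch."""
    return uuid >> RECORD_BITS, uuid & ((1 << RECORD_BITS) - 1)
```

Embedding the shard in the identifier avoids a directory lookup on the read path: the client decodes the UUID and queries exactly one shard.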
20140164831 | METHOD AND APPARATUS FOR MAINTAINING REPLICA SETS - Provided are systems and methods for managing asynchronous replication in a distributed database environment, wherein a cluster of nodes is assigned roles for processing database requests. In one embodiment, the system provides a node with a primary role to process write operations against its database, generate an operation log reflecting the processed operations, and permit asynchronous replication of the operations to at least one secondary node. In another embodiment, the primary node is the only node configured to accept write operations. Both primary and secondary nodes can process read operations, although in some settings read requests can be restricted to secondary nodes or the primary node. In one embodiment, the systems and methods provide for automatic failover of the primary node role, and can include a consensus election protocol for identifying the next primary node. Further, the systems and methods can be configured to automatically reintegrate a failed primary node. | 06-12-2014
20140173341 | ENHANCED RECOVERY OF HIGHLY AVAILABLE COMPUTING SYSTEMS - Exemplary embodiments disclose a method and system for detecting a failure and resuming processing in a computing system encompassing at least two sites, a primary site and a secondary site. In a module, an exemplary embodiment generates a record of a logically consistent state and data of system components of the primary site periodically and transfers the record of a logically consistent state and data of system components of the primary site to the secondary site. In another module, an exemplary embodiment detects a failure in the primary site, halts the generation of the record of a logically consistent state and data of system components of the primary site periodically with a data freeze function, and resumes a processing of the primary site on the secondary site with secondary site components updated with a most recent logically consistent state and data of system components of the primary site. | 06-19-2014 |
20140201569 | DISASTER RECOVERY IN A NETWORKED COMPUTING ENVIRONMENT - In general, embodiments of the present invention provide a DR solution for a networked computing environment such as a cloud computing environment. Specifically, a customer or the like can select a disaster recovery provider from a pool (at least one) of disaster recovery providers using a customer interface to a DR portal. Similarly, using the interface and DR portal, the customer can then submit a request for DR to be performed for a set (at least one) of applications. The customer will then also submit (via the interface and DR portal) DR information. This information can include, among other things, a set of application images, a set of application files, a set of recovery requirements, a designation of one or more specific (e.g., application) components for which DR is desired, dump file(s), database file(s), etc. Using the DR information, the DR provider will then generate and conduct a set of DR tests and provide the results to the customer via the DR portal and interface. In one embodiment, a temporary DR environment can be created (e.g., by the DR provider or the customer) in which the DR tests are conducted. | 07-17-2014 |
20140208158 | SYSTEMS AND METHODS OF DATA TRANSMISSION AND MANAGEMENT - Data communications systems and methods comprise a conductive media infrastructure in communication with a baseband data universe propagating at least one first signal and a broadband data universe propagating at least one second signal. At least one segmentation device is in communication with the conductive media infrastructure and partitions the broadband data universe from the baseband data universe. A coupling device is in communication with the at least one segmentation device and modulates transmission parameters of the second signal such that information travels within the broadband data universe via the conductive media infrastructure and avoids the baseband data universe. Power distribution and management systems and methods are also provided which preserve power distribution via a baseband data universe while one or more devices communicate energy data via a broadband data universe. | 07-24-2014 |
20140215267 | VIRTUAL MACHINE PLACEMENT WITH AUTOMATIC DEPLOYMENT ERROR RECOVERY - Embodiments perform automatic selection of hosts and/or datastores for deployment of a plurality of virtual machines (VMs) while monitoring and recovering from errors during deployment. Resource constraints associated with the VMs are compared against resources or characteristics of available hosts and datastores. A VM placement engine selects an optimal set of hosts/datastores and initiates VM creation automatically or in response to administrator authorization. During deployment, available resources are monitored enabling dynamic improvement of the set of recommended hosts/datastores and automatic recovery from errors occurring during deployment. | 07-31-2014 |
20140250326 | METHOD AND SYSTEM FOR LOAD BALANCING A DISTRIBUTED DATABASE PROVIDING OBJECT-LEVEL MANAGEMENT AND RECOVERY - A method and system for managing operational states of database tables within a multiple-database system. If a particular user session issues a query against a target table that causes a data inconsistency, the target table transitions into an errant state and the session will become interrupted. This errant state is then propagated onto any other table associated with the user session. A session-level recovery process can thereafter be executed to repair and restore database tables associated with the interrupted user sessions without the need to take an entire database system offline. | 09-04-2014 |
20140298092 | ADAPTIVE QUIESCE FOR EFFICIENT CROSS-HOST CONSISTENT CDP CHECKPOINTS - A disaster recovery system, including a target datastore for replicating data written to source datastores, and a checkpoint engine (i) for transmitting, at multiple times, quiesce commands to a plurality of host computers, each quiesce command including a timeout period that is adjusted at each of the multiple times, (ii) for determining, at each of the multiple times, whether acknowledgements indicating that a host has successfully stopped writing enterprise data to the source datastores, have been received from each of the host computers within the timeout period, (iii) for marking, at each of the multiple times, a cross-host checkpoint in the target datastore and reducing the timeout period for the quiesce commands at the next time, if the determining is affirmative, and (iv) for increasing, at each of the multiple times, the timeout period for the quiesce commands transmitted at the next time, if the determining is not affirmative. | 10-02-2014 |
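The adaptive timeout described above can be sketched as a simple adjustment rule: shrink the quiesce timeout after every round in which all hosts acknowledged in time (and a checkpoint was marked), and grow it when any host missed the window. The shrink/grow factors and floor below are assumptions for illustration, not values from the application.

```python
def adjust_timeout(timeout, acks, hosts, shrink=0.9, grow=1.5, floor=1.0):
    """Adapt the quiesce timeout for the next round of quiesce commands.

    If every host acknowledged within the period (a cross-host checkpoint
    was marked), tighten the timeout; otherwise relax it. Factors and the
    floor are illustrative assumptions.
    """
    if acks >= hosts:
        return max(floor, timeout * shrink)
    return timeout * grow

# Successive rounds: all hosts ack, then one host misses the window.
t = 10.0
t = adjust_timeout(t, acks=3, hosts=3)   # checkpoint marked: 9.0
t = adjust_timeout(t, acks=2, hosts=3)   # a host missed: 13.5
```

A multiplicative scheme like this converges toward the shortest timeout the hosts can reliably meet without repeatedly failing checkpoints.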
20140325273 | Method and Apparatus for Creating a Self Booting Operating System Image Backup on an External USB Hard Disk Drive That is Capable of Performing a Complete Restore to an Internal System Disk - Backup applications use externally connected hard disk drives to store full image backups, compressed image backups, or file-by-file backups of a Windows system disk. A system incrementally updates the images, including the system registry, and puts information on the external drive that makes it bootable. | 10-30-2014 |
20140351639 | RECOVERY OF OPERATIONAL STATE VALUES FOR COMPLEX EVENT PROCESSING BASED ON A TIME WINDOW DEFINED BY AN EVENT QUERY - Methods by a processing system are disclosed that control recovery of operational state values of a complex event processing (CEP) engine that processes values of events. A window size is determined based on a property of an event query. Events' values are retrieved from a distributed log which are restricted to occurring within a timeframe defined based on the window size. The distributed log stores events' values that have been processed by the CEP engine. The retrieved events' values are replayed to the CEP engine for processing to recover the operational state values of the CEP engine. Related processing systems are disclosed that control recovery of operational state values of a CEP engine that processes values of events. | 11-27-2014 |
20140351640 | SYSTEM RESET - Some embodiments of the invention provide techniques whereby a user may perform a system reset (e.g., to address system performance and/or reliability degradation, such as which may be caused by unused applications that unnecessarily consume system resources, an attempted un-install of an application that left remnants of the application behind, and/or other causes). In some embodiments, performing a system reset replaces a first instance of an operating system on the system with a new instance of the operating system, and removes any applications installed on the system, without disturbing the user's data. | 11-27-2014 |
20140365824 | METHOD FOR RECOVERING HARD DISK DATA, SERVER AND DISTRIBUTED STORAGE SYSTEM - A method for recovering hard disk data, a server and a distributed storage system relate to a computer technology. In the method, a data recovery request is received. The request includes at least one ID of sectors whose data is to be recovered. Based on the at least one ID of the sectors whose data is to be recovered, at least one sector whose data is to be recovered is located. At least one standby sector ID and a file backup corresponding to the at least one ID of the sectors whose data is to be recovered are obtained, and at least one standby sector is located based on the at least one standby sector ID. Data that is in the file backup and identical to the data stored in the at least one sector whose data is to be recovered is written into the at least one standby sector. | 12-11-2014 |
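The sector-remapping flow above can be sketched in a few lines: for each failed sector, look up its assigned standby sector and copy the backed-up data into it. The dictionary-based sector map and byte payloads here are illustrative, not the application's actual structures.

```python
def recover_sectors(failed_ids, standby_map, file_backup, disk):
    """Write the backed-up copy of each failed sector's data into its
    assigned standby sector (simplified sketch; structures are assumed).

    failed_ids:  IDs of sectors whose data is to be recovered
    standby_map: failed sector ID -> standby sector ID
    file_backup: failed sector ID -> backed-up data for that sector
    disk:        sector ID -> stored data
    """
    for sector_id in failed_ids:
        disk[standby_map[sector_id]] = file_backup[sector_id]
    return disk

disk = {7: b"corrupt", 8: b"intact"}
recover_sectors([7], standby_map={7: 99}, file_backup={7: b"payload"}, disk=disk)
# standby sector 99 now holds the backed-up data for failed sector 7
```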
20140365825 | System for Automated Computer Support - Systems and methods for providing automated computer support are described herein. One described method comprises receiving a plurality of snapshots from a plurality of computers, storing the plurality of snapshots in a data store, and creating an adaptive reference model based at least in part on the plurality of snapshots. The described method further comprises comparing at least one of the plurality of snapshots to the adaptive reference model, and identifying at least one anomaly based on the comparison. | 12-11-2014 |
20140372800 | Message Reconciliation During Disaster Recovery - A mechanism is provided for message reconciliation during disaster recovery in an asynchronous replication system. A message is intercepted at a gateway remote from a primary data centre to which the message is being sent. A copy of the message request is stored in a request message history remotely from the primary data centre. The message is forwarded to the primary data centre. A transaction history of the message request is stored at the primary data centre which is mirrored to a disaster recovery site with other data from the primary data centre. In response to determining that the primary data centre has failed, messages in the request message history are compared with messages in the transaction history as retrieved from the disaster recovery site. | 12-18-2014 |
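The reconciliation step above amounts to a set difference: any message captured in the gateway's request history that is absent from the transaction history mirrored to the disaster recovery site was lost in flight. A minimal sketch, assuming messages carry an `id` field (an illustrative assumption):

```python
def reconcile(request_history, transaction_history):
    """Return the messages recorded at the remote gateway that never made
    it into the mirrored transaction history (sketch; `id` is assumed)."""
    processed = {m["id"] for m in transaction_history}
    return [m for m in request_history if m["id"] not in processed]

requests = [{"id": 1, "body": "credit"},
            {"id": 2, "body": "debit"},
            {"id": 3, "body": "transfer"}]
mirrored = [{"id": 1}, {"id": 2}]
lost = reconcile(requests, mirrored)   # message 3 needs replay or review
```

Because the request history is stored remotely from the primary data centre, it survives the failure and can be compared against whatever transaction history reached the disaster recovery site.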
20150012778 | MULTI-CLASS HETEROGENEOUS CLIENTS IN A CLUSTERED FILESYSTEM - A cluster of computer system nodes connected by a storage area network includes two classes of nodes. The first class of nodes can act as clients or servers, while the other nodes can only be clients. The client-only nodes require much less functionality and can be more easily supported by different operating systems. To minimize the amount of data transmitted during normal operation, the server responsible for maintaining a cluster configuration database repeatedly multicasts the IP address, its incarnation number and the most recent database generation number. Each node stores this information and when a change is detected, each node can request an update of the data needed by that node. A client-only node uses the IP address of the server to connect to the server, to download the information from the cluster database required by the client-only node and to upload local disk connectivity information. | 01-08-2015 |
20150019911 | ADAPTIVE QUIESCE FOR EFFICIENT CROSS-HOST CONSISTENT CDP CHECKPOINTS - A disaster recovery system, including a target datastore for replicating data written to source datastores, and a checkpoint engine (i) for transmitting, at multiple times, quiesce commands to a plurality of host computers, each quiesce command including a timeout period that is adjusted at each of the multiple times, (ii) for determining, at each of the multiple times, whether acknowledgements indicating that a host has successfully stopped writing enterprise data to the source datastores, have been received from each of the host computers within the timeout period, (iii) for marking, at each of the multiple times, a cross-host checkpoint in the target datastore and reducing the timeout period for the quiesce commands at the next time, if the determining is affirmative, and (iv) for increasing, at each of the multiple times, the timeout period for the quiesce commands transmitted at the next time, if the determining is not affirmative. | 01-15-2015 |
20150026518 | SYSTEM AND METHOD FOR DATA DISASTER RECOVERY - A system includes a production computer machine that includes an operating system and a driver stack. The driver stack includes a file system layer, a recovery driver, a storage layer, a driver layer, a bus driver layer, and a storage device. The system also includes a backup computer processor coupled to the production computer machine via the recovery driver. The recovery driver is configured to commence a recovery of data from the backup computer processor, receive a disk access request from the file system layer, determine if the disk access request accesses data that has not yet been recovered from the backup computer processor, and initiate an on-demand recovery request from the backup computer processor when the data has not been recovered from the backup computer processor. | 01-22-2015 |
20150052396 | STATE INFORMATION RECORDING APPARATUS, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND STATE INFORMATION RECORDING METHOD - A state information recording apparatus includes a copying section that copies a recording program from a first memory to a second memory, a detector that detects occurrence of a fault in the state information recording apparatus, a determining section that determines whether or not the recording program copied to the second memory is destroyed, in response to detection of occurrence of the fault, a recording section that records state information to a non-volatile memory, by executing the recording program in the second memory if it is determined that the recording program copied to the second memory is not destroyed, or by executing the recording program stored in the first memory if it is determined that the recording program copied to the second memory is destroyed, and a reboot section that reboots the state information recording apparatus after the state information is recorded to the non-volatile memory. | 02-19-2015 |
20150082088 | SYSTEM AND METHOD FOR TAKING SEQUENCE OF DYNAMIC RECOVERY ACTIONS - The present disclosure relates to a system and method for enabling SNMP (Simple Network Management Protocol) based Network Management System to correlate and control sequence of recovery actions to be performed and dynamically change the recovery action sequence across various systems/platforms/devices. Disclosed is a system for taking sequence of dynamic recovery actions in network management system upon occurrence of a fault, in one aspect of the present invention. The system includes an action definition repository containing a sequence of recovery actions for the fault in a particular business scenario. The action definition repository is initialized and updated for every new scenario. The system further includes an action sequence engine being capable of reading the recovery sequence listed in the action definition repository for the fault in the particular business scenario. | 03-19-2015 |
20150113324 | Automated Data Recovery from Remote Data Object Replicas - Machines, systems and methods for recovering data objects in a distributed data storage system, the method comprising storing one or more replicas of a first data object on one or more clusters in one or more data centers connected over a data communications network; recording health information about said one or more replicas, wherein the health information comprises data about availability of a replica to participate in a restoration process; calculating a query-priority for the first data object; querying, based on the calculated query-priority, the health information for the one or more replicas to determine which of the one or more replicas is available for restoration of the object data; calculating a restoration-priority for the first data object based on the health information for the one or more replicas; and restoring the first data object from the one or more of the available replicas, based on the calculated restoration-priority. | 04-23-2015 |
20150113325 | CHECKPOINTING A COLLECTION OF DATA UNITS - A memory module stores working data that includes data units. A storage system stores recovery data that includes sets of one or more data units. Transferring data units between the memory module and the storage system includes: maintaining an order among the data units included in the working data, the order defining a first contiguous portion and a second contiguous portion; and, for each of multiple time intervals, identifying any data units accessed from the working data during the time interval, and adding to the recovery data a set of two or more data units including: one or more data units from the first contiguous portion including any accessed data units, and one or more data units from the second contiguous portion including at least one data unit that has been previously added to the recovery data. | 04-23-2015 |
20150135010 | HIGH AVAILABILITY SYSTEM, REPLICATOR AND METHOD - The present specification provides a high availability system. In one aspect a replicator is situated between a plurality of servers and a network. Each server is configured to execute a plurality of identical message processors. The replicator is configured to forward messages to two or more of the identical message processors, and to accept a response to the message as being valid if there is a quorum of identical responses. | 05-14-2015 |
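The quorum check described above can be sketched as counting identical responses from the replicated message processors and accepting the majority answer only when it reaches the quorum threshold. The response representation below is an illustrative assumption.

```python
from collections import Counter

def quorum_response(responses, quorum):
    """Accept a response as valid only if at least `quorum` identical
    message processors returned it; otherwise return None (sketch)."""
    if not responses:
        return None
    value, count = Counter(responses).most_common(1)[0]
    return value if count >= quorum else None

quorum_response(["ok", "ok", "err"], quorum=2)   # accepted: "ok"
quorum_response(["a", "b", "c"], quorum=2)       # no quorum: rejected
```

Running identical processors and voting on their outputs lets the replicator mask a faulty or lagging server without any explicit failure detection.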
20150143174 | METHOD AND APPARATUS FOR RECOVERING METADATA LOST DURING AN UNEXPECTED POWER DOWN EVENT - A system including first and second memories and a control module. The first memory stores a first lookup table (LUT) with first metadata. The first metadata maps logical addresses to physical addresses. The first metadata is lost due to an unexpected power down event. The second memory stores an event log and a second LUT with second metadata. The second metadata maps the logical addresses to the physical addresses. The event log includes entries that indicate updated associations between the logical addresses and the physical addresses. The control module, prior to the unexpected power down event, performs segmented flushes that include updating segments of the second metadata with segments of the first metadata. As a result of the unexpected power down event, the control module walks the event log from a first entry to a second entry to recover a single full flush cycle of segments in the first memory. | 05-21-2015 |
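The log walk above can be sketched as replaying the event-log entries recorded after the last completed segmented flush on top of the flushed copy of the lookup table. The dictionary LUT and `(logical, physical)` tuple entries are simplifying assumptions.

```python
def recover_lut(flushed_lut, event_log, last_flush_index):
    """Rebuild the volatile logical-to-physical lookup table after an
    unexpected power down by replaying log entries that post-date the
    last segmented flush (simplified sketch; structures are assumed)."""
    lut = dict(flushed_lut)
    for logical, physical in event_log[last_flush_index:]:
        lut[logical] = physical
    return lut

flushed = {0: 100, 1: 101}              # state captured by segmented flushes
log = [(0, 100), (1, 101), (1, 205), (2, 300)]   # appended mapping updates
recover_lut(flushed, log, last_flush_index=2)
# -> {0: 100, 1: 205, 2: 300}: post-flush updates win
```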
20150149824 | METHOD AND APPARATUS FOR RECONSTRUCTING AN INDIRECTION TABLE - A memory system contains solid state media for storing data and uses volatile memory for storing an indirection table. The indirection table maps client addresses to media addresses in the solid state media. The solid state media also stores metadata summaries maintaining the mappings of the client addresses to the media addresses within the solid state media. A media controller is configured to reconstruct the indirection table in the volatile memory from the metadata summaries stored in the solid state media based on block timestamps identifying when the metadata summaries were stored in the solid state media. | 05-28-2015 |
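The timestamp-ordered reconstruction above can be sketched by replaying the on-media metadata summaries oldest-first, so that the newest mapping for each client address wins. The field names below are illustrative assumptions.

```python
def rebuild_indirection(summaries):
    """Reconstruct the in-memory indirection table from metadata summaries
    stored on the solid state media, applying them in block-timestamp
    order so later mappings override earlier ones (sketch; fields assumed)."""
    table = {}
    for summary in sorted(summaries, key=lambda s: s["timestamp"]):
        table.update(summary["mappings"])
    return table

summaries = [
    {"timestamp": 2, "mappings": {10: "blk7", 11: "blk8"}},
    {"timestamp": 1, "mappings": {10: "blk3"}},
]
rebuild_indirection(summaries)   # client address 10 resolves to the newer "blk7"
```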
20150378840 | ENSURING THE SAME COMPLETION STATUS FOR TRANSACTIONS AFTER RECOVERY IN A SYNCHRONOUS REPLICATION ENVIRONMENT - Disclosed in some examples is a method including detecting that an RDMS is recovering from a failure; sending a request for a last committed transaction to the replication component; receiving, from the replication component, the last committed transaction, which identifies the transaction that was last committed at the replication component at the time of RDMS failure; determining that a transaction log on the RDMS includes a transaction that had not yet been replicated at the time of RDMS failure and that was committed on the transaction log subsequent to the last committed transaction received from the replication component; and, based on that determination, rolling back the transaction that had not yet been replicated at the time of RDMS failure. | 12-31-2015 |
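The determination step above can be sketched as scanning the local commit log for transactions that appear after the replica's last committed transaction; those were committed locally but never replicated, so they must be rolled back. The list-of-IDs log is an illustrative simplification.

```python
def transactions_to_roll_back(local_log, last_replicated_id):
    """Return transactions committed on the local log after the replication
    component's last committed transaction (sketch; log is an ordered list
    of transaction IDs, an assumed representation)."""
    seen_last = False
    pending = []
    for txn_id in local_log:
        if seen_last:
            pending.append(txn_id)
        if txn_id == last_replicated_id:
            seen_last = True
    return pending

log = ["t1", "t2", "t3", "t4"]
transactions_to_roll_back(log, "t2")   # "t3" and "t4" were never replicated
```

Rolling these back leaves the recovering RDMS and the replica agreeing on the completion status of every transaction.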
20160062818 | TRAFFIC CAPACITY BASED OPTIMIZATION OF SOA FAULT RECOVERY USING LINEAR PROGRAMMING MODEL - Various embodiments are presented for bulk recovery of faults in a service oriented architecture system. The number of faults submitted for recovery is determined based on the capacity of the system. A linear programming model is used to determine the maximum recovery capacity of the system. The maximum recovery capacity is configured to be below the capacity of the system. | 03-03-2016 |
20160092317 | STREAM-PROCESSING DATA - A method for stream-processing data that includes a missing part in real time, and thereafter updating the result of the stream processing. The technique includes receiving data; detecting a probable missing part in the received data while stream-processing the received data in real time; and comparing master data, which corresponds to the received data and has no missing part, with the probable missing part, and, if the received data has the missing part, updating the result of the stream processing using the master data. | 03-31-2016 |
20160170827 | EVENT LOGGING AND ERROR RECOVERY | 06-16-2016 |
20160203202 | METHOD AND APPARATUS FOR MAINTAINING REPLICA SETS | 07-14-2016 |
20190146886 | DATABASE SYSTEM RECOVERY USING PRELIMINARY AND FINAL SLAVE NODE REPLAY POSITIONS | 05-16-2019 |
20220138035 | READABLE DATA DETERMINATION - Data associated with a write request is stored at a storage device of multiple solid-state storage devices. A determination as to whether the data stored at the storage device is readable is made by determining whether a number of subsequent programming operations have been performed since the data was stored at the storage device. A notification that the stored data is readable from the storage device is generated upon determining that the data is readable. | 05-05-2022 |
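The readability check above can be sketched as a per-block counter: after a write, the block is reported readable only once enough subsequent programming operations have been observed. The threshold of two and the class shape are illustrative assumptions, not values from the application.

```python
class WriteTracker:
    """Track programming operations after each write and report which
    stored blocks have become readable (sketch; threshold is assumed)."""

    def __init__(self, threshold=2):
        self.threshold = threshold
        self.pending = {}        # block id -> programs seen since its write

    def record_write(self, block):
        """Start counting subsequent programming operations for a block."""
        self.pending[block] = 0

    def record_program(self):
        """Note one programming operation; return blocks now readable."""
        ready = []
        for block in list(self.pending):
            self.pending[block] += 1
            if self.pending[block] >= self.threshold:
                ready.append(block)      # generate "readable" notification
                del self.pending[block]
        return ready

t = WriteTracker()
t.record_write("blk0")
t.record_program()       # only one subsequent program: nothing readable yet
t.record_program()       # second program: "blk0" is now reported readable
```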