28th week of 2015 patent application highlights part 35 |
Patent application number | Title | Published |
20150193229 | EFFICIENT PROPAGATION OF SOFTWARE BASED ON SNAPSHOT TECHNOLOGIES ENABLING MORE EFFICIENT INFORMAL SOFTWARE BUILDS - Systems and methods are disclosed that substantially overcome the long delays and unproductive cycle time inherent in managing an informal (friendly) build platform. The systems and methods disclosed herein ensure that an official software base residing on a production machine propagates efficiently to the machine that generates the friendly builds, and furthermore, the friendly-build machine experiences minimal unproductive cycle time between successive friendly builds. Accordingly, developers experience improved access to, and more efficient use of, the computing device that generates the friendly executables. | 2015-07-09 |
20150193230 | ENTITY WIDE SOFTWARE TRACKING AND MAINTENANCE REPORTING TOOL - Embodiments of the invention are directed to a system, method, or computer program product for providing an entity wide software tracking and maintenance tool for monitoring maintenance and software updates across an entity. As such, the invention provides a uniform and stable method of monitoring software updates and software installation across an entity's information technology infrastructure. The invention receives software updates or new programs for installation across the entity. The invention then creates a tracking module to link to the maintenance update. The tracking module is a self-contained, self-describing module that contains static information related to the maintenance. Subsequently, the tracking module allows users to monitor the progress of maintenance levels. In some embodiments, the user may query the system to determine the progress of a specific maintenance. In some embodiments, the system may automatically notify a user of the success or failure of maintenance at one or more stages. | 2015-07-09 |
20150193231 | MANAGING CHANGE-SET DELIVERY - An approach that analyzes and manages unresolved (i.e., pending, outgoing) change-sets is provided. Specifically, this approach parses the change-set into a plurality (i.e., one or more) of changes to determine the impact each change may have. An alert may be provided to the user indicating whether the change-set should be checked-in based on the determined impact. Specifically, a change-set management tool provides this capability. The change-set management tool includes a parsing module configured to receive an outgoing change-set and to parse the change-set into a plurality of changes. The change-set management tool further comprises an evaluation module configured to evaluate an impact that each of the plurality of changes within the change-set has on source code external to the change-set and other changes of the plurality of changes within the change-set. | 2015-07-09 |
20150193232 | SYSTEMS AND METHODS FOR PROCESSING SENSOR DATA WITH A STATE MACHINE - A state machine may be implemented in hardware by representing each state with one operational code (opcode) such that each opcode may be read from memory in sequential order. The state machine includes a plurality of states linked by at least one transition triggered by data input. The data input may be motion sensor data such that the state machine is configured to recognize a pattern of motion corresponding to a gesture or an activity. | 2015-07-09 |
20150193233 | USING A SINGLE-INSTRUCTION PROCESSOR TO PROCESS MESSAGES - The disclosed embodiments describe single-instruction processors that operate on messages received from a network interface. A single-instruction processor comprises a register file, a functional unit, a bus connecting the register file and the functional unit, and a format decoder that receives messages from a network interface. This single-instruction processor supports a single instruction type (e.g., a “move instruction”) that specifies operands to be transferred via the bus. During operation, the format decoder is configured to write a parameter from a received message to the register file. A move instruction moves this parameter from the register file to the functional unit via the bus. The functional unit then uses the parameter to perform an operation. | 2015-07-09 |
20150193234 | REGISTER ALLOCATION FOR VECTORS - This disclosure describes techniques for allocating registers in a computing system that supports vector physical registers. The techniques for allocating registers may allocate physical registers to vector virtual registers based on priority information that is indicative of a relative importance of allocating respective vector virtual registers as vectors rather than scalars. The techniques for allocating registers may involve allocating physical registers to the vector virtual registers in an order that is determined based on the priority information. The techniques for allocating registers may further involve, in response to determining that no vector physical registers are available to assign to a vector virtual register, determining whether to perform vector-scalar live interval splitting for the vector virtual register, spill other register live intervals into a memory in order to allocate the vector virtual register as a vector, or assign scalar physical registers to the vector virtual register based on the priority information. | 2015-07-09 |
20150193235 | ARITHMETIC LOGICAL UNIT ARRAY, MICROPROCESSOR, AND METHOD FOR DRIVING AN ARITHMETIC LOGICAL UNIT ARRAY - In various embodiments an arithmetic logical unit array is provided, which may include: at least two data registers for storing data, a plurality of fixed instruction registers for storing machine code instructions, and at least one programmable instruction register for storing instruction data being representative for a machine code instruction. A selection circuit of the arithmetic logical unit array may be configured to select one of the machine code instructions from the fixed instruction registers or the machine code instruction represented by the instruction data. An arithmetic logical unit of the arithmetic logical unit array may be configured to apply an operation in accordance with the machine code instruction selected by the selection circuit to the data stored in the data registers. | 2015-07-09 |
20150193236 | LOW-MISS-RATE AND LOW-MISS-PENALTY CACHE SYSTEM AND METHOD - A method for assisting operations of a processor core coupled to a first memory and a second memory includes: examining instructions being filled from the first memory to the second memory to extract instruction information containing at least branch information of the instructions, and creating a plurality of tracks based on the extracted instruction information. Further, the method includes filling one or more instructions from the first memory to the second memory based on one or more tracks from the plurality of tracks before the processor core starts executing the instructions, such that the processor core fetches the instructions from the second memory for execution. Filling the instructions further includes pre-fetching from the first memory to the second memory instruction segments containing the instructions corresponding to at least two levels of branch target instructions based on the one or more tracks. | 2015-07-09 |
20150193237 | TECHNIQUES FOR HYBRID COMPUTER THREAD CREATION AND MANAGEMENT - A technique for operating a computer system to support an application, a first application server environment, and a second application server environment includes intercepting a work request relating to the application issued to the first application server environment prior to execution of the work request. A thread adapted for execution in the first application server environment is created. A context is attached to the thread that non-disruptively modifies the thread into a hybrid thread that is additionally suitable for execution in the second application server environment. The hybrid thread is returned to the first application server environment. | 2015-07-09 |
20150193238 | METHODS AND SYSTEMS FOR OPTIMALLY SELECTING AN ASSIST UNIT - Methods, apparatuses, and systems that allow a microprocessor to optimally select an assist unit (co-processor) to reduce completion times for completing processing requests to execute functions. The methods, apparatuses, and systems include assist unit hardware, assist unit management software, or a combination of the two to optimally select the assist unit for completing a specific processing request. In optimally selecting an assist unit, the methods, apparatuses, and systems calculate estimated times for completing the processing request with conventional means and with assist units. The times are then compared to determine the fastest time for completing a specific processing request. | 2015-07-09 |
20150193239 | ENHANCED SECURITY AND RESOURCE UTILIZATION IN A MULTI-OPERATING SYSTEM ENVIRONMENT - An approach is provided for operating a mobile device having first and second operating systems (OSs) installed. While the mobile device is executing the first OS but not the second OS, (1) based in part on battery power remaining in the mobile device being less than a threshold and a lower power consumption of the mobile device if executing the second OS but not the first OS, execution of the first OS is terminated and the second OS is executed in the mobile device; and/or (2) based in part on (a) the mobile device being currently located in the first geographic region which has a greater likelihood of attack on the mobile device, and (b) the mobile device being more secure while operating the second OS but not the first OS, execution of the first OS is terminated and the second OS is executed in the mobile device. | 2015-07-09 |
20150193240 | METHOD FOR IMPROVING THE PERFORMANCE OF COMPUTERS - In a method for improving the performance of a computer system by releasing computer resources, a list P of programs installed on a computer system is determined. All relevant extension points EP of the computer system are searched for registered entries. A list A of automatically starting programs is generated by assigning the registered entries at the relevant extension points EP to the installed programs, respectively. The list A of the automatically starting programs is compared with a list S of system-required programs and a list V of used programs. Programs that are not system-required and programs that have not been used for an extended period of time are deactivated, and the computer resources that were used by the deactivated programs are released. The deactivation of programs can be done by the user or automatically and can be cancelled when necessary. | 2015-07-09 |
20150193241 | MULTI-OPERATING SYSTEM MOBILE AND OTHER COMPUTING DEVICES WITH PROXY APPLICATIONS RUNNING UNDER A BROWSER - The invention provides, in some aspects, a computing device that includes a central processing unit that is coupled to a hardware interface and that executes a native operating system including one or more native runtime environments within which native software applications are executing. A first native software application executing within the one or more native runtime environments defines one or more hosted runtime environments within which hosted software applications are executing. One or more further native software applications (“IO proxies”), each executing within the one or more native runtime environments and each corresponding to a respective one of the one or more hosted software applications, receives the graphics generated by the respective hosted software application and effects writing of those graphics to the video frame buffer for presentation on the display of the computing device. | 2015-07-09 |
20150193242 | INSTRUCTION WINDOW CENTRIC PROCESSOR SIMULATION - A method and system are described for simulating a set of instructions to be executed on a processor. The method comprises performing a performance simulation of the processor over a number of simulation cycles. Performing the performance simulation of the processor comprises modeling an instruction window for the cycle and deriving a performance parameter of the processor without modeling a reorder buffer, issue queue(s), register renaming, load-store queue(s) and other buffers of the processor. | 2015-07-09 |
20150193243 | SYSTEM AND METHOD FOR EXTRACTING DATA FROM LEGACY DATA SYSTEMS TO BIG DATA PLATFORMS - A system and method for extracting data from legacy data management systems and making the data available to Big Data platforms with minimal use of compute and intermediate storage resources. | 2015-07-09 |
20150193244 | AUTONOMOUSLY MANAGED VIRTUAL MACHINE ANTI-AFFINITY RULES IN CLOUD COMPUTING ENVIRONMENTS - System, method, and computer program product to perform an operation comprising collecting performance metrics of a first virtual machine, and defining, based on the collected performance metrics, at least one rule to restrict collocation of the first virtual machine with other virtual machines on one or more host machines in a cloud computing environment. | 2015-07-09 |
20150193245 | AUTONOMOUSLY MANAGED VIRTUAL MACHINE ANTI-AFFINITY RULES IN CLOUD COMPUTING ENVIRONMENTS - System, method, and computer program product to perform an operation comprising collecting performance metrics of a first virtual machine, and defining, based on the collected performance metrics, at least one rule to restrict collocation of the first virtual machine with other virtual machines on one or more host machines in a cloud computing environment. | 2015-07-09 |
20150193246 | APPARATUS AND METHOD FOR DATA CENTER VIRTUALIZATION - An apparatus and method for a virtual data center. For example, one embodiment of a virtual data center apparatus comprises: a virtual datacenter layer comprising a plurality of virtual device controllers and data defining relationships between the virtual device controllers; wherein each of the virtual device controllers represents a physical data center resource and its associated configuration; a cloud mediation layer to map the plurality of virtual device controllers to the associated data center resources on a physical data center in response to a command to project the virtual datacenter layer onto a physical data center. | 2015-07-09 |
20150193247 | EFFICIENT GRAPHICS VIRTUALIZATION WITH ADDRESS BALLOONING - Systems and methods may provide for identifying an assigned address space of a virtual machine (VM), wherein the assigned address space is associated with a graphics memory. Additionally, the assigned address space may be ballooned to disable usage by the VM of a remaining address space in the graphics memory that is not assigned to the VM. In one example, a view of the assigned address space by the VM may be identical to a view of the assigned address space by a virtual machine monitor (VMM) associated with the VM. | 2015-07-09 |
20150193248 | Non-Blocking Unidirectional Multi-Queue Virtual Machine Migration - Methods, systems, and computer program products for non-blocking unidirectional multi-queue virtual machine migration are provided. A computer-implemented method may include maintaining information to track an association between a memory area in a virtual machine and a stream for a first stage of virtual machine migration, detecting one or more updates to the memory area during the first stage of migration, examining the information to identify the stream associated with the memory area for the first stage of migration, sending the updates to the memory area on the identified stream during the first stage of migration, modifying the information to associate the memory area with a new stream for a second stage of the migration, and sending updates to the memory area on the new stream during the second stage of migration. | 2015-07-09 |
20150193249 | IDLE PROCESSOR MANAGEMENT IN VIRTUALIZED SYSTEMS VIA PARAVIRTUALIZATION - A system and method are disclosed for managing idle processors in virtualized systems. In accordance with one embodiment, a hypervisor executing on a host computer receives an anticipated idle time for a processor of the host computer system from a guest operating system of a virtual machine executing on the host computer system. When the anticipated idle time divided by a performance multiplier exceeds an exit time of a first power state of the processor, the processor is caused to be halted. | 2015-07-09 |
20150193250 | VIRTUAL COMPUTER SYSTEM, MANAGEMENT COMPUTER, AND VIRTUAL COMPUTER MANAGEMENT METHOD - A virtual computer system includes: a plurality of computers on which at least one virtual computer operates on a hypervisor; and a management computer that manages the plurality of computers, wherein the management computer includes: an input unit that accepts an operation input of an operator; a screen generation unit that acquires, in a state where a first virtual computer operates on a first computer, progress information concerning a live migration in which the first virtual computer is transferred from the first computer to a second computer, the progress information being acquired from the first computer, that generates statistical information concerning the live migration on the basis of the acquired progress information, and that generates a statistics screen containing the statistical information; and an output unit that displays the statistics screen. | 2015-07-09 |
20150193251 | Method and system for gracefully shutting down a virtual system - The present disclosure discloses a method for gracefully shutting down a virtual system. In the method, graceful-shutdown configuration information configured for the virtual system is received and stored, and virtual machines are shut down sequentially according to the stored graceful-shutdown configuration information when a request to shut down the virtual system is received. The technical solutions of the embodiments of the present disclosure can solve the problems caused in the prior art by shutting down virtual machines strictly in the reverse order of their start-up. | 2015-07-09 |
20150193252 | METHOD AND APPARATUS FOR REMOTELY PROVISIONING SOFTWARE-BASED SECURITY COPROCESSORS - A virtual security coprocessor is created in a first processing system. The virtual security coprocessor is then transferred to a second processing system, for use by the second processing system. For instance, the second processing system may use the virtual security coprocessor to provide attestation for the second processing system. In an alternative embodiment, a virtual security coprocessor from a first processing system is received at a second processing system. After receiving the virtual security coprocessor from the first processing system, the second processing system uses the virtual security coprocessor. Other embodiments are described and claimed. | 2015-07-09 |
20150193253 | METHOD AND APPARATUS OF ACCESSING DATA OF VIRTUAL MACHINE - A method and device for accessing virtual machine (VM) data are described. A computing device for accessing virtual machine data comprises an access request process module, a data transfer proxy module and a virtual disk. The access request process module receives a data access request sent by a VM and adds the data access request to a request array. The data transfer proxy module obtains the data access request from the request array, maps the obtained data access request to a corresponding virtual storage unit, and maps the virtual storage unit to a corresponding physical storage unit of a distributed storage system. A corresponding data access operation may be performed based on a type of the data access request. | 2015-07-09 |
20150193254 | VIRTUAL MEDIA SHELF - A method and system for providing a guest with virtual media that can be read by the guest. A hypervisor hosted by a computer system presents a guest-to-host channel to a guest in the computer system. The hypervisor receives content from the guest via the guest-to-host channel, the content to be stored and managed by the hypervisor in a memory area associated with the guest in the computer system, the memory area not being directly accessible to the guest. The hypervisor then receives a request from the guest indicating that the guest is to perform at least one operation on the content, and provides the content for the guest to perform the at least one operation. | 2015-07-09 |
20150193255 | VIRTUAL MACHINE MULTICAST/BROADCAST IN VIRTUAL NETWORK - The performance of multicast and/or broadcasting between virtual machines over a virtual network. A source hypervisor accesses a network message originated from a source virtual machine, and uses the network message to determine a virtual network address associated with destination virtual machines (after potentially resolving group virtual network addresses). Using each virtual network address, the hypervisor determines a physical network address of the corresponding hypervisor that supports the destination virtual machine, and also determines a unique identifier for the destination virtual machine. The source hypervisor may then dispatch the network message along with the unique identifier to the destination hypervisor over the physical network using the physical network address of the hypervisor. The destination hypervisor passes the network message to the destination virtual machine identified by the unique identifier. | 2015-07-09 |
20150193256 | DIAGNOSTIC VIRTUAL MACHINE - A diagnostic virtual machine having access to resources of an infrastructure as a service cloud may be created. A user device may be provided access to the diagnostic virtual machine. In some embodiments, the diagnostic virtual machine may be configured to monitor a cluster of hypervisors, and the resources of the infrastructure as a service cloud which the diagnostic virtual machine has access to may include physical resources of the infrastructure as a service cloud that are associated with the cluster of hypervisors. | 2015-07-09 |
20150193257 | VIRTUAL MACHINE SERVICES - The present disclosure includes methods and systems for providing virtual machine services. A number of embodiments can include a user VM with a virtual workstation, a number of service modules that can provide a number of services without communicating with the user VM and/or the virtual workstation, a communication channel that allows the number of service modules to communicate with each other, a computing device, and a manager. A number of embodiments can also include a virtual machine monitor to enforce an isolation policy within the system. | 2015-07-09 |
20150193258 | VIRTUAL MACHINE PROVISIONING ENGINE - Embodiments described herein extend to methods, systems, and computer program products for setting up, configuring, and customizing one or more virtual machines. A scenario definition file may be accessed and parsed to provide information to a virtual machine provisioning server. A virtual machine is provisioned and instantiated according to the information contained in the scenario definition file. A virtual machine is instantiated upon a host machine. Upon instantiation, a virtual machine communicates with a custom action service to execute an action upon the virtual machine. | 2015-07-09 |
20150193259 | BOOSTING THE OPERATING POINT OF A PROCESSING DEVICE FOR NEW USER ACTIVITIES - A processing system detects user activities on one or more processing units. In response, an operating point (operating frequency or operating voltage) of the processing unit handling the user activity is increased at the processing unit. Battery power may be conserved in some processing systems by limiting the increase in the operating point to a time interval and reducing the operating frequency or the operating voltage to a previous value after the time interval has elapsed. | 2015-07-09 |
20150193260 | Executing A Gather Operation On A Parallel Computer That Includes A Plurality Of Compute Nodes - Executing a gather operation on a parallel computer that includes a plurality of compute nodes, including: dividing, by each task in an operational group of tasks, a send buffer containing contribution data into a plurality of chunks of data, each chunk of data located at an offset within the send buffer; sending, by each task in the operational group of tasks, one chunk of data to a root task through a data communications thread for each chunk of data; receiving the chunks of data by the root task; and storing, by the root task, each chunk of data in a receive buffer of the root task in dependence upon the offset of each chunk of data within the send buffer. | 2015-07-09 |
20150193261 | Administering Message Acknowledgements In A Parallel Computer - Administering message acknowledgements in a parallel computer that includes compute nodes, with each compute node including a processor and a messaging accelerator, includes: storing in a list, by a processor of a compute node, a message descriptor describing a message and an acknowledgement request descriptor describing a request for an acknowledgement of receipt of the message; processing, by a messaging accelerator of the compute node, the list, including transmitting, to a target compute node, the message described by the message descriptor and transmitting, to the target compute node, the request described by the acknowledgement request descriptor; receiving, by the messaging accelerator from the target compute node, an acknowledgement of receipt of the message, including notifying the processor of receipt of the acknowledgement; and removing, by the processor from the list, the message descriptor and the acknowledgment request descriptor. | 2015-07-09 |
20150193262 | CONSTRUCTING A LOGICAL TREE TOPOLOGY IN A PARALLEL COMPUTER - Constructing a logical tree topology in a parallel computer that includes compute nodes, where each compute node includes a hardware acceleration unit and executes an identical number of tasks and the tasks of each node have a rank, includes: creating hardware acceleration groups, with each hardware acceleration group including one task from each node, where the one task from each node has the same rank; assigning one task of a root compute node as a global root of the logical tree topology; assigning tasks of the root compute node other than the global root as local children of the global root; and assigning each of the global root and local children of the root compute node as a root of a subtree of tasks, wherein each subtree comprises the tasks of a hardware acceleration group. | 2015-07-09 |
20150193263 | Transaction Performance Monitoring - Aspects of the present disclosure provide systems and methods directed toward monitoring transactions between computing resources of a computing system. A transaction agent may monitor transactions between computing resources and generate transaction log information corresponding to the transactions. A current transaction rate for a current time period and a previous transaction rate for a previous time period may be automatically obtained based on transaction log information. The current transaction rate may be compared to the previous transaction rate, and a transaction rate alert may be generated responsive to determining that the previous transaction rate exceeds the current transaction rate. A current and previous transaction latency average may also be obtained based on transaction time information of the transaction log information. The transaction latency averages may be compared, and a transaction latency alert may be generated responsive to determining that the current transaction latency average exceeds the previous transaction latency average. | 2015-07-09 |
20150193264 | COMBINING SCALABILITY ACROSS MULTIPLE RESOURCES IN A TRANSACTION PROCESSING SYSTEM HAVING GLOBAL SERIALIZABILITY - There is disclosed a method and system for processing transactions requested by an application in a distributed computer system. The computer system includes at least one resource comprising a plurality of storage areas each with an associated resource manager, or a plurality of resources each comprising at least one storage area with an associated resource manager, the storage areas holding the same tables as each other. There is also provided a transaction manager that is linked, by way of either a network or a local application programming interface (API), to each of the resource managers, the transaction manager being configured to coordinate transaction prepare and commit cycles. The application requests operations on the resource by way of an interface; and a dispatch function directs transactions from the application to the appropriate storage areas on the basis of the content of the tables in the resource managers, in such a way that any given transaction is routed only to the storage areas containing entries upon which the transaction operates, allowing another transaction operating on different entries to be routed concurrently in parallel to other storage areas. A safe timestamp manager is provided to allocate new timestamps for committing transactions when such transactions access more than one resource storage area at the same time. | 2015-07-09 |
20150193265 | USING NONSPECULATIVE OPERATIONS FOR LOCK ELISION - A method includes identifying a set of instructions to be executed as a transaction that is to access a section of memory, prior to executing the set of instructions as the transaction, facilitating a non-speculative access to a data cache, the data cache comprising a plurality of cache lines, each cache line comprising a lock to lock a respective portion of the memory, determining if the section of memory is available for the transaction in view of locks of the plurality of cache lines, and in response to a determination that the section of memory is not available, causing the non-speculative access to the data cache to be repeated. | 2015-07-09 |
20150193266 | TRANSACTIONAL MEMORY HAVING LOCAL CAM AND NFA RESOURCES - An automaton hardware engine employs a transition table organized into 2 | 2015-07-09 |
20150193267 | SYSTEMS AND METHODS FOR A SAVE BACK FEATURE - Systems and methods for a save back feature are provided. In a restricted operating system environment, for example, a first application may be enabled to receive information back from a second application to which original information was sent from the first application. | 2015-07-09 |
20150193268 | FILE LOCK AND UNLOCK MECHANISM - A system and a method are disclosed for managing file locks, including initiating, by a processing device executing a kernel, executions of a number of active tasks that each has acquired a respective lock to a record, and in response to release of a first lock to the record by an active task, waking up a previously-designated worker task out of a number of idle tasks, in which the worker task is to attempt an acquisition of a second lock on behalf of at least one remaining task of the idle tasks. | 2015-07-09 |
20150193269 | EXECUTING AN ALL-TO-ALLV OPERATION ON A PARALLEL COMPUTER THAT INCLUDES A PLURALITY OF COMPUTE NODES - Executing an all-to-allv operation on a parallel computer that includes a plurality of compute nodes, including: packing, by each task in an operational group of tasks, vectored contribution data from vectored storage in an all-to-allv contribution data buffer into an all-to-all contribution data buffer, wherein two or more entries in the all-to-allv contribution data buffer are different in size and each entry in the all-to-all contribution data buffer is identical in size; executing with the contribution data as stored in the all-to-all contribution data buffer an all-to-all collective operation by the operational group of tasks; and unpacking, by each task in the operational group of tasks, received contribution data from the all-to-all contribution data buffer into the vectored storage in an all-to-allv contribution data buffer. | 2015-07-09 |
20150193270 | CONSTRUCTING A LOGICAL TREE TOPOLOGY IN A PARALLEL COMPUTER - Constructing a logical tree topology in a parallel computer that includes compute nodes, where each compute node includes a hardware acceleration unit and executes an identical number of tasks and the tasks of each node have a rank, includes: creating hardware acceleration groups, with each hardware acceleration group including one task from each node, where the one task from each node has the same rank; assigning one task of a root compute node as a global root of the logical tree topology; assigning tasks of the root compute node other than the global root as local children of the global root; and assigning each of the global root and local children of the root compute node as a root of a subtree of tasks, wherein each subtree comprises the tasks of a hardware acceleration group. | 2015-07-09 |
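The grouping rule in 20150193270 — one same-rank task from each node per hardware acceleration group, the root node's tasks heading the subtrees — can be sketched with tasks represented as (node, rank) pairs (an illustrative encoding, not from the application):

```python
def build_topology(num_nodes, tasks_per_node, root_node=0):
    """Each hardware acceleration group holds the same-rank task from every
    node. The root node's rank-0 task is the global root, its other tasks
    are the root's local children, and each of those heads the subtree
    formed by its own rank group."""
    groups = {r: [(n, r) for n in range(num_nodes)]
              for r in range(tasks_per_node)}
    global_root = (root_node, 0)
    local_children = [(root_node, r) for r in range(1, tasks_per_node)]
    subtrees = {head: groups[head[1]]
                for head in [global_root] + local_children}
    return global_root, local_children, subtrees

root, children, subtrees = build_topology(num_nodes=3, tasks_per_node=2)
```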
20150193271 | Executing An All-To-Allv Operation On A Parallel Computer That Includes A Plurality Of Compute Nodes - Executing an all-to-allv operation on a parallel computer that includes a plurality of compute nodes, including: packing, by each task in an operational group of tasks, vectored contribution data from vectored storage in an all-to-allv contribution data buffer into an all-to-all contribution data buffer, wherein two or more entries in the all-to-allv contribution data buffer are different in size and each entry in the all-to-all contribution data buffer is identical in size; executing with the contribution data as stored in the all-to-all contribution data buffer an all-to-all collective operation by the operational group of tasks; and unpacking, by each task in the operational group of tasks, received contribution data from the all-to-all contribution data buffer into the vectored storage in an all-to-allv contribution data buffer. | 2015-07-09 |
20150193272 | SYSTEM AND PROCESSOR THAT INCLUDE AN IMPLEMENTATION OF DECOUPLED PIPELINES - A system and apparatus are provided that include an implementation for decoupled pipelines. The apparatus includes a scheduler configured to issue instructions to one or more functional units and a functional unit coupled to a queue having a number of slots for storing instructions. The instructions issued to the functional unit are stored in the queue until the functional unit is available to process the instructions. | 2015-07-09 |
20150193273 | LATE CONSTRAINT MANAGEMENT - A method and system for integrating restrictions in an identity management system is provided. The method includes generating a role/account attribute table storage from static and dynamic rule defined values. A role request for a first role associated with a user is received, and a set of attributes comprising a result of the role request is calculated. The set of attributes is transmitted to a target system for evaluation and a result is received. | 2015-07-09 |
20150193274 | DATA SHUFFLING IN A NON-UNIFORM MEMORY ACCESS DEVICE - A method of orchestrated shuffling of data in a non-uniform memory access device that includes a plurality of processing nodes includes running an application on a plurality of threads executing on the plurality of processing nodes and identifying data to be shuffled from source threads running on source processing nodes among the processing nodes to target threads executing on target processing nodes among the processing nodes. The method further includes generating a plan for orchestrating the shuffling of the data among all of the memory devices associated with the threads and shuffling the data among all of the memory devices based on the plan. | 2015-07-09 |
20150193275 | BUILDING INTERACTIVE, DATA DRIVEN APPS - A method may be practiced in a computing environment including a first data processing system and a second data processing system. The method includes acts for rendering, on the second data processing system, a result derived from a set of data by performing data processing across the first data processing system and the second data processing system where the amount of processing performed by the first data processing system and the second data processing system can be dynamically adjusted depending on the capabilities of the second data processing system or factors affecting the second data processing system. | 2015-07-09 |
20150193276 | DYNAMICALLY MODIFYING PROGRAM EXECUTION CAPACITY - Techniques are described for managing program execution capacity, such as for a group of computing nodes that are provided for executing one or more programs for a user. In some situations, dynamic program execution capacity modifications for a computing node group that is in use may be performed periodically or otherwise in a recurrent manner, such as to aggregate multiple modifications that are requested or otherwise determined to be made during a period of time, and with the aggregation of multiple determined modifications being able to be performed in various manners. Modifications may be requested or otherwise determined in various manners, including based on dynamic instructions specified by the user, and on satisfaction of triggers that are previously defined by the user. In some situations, the techniques are used in conjunction with a fee-based program execution service that executes multiple programs on behalf of multiple users of the service. | 2015-07-09 |
20150193277 | ADMINISTERING A LOCK FOR RESOURCES IN A DISTRIBUTED COMPUTING ENVIRONMENT - In a distributed computing environment that includes compute nodes, where the compute nodes execute a plurality of tasks, a lock for resources may be administered. Administering the lock may be carried out by requesting, in an atomic operation by a requesting task, the lock, including: determining, by the requesting task, whether the lock is available; if the lock is available, obtaining the lock; and if the lock is unavailable, joining, by the requesting task, a queue of tasks waiting for availability of the lock. | 2015-07-09 |
20150193278 | ADMINISTERING A LOCK FOR RESOURCES IN A DISTRIBUTED COMPUTING ENVIRONMENT - In a distributed computing environment that includes compute nodes, where the compute nodes execute a plurality of tasks, a lock for resources may be administered. Administering the lock may be carried out by requesting, in an atomic operation by a requesting task, the lock, including: determining, by the requesting task, whether the lock is available; if the lock is available, obtaining the lock; and if the lock is unavailable, joining, by the requesting task, a queue of tasks waiting for availability of the lock. | 2015-07-09 |
20150193279 | Data Engine - Systems and methods for processing and/or presenting data are disclosed. In an aspect, one method can comprise receiving a request for information and detecting a type of data representing the information requested. The data can be processed via a type-dependent agent and the processed data can be provided via an agnostic data engine. | 2015-07-09 |
20150193280 | METHOD AND DEVICE FOR MONITORING API FUNCTION SCHEDULING IN MOBILE TERMINAL - Provided in embodiments of the present invention are a method and device for monitoring API function scheduling in a mobile terminal. The method comprises: preconfiguring at least one to-be-monitored API function and a response event corresponding to the at least one to-be-monitored API function; configuring one monitoring processing module on the basis of the at least one to-be-monitored API function; acquiring, in real time, current listening data outputted by a transmission function listening module; and, when the current listening data satisfies the response event, the monitoring processing module performing a monitoring processing corresponding to the response event. | 2015-07-09 |
20150193281 | ADMINISTERING INCOMPLETE DATA COMMUNICATIONS MESSAGES IN A PARALLEL COMPUTER - Administering incomplete data communications messages in a parallel computer that includes a plurality of compute nodes, with each compute node including a processor and a messaging accelerator, includes: transmitting, by a source messaging accelerator to a destination messaging accelerator, a message, including processing a messaging descriptor describing the message and setting, in the message descriptor, a flag indicating the message has been sent; transmitting, by the source messaging accelerator to a destination messaging accelerator responsive to processing an acknowledgement request descriptor corresponding to the message, a request for acknowledgment of receipt of the message; receiving, by the source messaging accelerator from the destination messaging accelerator, a negative acknowledgment (NACK) indicating that the message was not received at the destination messaging accelerator; and clearing, by the source messaging accelerator in the message descriptor, the flag indicating that message has been sent. | 2015-07-09 |
20150193282 | ADMINISTERING INCOMPLETE DATA COMMUNICATIONS MESSAGES IN A PARALLEL COMPUTER - Administering incomplete data communications messages in a parallel computer that includes a plurality of compute nodes, with each compute node including a processor and a messaging accelerator, includes: transmitting, by a source messaging accelerator to a destination messaging accelerator, a message, including processing a messaging descriptor describing the message and setting, in the message descriptor, a flag indicating the message has been sent; transmitting, by the source messaging accelerator to a destination messaging accelerator responsive to processing an acknowledgement request descriptor corresponding to the message, a request for acknowledgment of receipt of the message; receiving, by the source messaging accelerator from the destination messaging accelerator, a negative acknowledgment (NACK) indicating that the message was not received at the destination messaging accelerator; and clearing, by the source messaging accelerator in the message descriptor, the flag indicating that message has been sent. | 2015-07-09 |
20150193283 | EXECUTING A GATHER OPERATION ON A PARALLEL COMPUTER THAT INCLUDES A PLURALITY OF COMPUTE NODES - Executing a gather operation on a parallel computer that includes a plurality of compute nodes, including: dividing, by each task in an operational group of tasks, a send buffer containing contribution data into a plurality of chunks of data, each chunk of data located at an offset within the send buffer; sending, by each task in the operational group of tasks, one chunk of data to a root task through a data communications thread for each chunk of data; receiving the chunks of data by the root task; and storing, by the root task, each chunk of data in a receive buffer of the root task in dependence upon the offset of each chunk of data within the send buffer. | 2015-07-09 |
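The offset bookkeeping in 20150193283 — each task splits its send buffer into chunks at offsets, and the root stores each arriving chunk back at its original offset regardless of arrival order — can be sketched as:

```python
import random

def gather(send_buffers, chunk_size, seed=0):
    """Each task splits its send buffer into chunks at offsets; the root
    reassembles by writing each chunk at its recorded offset, so arrival
    order does not matter."""
    messages = []
    for rank, buf in enumerate(send_buffers):
        for off in range(0, len(buf), chunk_size):
            messages.append((rank, off, buf[off:off + chunk_size]))
    random.Random(seed).shuffle(messages)  # chunks may arrive in any order
    recv = [[None] * len(buf) for buf in send_buffers]
    for rank, off, chunk in messages:
        recv[rank][off:off + len(chunk)] = chunk
    return recv
```

The shuffle stands in for the independent data communications threads of the abstract; carrying the offset with every chunk is what makes the out-of-order reassembly safe.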
20150193284 | HOST/HOSTED HYBRID APPS IN MULTI-OPERATING SYSTEM MOBILE AND OTHER COMPUTING DEVICES - According to further aspects of the invention, there is provided a computing device that executes a hybrid application in a single application address space established within a runtime environment defined under a native operating system executing on the device. That hybrid application includes (i) instructions comprising a “hosted” software application built and intended for execution under an operating system that differs from the native operating system, i.e., a hosted operating system, and (ii) instructions from at least one of a runtime library and another resource of the native runtime environment. | 2015-07-09 |
20150193285 | HOSTED APP INTEGRATION SERVICES IN MULTI-OPERATING SYSTEM MOBILE AND OTHER COMPUTING DEVICES - The invention provides, in some aspects, a computing device that includes a central processing unit that is coupled to a hardware interface and that executes a native operating system including one or more native runtime environments within which native software applications are executing. A first native software application executing within the one or more native runtime environments defines one or more hosted runtime environments within which hosted software applications are executing. One or more further native software applications (“IO proxies”), each executing within the one or more native runtime environments and each corresponding to a respective one of the one or more hosted software applications, receives the graphics generated by the respective hosted software application and effects writing of those graphics to the video frame buffer for presentation on the display of the computing device. | 2015-07-09 |
20150193286 | ASYNCHRONOUS MESSAGE PASSING - This specification describes technologies relating to software execution. A computing device includes a processor. An operating system includes an execution environment in which applications can execute computer-specific commands. A web-browser application includes a scripting environment for interpreting scripted modules. The web-browser application further includes a native environment in which native modules can execute computer-specific commands. The web-browser application further includes an interface between the scripting environment and the native environment. The interface includes functions to asynchronously pass data objects by value, from one of the scripting environment and the native environment, to the other of the scripting environment and the native environment. | 2015-07-09 |
20150193287 | BUS INTERFACE OPTIMIZATION BY SELECTING BIT-LANES HAVING BEST PERFORMANCE MARGINS - A bus interface selects bit-lanes to allocate as spares by testing the performance margins of individual bit-lanes during initialization or calibration of the bus interface. The performance margins of the individual bit-lanes are evaluated as the operating frequency of the interface is increased until a number of remaining bit-lanes that meet specified performance margins is equal to the required width of the interface. The bit-lanes that do not meet the required performance margins are allocated as spares and the interface can be operated at the highest evaluated operating frequency. When an operating bit-lane fails, one of the spare bit-lanes is allocated as a replacement bit-lane and the interface operating frequency is reduced to a frequency at which the new set of operating bit-lanes meets the performance margins. The operating frequency of the interface can be dynamically increased and decreased during operation and the performance margins evaluated to optimize performance. | 2015-07-09 |
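The calibration loop in 20150193287 — raise the operating frequency while enough lanes still meet margin, then spare the rest — can be sketched as follows; the `passes` predicate is a stand-in for the real per-lane margin test:

```python
def calibrate(lanes, width, freqs, passes):
    """Walk frequencies in ascending order and keep the highest frequency
    at which at least `width` lanes still meet their performance margins;
    lanes beyond the required width become spares."""
    best = None
    for f in freqs:
        ok = [l for l in lanes if passes(l, f)]
        if len(ok) < width:
            break          # too few lanes meet margin at this frequency
        best = (f, ok)
    f, ok = best
    operating = ok[:width]
    spares = [l for l in lanes if l not in operating]
    return f, operating, spares

# Synthetic margins: lane l meets margin up to max_freq[l] (illustrative).
max_freq = {0: 400, 1: 300, 2: 400, 3: 200}
result = calibrate([0, 1, 2, 3], 2, [100, 200, 300, 400],
                   lambda l, f: f <= max_freq[l])
```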
20150193288 | Precursor Adaptation Algorithm for Asynchronously Clocked SERDES - A system may include one or more high-speed serial interfaces for moving data. A system may include a transmission unit configured to serially transmit data bits, and a receiving unit coupled to the transmission unit. The receiving unit may receive a stream of data bits from the transmission unit and establish an initial sample point. The receiving unit may then sample the bits at multiple offsets from the initial sample point, reestablishing the initial sample point between each offset. The receiving unit may also calculate bit error rates (BERs) for the samples taken at each sample point. Based on the BERs, the receiving unit may set a data sampling point for receiving a second stream of data bits from the transmitter unit. The receiving unit may limit the amount of time the data sampling point is used and recalculate the data sampling point when the amount of time has expired. | 2015-07-09 |
20150193289 | EFFICIENT DATA SYSTEM ERROR RECOVERY - Dynamically adjusting an error threshold in a data system based on system status changes caused by an external environment, an internal status, or both. | 2015-07-09 |
20150193290 | Method And System For Constructing Component Fault Tree Based On Physics Of Failure - A method and system for constructing a component fault tree based on physics of failure are disclosed. The method includes the steps of: establishing, based on common characteristics of component physics of failure and according to six layers based on physics of failure and category of the component, a fault information database containing information of the six layers based on physics of failure; constructing, based on the fault information database and according to the six layers based on physics of failure and logical relationship of physics of failure, a component fault tree of n levels of events of six layers based on physics of failure; and simplifying the fault tree by means of failure mechanism sub-tree transferring and fault module sub-tree importing. The method and system are applicable to construction of fault trees of various components. | 2015-07-09 |
20150193291 | ERROR HANDLING METHOD, MEMORY STORAGE DEVICE AND MEMORY CONTROLLING CIRCUIT UNIT - An error handling method, a memory storage device and a memory controlling circuit unit are provided. The method includes obtaining a finished event corresponding to a channel; determining whether the finished event is a failed event; if the finished event is the failed event, stopping an operation of the channel and performing a first update operation on a counting value corresponding to the channel; and if the finished event is not the failed event, keeping the counting value corresponding to the channel unchanged and processing the finished event. The step of processing the finished event includes performing a second update operation on the counting value corresponding to the channel if the finished event is the failed event, and recovering the operation of the channel if the counting value matches a threshold criterion. Accordingly, access performance can be improved. | 2015-07-09 |
20150193292 | MICROCONTROLLER DEVICE AND CONTROLLING METHOD PERFORMED THEREIN - In one aspect, the present disclosure provides a microcontroller device that has, in one chip: a central processing unit; a plurality of peripheral circuits configured to execute respective prescribed processes in response to corresponding trigger signals; and a peripheral control unit that controls respective activations of the plurality of peripheral circuits, wherein at least one of the peripheral circuits is configured to: control operation of an external device; determine whether or not the operation of the external device has ended without an error; enter a standby mode to accept a next trigger signal when the operation of the external device ended without an error; and generate an interrupt signal to interrupt the central processing unit when the operation of the external device ended with an error. | 2015-07-09 |
20150193293 | ADVANCED PROGRAMMING VERIFICATION SCHEMES FOR MEMORY CELLS - A method for data storage includes receiving in a memory device data for storage in a group of memory cells. The data is stored in the group by performing a Program and Verify (P&V) process, which applies to the memory cells in the group a sequence of programming pulses and compares respective analog values of the memory cells in the group to respective verification thresholds. Immediately following successful completion of the P&V process, a mismatch between the stored data and the received data is detected in the memory device. An error in storage of the data is reported responsively to the mismatch. | 2015-07-09 |
20150193294 | OPTIMIZING APPLICATION AVAILABILITY - An approach to an optimal application configuration. The approach includes a method that includes computing, by at least one computing device, an actual application impact based on an “N” number of failing information technology (IT) infrastructure components within an application architecture. The method includes determining, by the at least one computing device, a factor in likelihood of failure of the “N” number of IT infrastructure components. The method includes determining, by the at least one computing device, a failure profile for the application architecture based on the actual application impact and the factor in likelihood of failure. | 2015-07-09 |
20150193295 | DETERMINING A NUMBER OF UNIQUE INCIDENTS IN A PLURALITY OF INCIDENTS FOR INCIDENT PROCESSING IN A DISTRIBUTED PROCESSING SYSTEM - Methods, apparatuses, and computer program products for determining a number of unique incidents in a plurality of incidents for incident processing in a distributed processing system are provided. Embodiments include an incident analyzer identifying within the plurality of incidents, attribute combination entries of location identifications and incident types and analyzing each location identification in each attribute combination entry according to a sequence of the attribute combination entries including creating attribute pairs. The incident analyzer is also configured to count the attribute pairs. The number of attribute pairs is the number of unique incidents in the plurality of incidents. | 2015-07-09 |
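The counting step in 20150193295 reduces to building the set of distinct (location identification, incident type) attribute pairs; a minimal sketch:

```python
def count_unique_incidents(incidents):
    """incidents: iterable of (location_id, incident_type) attribute
    combinations. The number of distinct pairs is the number of unique
    incidents in the plurality of incidents."""
    return len({(loc, kind) for loc, kind in incidents})
```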
20150193296 | RUN-TIME ERROR REPAIRING METHOD, DEVICE AND SYSTEM - Various embodiments of the present disclosure describe a method, device, and system for repairing run-time errors. The method includes: at a client side, obtaining dump file information and version information of an application where a run-time error occurs; processing the obtained dump file information and version information according to a preset algorithm to obtain an error identification associated with the run-time error; sending an error report carrying the error identification to an error information acquisition server; receiving a repair application issued by the error information acquisition server according to the error identification; and activating the repair application to perform repairing. When embodiments of the present disclosure are employed, the time required for repairing application run-time errors can be reduced. | 2015-07-09 |
20150193297 | READ TECHNIQUE FOR A BUS INTERFACE SYSTEM - Embodiments of a bus interface system are disclosed. The bus interface system includes a master bus controller and a slave bus controller coupled to a bus line. The master bus controller and the slave bus controller are configured to perform read operations using error codes and error checks. For example, the error codes may be cyclic redundancy codes (CRC). In this manner, accuracy is ensured during communications between the slave bus controller and the master bus controller. | 2015-07-09 |
20150193298 | WRITE TECHNIQUE FOR A BUS INTERFACE SYSTEM - Embodiments of a bus interface system are disclosed. In one embodiment, the bus interface system includes a master bus controller and a slave bus controller coupled to a bus line. The master bus controller is configured to generate a first set of data pulses along the bus line representing a payload segment. The slave bus controller is configured to decode the first set of data pulses representing the payload segment into a decoded payload segment. The slave bus controller is then configured to perform a first error check on the decoded payload segment. Furthermore, the slave bus controller is configured to generate an acknowledgment signal along the bus line so that the acknowledgement signal indicates that the decoded payload segment passed the first error check. In this manner, the master bus controller can determine that the slave bus controller received an accurate copy of the payload segment. | 2015-07-09 |
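The write-side check in 20150193298 (and the CRC-based read technique of 20150193297) can be illustrated with a standard CRC-32: the master sends the payload with its code, and the slave's recomputation decides between acknowledgment and NACK. Function names here are illustrative:

```python
import zlib

def master_write(payload: bytes):
    """Master sends the payload segment together with its CRC-32."""
    return payload, zlib.crc32(payload)

def slave_check(payload: bytes, crc: int) -> bool:
    """Slave recomputes the CRC over the decoded payload segment; True
    models the acknowledgment signal, False a failed error check."""
    return zlib.crc32(payload) == crc

payload, crc = master_write(b"payload segment")
ack = slave_check(payload, crc)
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]   # flip one bit
nack = slave_check(corrupted, crc)
```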
20150193299 | SELECTIVE COPYBACK FOR ON DIE BUFFERED NON-VOLATILE MEMORY - Apparatuses, systems, methods, and computer program products are disclosed for on die buffered non-volatile memory management. A method includes storing data in a first set of non-volatile memory cells. A method includes determining one or more attributes associated with data. A method includes determining whether to store data in a second set of non-volatile memory cells based on one or more attributes. A second set of non-volatile memory cells may be configured to store more bits per cell than a first set of non-volatile memory cells. | 2015-07-09 |
20150193300 | REDUNDANT DATA STORAGE SCHEMES FOR MULTI-DIE MEMORY SYSTEMS - A method for data storage includes storing data in a memory that includes one or more memory units, each memory unit including memory blocks. The stored data is compacted by copying at least a portion of the data from a first memory block to a second memory block, and subsequently erasing the first memory block. Upon detecting a failure in the second memory block after copying the portion of the data and before erasure of the first memory block, the portion of the data is recovered by reading the portion from the first memory block. | 2015-07-09 |
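The ordering that 20150193300 relies on — copy to the second block first, erase the first block only afterwards — is what makes the recovery path possible; a toy sketch:

```python
def compact(blocks, src, dst, dst_failed=False):
    """Copy the data from blocks[src] to blocks[dst], erasing src only
    after the copy. If a failure is detected in dst before the erasure,
    the data is still recoverable from the intact source block."""
    blocks[dst] = list(blocks[src])       # copy first
    if dst_failed:                        # failure detected before erasure
        return list(blocks[src])          # recover from the source block
    blocks[src] = []                      # only now is it safe to erase
    return blocks[dst]
```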
20150193301 | MEMORY CONTROLLER AND MEMORY SYSTEM - A controller according to one embodiment controls a memory, the memory including blocks and configured to erase data in the blocks with each of the blocks as a minimum unit. Each of the blocks includes unit memory areas each specified by an address. The controller is configured to add a code for error correction to received data to generate a data unit, divide the data unit into data unit sections, and write the data unit sections in unit memory areas of respective blocks, the unit memory areas having different addresses. | 2015-07-09 |
20150193302 | SELECTIVE ECC REFRESH FOR ON DIE BUFFERED NON-VOLATILE MEMORY - Apparatuses, systems, methods, and computer program products are disclosed for on die buffered non-volatile memory management. A method includes storing data in a first set of non-volatile memory cells. A method includes determining whether to perform an error-correcting code (ECC) refresh for data to be copied from a first set of non-volatile memory cells to a second set of non-volatile memory cells based on one or more attributes associated with the data. A method includes storing data in a second set of non-volatile storage cells configured to store more bits per cell than a first set of non-volatile storage cells. | 2015-07-09 |
20150193303 | RECONSTRUCTIVE ERROR RECOVERY PROCEDURE (ERP) USING RESERVED BUFFER - In one embodiment, a method for assembling data from a medium includes reading a data set from the medium repeatedly using different settings until either: a reconstructed data set is obtained, or a maximum number of rereads has been reached, the data set including a plurality of sub data sets, each sub data set having a plurality of rows, and after each reread of the data set, good rows of data are stored to iteratively construct a good data set from a plurality of good rows as determined by C1 and/or C2 error correction code (ECC). | 2015-07-09 |
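The ERP loop in 20150193303 — reread with different settings, harvesting whichever rows decode cleanly each time until a complete data set is assembled or the reread budget runs out — can be sketched with a stand-in `row_ok` predicate in place of the C1/C2 ECC check:

```python
def assemble(reread, row_ok, nrows, max_rereads):
    """Iteratively construct a good data set: keep rereading until every
    row index has yielded one good row, or the reread budget is spent.
    `reread(attempt)` returns the rows of one read with attempt-specific
    settings; `row_ok` stands in for the C1/C2 ECC check."""
    good = {}
    for attempt in range(max_rereads):
        for i, row in enumerate(reread(attempt)):
            if i not in good and row_ok(row):
                good[i] = row            # harvest newly good rows
        if len(good) == nrows:
            return [good[i] for i in range(nrows)]
    return None                          # could not reconstruct

# Two reads, each with a different bad row (None marks an ECC failure).
reads = [["r0", None, "r2"], [None, "r1", "r2"]]
result = assemble(lambda a: reads[a], lambda r: r is not None, 3, 2)
```

The point of the reserved buffer in the abstract is exactly the `good` map here: rows recovered on earlier attempts survive later, differently-configured rereads.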
20150193304 | SINGLE AND MULTI-CUT AND PASTE (C/P) RECONSTRUCTIVE ERROR RECOVERY PROCEDURE (ERP) USING HISTORY OF ERROR CORRECTION - In one embodiment, an apparatus for reading data from a data storage medium includes a processor and logic integrated with and/or executable by the processor, the logic being configured to: read data from a data storage medium, the data including a plurality of data sets, determine that an error condition is detected for a data set read from the data storage medium, determine whether the data set was read from the data storage medium using multiple cut and paste (C/P) error recovery procedure (ERP) (C/P ERP Multi), and when the data set was read from the data storage medium using C/P ERP Multi: continue reading data from the data storage medium normally when the detected error condition has been overcome using C/P ERP Multi; otherwise, continue using C/P ERP Multi to read data from the data storage medium until the error condition is overcome. | 2015-07-09 |
20150193305 | METHOD AND DEVICE FOR AUTO RECOVERY STORAGE OF JBOD ARRAY - A method for auto recovery storage of a JBOD array is disclosed. The method includes: determining whether a disk of the JBOD array has failed; upon determining the failed disk, deleting a storage resource according to a storage resource list when the failed disk stores an index area; updating the index area corresponding to a recorded data area when the failed disk stores the storage resource of the recorded data area instead of the index area; transmitting control instructions indicative of adding a hot spare to the JBOD array so as to add the hot spare to the JBOD array; and, after the hot spare is added to the JBOD array, adding and activating the storage resource for the failed disk that stored the index area. | 2015-07-09 |
20150193306 | QUANTUM COMMUNICATION DEVICE, QUANTUM COMMUNICATION METHOD, AND COMPUTER PROGRAM PRODUCT - According to an embodiment, a quantum communication device includes a sift processor, an estimator, a determination unit, and a corrector. The sift processor is configured to acquire sift processing data by referring to a cryptographic key bit string in a predetermined bit string with a reference basis randomly selected from a plurality of bases via a quantum communication channel. The estimator is configured to acquire an estimated error rate of the sift processing data. The determination unit is configured to determine order of the sift processing data in which an error is to be corrected based on the estimated error rate and difference data between a processing speed of error correcting processing and a processing speed of privacy amplification processing. The corrector is configured to acquire one piece of the sift processing data in the order determined by the determination unit, and generate error correcting processing data. | 2015-07-09 |
20150193307 | DIFFERENTIAL SIGNAL REVERSION AND CORRECTION CIRCUIT AND METHOD THEREOF - A differential signal reversion and correction circuit and a method thereof are provided. The circuit includes: a data frame sending module that, when link conditions are detected, generates a specific logic sequence and completes the sending through an input/output port, such that a receiving side receives, processes, and analyzes the sequence and determination of link transmission conditions is achieved; a comparator of the receiving side, which receives the sequence data and performs corresponding comparing, checking, and feedback controlling, thereby achieving the link detection and differential correction purpose; and a reversion control signal generating module, which receives a comparison result of the comparator, generates a corresponding control signal, and controls whether the link performs a reversion operation. | 2015-07-09 |
20150193308 | MEMORY CHIPS AND DATA PROTECTION METHODS - A memory chip coupled to a host includes a memory and a controller. Multiple boot images having the same content are pre-loaded in the memory. The controller is coupled to the memory for processing data transmission between the memory chip and the host. The controller further determines whether the memory chip enters a boot mode for the first time. When the memory chip enters the boot mode for the first time, the controller accesses the memory so as to obtain a correct boot image from the boot images and transmits the correct boot image to the host. | 2015-07-09 |
20150193309 | CONFIGURING STORAGE RESOURCES OF A DISPERSED STORAGE NETWORK - A method begins by a processing module of a dispersed storage network (DSN) ascertaining a decode threshold value for dispersed storage error encoding data for storage in storage units of the DSN. The method continues with the processing module determining a total width value for the dispersed storage error encoding based on the decode threshold value, a number of selected sites within the DSN, and a number of selected storage units of the selected sites. The method continues with the processing module determining logical storage slots within the selected storage units based on the total width value, the number of selected sites, and the number of selected storage units. The method continues with the processing module writing a set of encoded data slices to a total width value of the logical storage slots within at least some of the selected storage units of the selected sites based on a slice-to-slot mapping. | 2015-07-09 |
20150193310 | EFFICIENT BACKUP REPLICATION - A system for backup replication comprises a processor and a memory. The processor is configured to determine data present in a most recent backup not present in a previous backup; transmit an extent specification; and transmit data segment fingerprints of the one or more data segments. The memory is coupled to the processor and is configured to provide the processor with instructions. | 2015-07-09 |
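The core of the replication step above is computing what is new in the latest backup. A minimal sketch, modeling each backup as a mapping from segment fingerprint to data (a simplification of the extent/fingerprint protocol in the abstract):

```python
def changed_segments(recent_backup, previous_backup):
    """Return fingerprints of data segments present in the most recent
    backup but not in the previous one, i.e. what must be transmitted."""
    return {fp for fp in recent_backup if fp not in previous_backup}
```

Only the fingerprints in the returned set would need their segment data sent, alongside the extent specification.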
20150193311 | MANAGING PRODUCTION DATA - Various embodiments for managing production data are described herein. In one example of a method for managing production data, the method can include allocating, via a processor, a first storage area to store production data for an external computing device. The method can also include receiving a write request comprising production data to be stored in the first storage area. In addition, the method can include detecting that the first storage area does not have available space to store the production data and allocating, via a processor, a second storage area to store the production data. Furthermore, the method can include transferring, via a processor, production data stored in the first storage area to a backup device. | 2015-07-09 |
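A compact sketch of the described flow: writes fill a first storage area, and when a write no longer fits, the first area's contents are transferred to a backup device and a fresh area is allocated for the new data. The fixed per-area capacity and list-based model are illustrative assumptions.

```python
class ProductionStore:
    """Models the allocate / detect-full / transfer-to-backup cycle."""

    def __init__(self, area_capacity):
        self.capacity = area_capacity
        self.area = []      # currently active storage area
        self.backup = []    # backup device contents

    def write(self, record):
        if len(self.area) >= self.capacity:   # first area has no space
            self.backup.extend(self.area)     # transfer to backup device
            self.area = []                    # allocate a second area
        self.area.append(record)
```

After three writes into a two-record area, the first two records sit on the backup device and the third occupies the newly allocated area.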
20150193312 | SELECTING A RESOURCE TO BE USED IN A DATA BACKUP OR RESTORE OPERATION - Techniques for selecting a resource to be used in a data backup or restore operation are described in various implementations. An example method that implements the techniques may include determining, using a computing system, diagnostic information associated with a plurality of candidate resources that are available for use in a data backup or restore operation. The method may also include selecting, using the computing system, a recommended resource from among the plurality of candidate resources, the recommended resource being selected based at least in part on the diagnostic information. The method may also include causing the data backup or restore operation to be performed using the recommended resource. | 2015-07-09 |
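A sketch of the selection step. The abstract does not define the diagnostic criteria, so the scoring policy here (fewest recent errors, then highest throughput) is an assumed example, as are the metric names.

```python
def recommend_resource(candidates):
    """Pick a recommended resource from candidate diagnostic information.

    `candidates` maps a resource name to assumed metrics; ties on error
    count are broken by preferring higher throughput.
    """
    return min(candidates,
               key=lambda name: (candidates[name]["errors"],
                                 -candidates[name]["throughput"]))
```

A fast but error-prone candidate loses to a clean, slower one under this policy, which matches the intent of steering the backup or restore toward a healthy resource.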
20150193313 | DATA TRANSFER AND RECOVERY - A backup image generator can create a primary image and periodic delta images of all or part of a primary server. The images can be sent to a network attached storage device and one or more remote storage servers. In the event of a failure of the primary server, an updated primary image may be used to provide an up-to-date version of the primary system at a backup or other system. As a result, the primary data storage may be timely backed-up, recovered and restored with the possibility of providing server and business continuity in the event of a failure. | 2015-07-09 |
20150193314 | MANAGING PHYSICAL RESOURCES OF A STORAGE SYSTEM - A method for managing physical resources of a storage system, the method may include transmitting, to a remote site, first information representative of a first snapshot of a logical entity; wherein the first snapshot is associated with first data that is stored in first physical addresses of the storage system; wherein the first physical addresses are mapped to first logical addresses; receiving from the remote site a first acknowledgment indicating that the first information was fully received by the remote site; and disassociating, in response to a reception of the first acknowledgement, the first snapshot from the first physical addresses while maintaining a logical association between the first snapshot and the first logical addresses. | 2015-07-09 |
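A minimal sketch of the acknowledgment-driven disassociation above: a snapshot keeps both its logical-address association and its physical-address references, and once the remote site acknowledges full receipt, the physical references are released while the logical association is retained. The dictionary model is an illustrative assumption.

```python
class SnapshotManager:
    """Tracks logical and physical address associations per snapshot."""

    def __init__(self):
        self.snapshots = {}  # snapshot id -> {"logical": [...], "physical": [...]}

    def create(self, snap_id, logical_addrs, physical_addrs):
        self.snapshots[snap_id] = {"logical": list(logical_addrs),
                                   "physical": list(physical_addrs)}

    def on_remote_ack(self, snap_id):
        # Disassociate the physical addresses (freeing them for reuse)
        # while maintaining the logical association.
        self.snapshots[snap_id]["physical"] = []
```

After the acknowledgment, the snapshot can still be addressed logically even though its local physical space has been reclaimed.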
20150193315 | METHOD AND DEVICE FOR MANAGING MULTIPLE SNAPSHOTS OF DATA STORAGE DEVICE - A method and device for managing multiple snapshots of a data storage device are provided. A method of backing up multiple snapshots includes determining whether to perform a copy-on-write (COW) operation when a write or update operation is performed on a data block of the storage medium, backing up original data of the data block by recording the original data in a snapshot storage location when it is determined to perform the COW operation, and recording snapshot mapping information, comprising a time and a physical address (PA) at which the original data is recorded in the snapshot storage location, in a linked list (LL) corresponding to a logical address (LA) of the data block. | 2015-07-09 |
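The COW bookkeeping described above can be sketched as follows: on an overwrite, the original data is copied to a snapshot location and a (time, physical address) entry is appended to a per-logical-address list. The in-memory structures stand in for the storage medium and are assumptions for illustration.

```python
import time

class CowSnapshots:
    """Copy-on-write snapshot bookkeeping keyed by logical address."""

    def __init__(self):
        self.blocks = {}          # logical address -> current data
        self.snapshot_store = []  # backing store for copied-out originals
        self.mapping = {}         # logical address -> [(time, physical addr)]

    def write(self, la, data):
        if la in self.blocks:                        # overwrite: COW needed
            pa = len(self.snapshot_store)            # next snapshot slot
            self.snapshot_store.append(self.blocks[la])
            self.mapping.setdefault(la, []).append((time.time(), pa))
        self.blocks[la] = data
```

Each logical address thus accumulates a time-ordered list of where its earlier versions were preserved, which is what allows multiple snapshots to coexist.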
20150193316 | BUS INTERFACE OPTIMIZATION BY SELECTING BIT-LANES HAVING BEST PERFORMANCE MARGINS - A bus interface selects bit-lanes to allocate as spares by testing the performance margins of individual bit-lanes during initialization or calibration of the bus interface. The performance margins of the individual bit-lanes are evaluated as the operating frequency of the interface is increased until a number of remaining bit-lanes that meet specified performance margins is equal to the required width of the interface. The bit-lanes that do not meet the required performance margins are allocated as spares and the interface can be operated at the highest evaluated operating frequency. When an operating bit-lane fails, one of the spare bit-lanes is allocated as a replacement bit-lane and the interface operating frequency is reduced to a frequency at which the new set of operating bit-lanes meets the performance margins. The operating frequency of the interface can be dynamically increased and decreased during operation and the performance margins evaluated to optimize performance. | 2015-07-09 |
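A sketch of the calibration loop described above: raise the operating frequency while enough lanes still meet the margin, then allocate the failing lanes (and any surplus passing lanes) as spares. The margin callback and its per-lane return shape are assumptions; a real implementation would measure eye margins in hardware.

```python
def tune_bus(lane_margins_at, frequencies, required_width, min_margin):
    """Pick the highest frequency at which enough lanes meet the margin.

    `lane_margins_at(freq)` is an assumed callback returning one margin
    value per lane index at that frequency.  Returns (frequency,
    active lanes, spare lanes), or None if no frequency qualifies.
    """
    best = None
    for freq in sorted(frequencies):
        margins = lane_margins_at(freq)
        passing = [lane for lane, m in enumerate(margins) if m >= min_margin]
        if len(passing) >= required_width:
            failing = [lane for lane in range(len(margins)) if lane not in passing]
            # Surplus passing lanes also become spares.
            best = (freq, passing[:required_width],
                    failing + passing[required_width:])
    return best
```

On a lane failure at runtime, a spare would be swapped in and this search rerun at lower frequencies, mirroring the dynamic adjustment the abstract describes.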
20150193317 | RECOVERY OF A NETWORK INFRASTRUCTURE TO FACILITATE BUSINESS CONTINUITY - Methods and systems for disaster recovery of a network infrastructure to facilitate business continuity. A method including capturing, by at least one computer device, data and ecology information about an entire existing network infrastructure. The method further including generating, by the at least one computer device, a generalized descriptive language for the captured data and ecology information. The method further including reconstructing, by the at least one computer device, the entire existing network infrastructure by introducing functionally equivalent components that correspond to the generalized descriptive language. | 2015-07-09 |
20150193318 | MIRRORING DEVICE HAVING GOOD FAULT TOLERANCE AND CONTROL METHOD THEREOF, AND STORAGE MEDIUM - A mirroring device that can improve, even when two storage devices to which an upper limit is set for the number of rewrites of data are used, the fault tolerance of the mirroring device while preventing one of the storage devices from reaching the lifetime thereof early. A mirroring device comprises two storage devices to which an upper limit is set for the number of rewrites of data. Remaining writable amounts of the data in the storage devices are acquired respectively from total amounts of the data written in the respective storage devices. When it is determined that a difference between the respective acquired remaining writable amounts is less than a predetermined value, the respective storage devices are controlled such that the difference becomes equal to or more than the predetermined value. | 2015-07-09 |
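The control decision above can be sketched simply: if the two devices' remaining writable amounts are too close, direct extra wear at the device with less remaining life so the pair does not reach end-of-life simultaneously. The "extra writes" policy is an assumed realization of the abstract's unspecified control.

```python
def rebalance_wear(remaining_a, remaining_b, min_gap):
    """Decide how to stagger wear between two mirrored flash devices.

    Returns the index (0 or 1) of the device that should absorb extra
    writes, or None when the gap in remaining writable amounts is
    already at least `min_gap`.
    """
    if abs(remaining_a - remaining_b) >= min_gap:
        return None
    return 0 if remaining_a <= remaining_b else 1
```

Staggering wear this way preserves fault tolerance: one device always retains meaningfully more write headroom than its mirror partner.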
20150193319 | METHOD AND A COMPUTING SYSTEM ALLOWING A METHOD OF INJECTING HARDWARE FAULTS INTO AN EXECUTING APPLICATION - A method of injecting hardware faults into execution of an application in a distributed computing system comprising hardware components including linked nodes, the method comprising: loading an enhanced software stack allowing faults to be injected by deactivating or degrading hardware components as a result of fault triggers; running a fault-trigger daemon on each of the nodes; providing the fault trigger for a degradation or deactivation using one of the daemons to trigger a layer of the software stack controlling a hardware component to inject a fault into the hardware component; and continuing execution of the application with the injected fault. | 2015-07-09 |
20150193320 | RACK MANAGEMENT SYSTEM AND RACK MANAGEMENT METHOD THEREOF - A rack management system and a rack management method thereof are disclosed, wherein the rack management system is used for managing a plurality of chassis. The rack management system includes a rack, a resistor cable, a power supply module, a detection module, and a processing module. The rack has a plurality of storage portions for disposing the plurality of chassis respectively. The resistor cable is disposed in the rack so as to correspond to each storage portion. The power supply module supplies a power signal to the resistor cable. When the plurality of chassis are disposed in the plurality of storage portions, the detection module detects the resistor cable to generate a plurality of detection signals. The processing module records the locations of the plurality of chassis based on the plurality of detection signals. | 2015-07-09 |
20150193321 | GROUP WRITE TECHNIQUE FOR A BUS INTERFACE SYSTEM - Embodiments of bus interface systems and methods of operating the same are disclosed. In one embodiment, a bus interface system includes a master bus controller and multiple slave bus controllers that are each coupled to a bus line. The master bus controller is configured to generate a first set of data pulses along the bus line representing a payload segment. Each of the slave bus controllers decodes the first set of data pulses along the bus line representing the payload segment and performs an error check. Each slave bus controller is then configured to generate an acknowledgement pulse along the bus line to indicate that the slave bus controller's particular error check was passed. In this manner, the bus interface system can perform a group write bus function and the master bus controller can determine that the multiple slave bus controllers each received an accurate copy of the payload segment. | 2015-07-09 |
20150193322 | Method and Apparatus for USB Signaling Via Intermediate Transports - According to one aspect of the teachings herein, a system includes first and second modules that respectively anchor host-side and device-side ends of an intermediate transport link that interconnects a USB host to a USB device. The system detects when the host activates an isochronous endpoint in the device for an isochronous IN data transaction, and the second module autonomously generates data requests for the device and forwards the isochronous data output from the device towards the first module. In turn, the first module buffers the data and provides it to the host in response to the host's data requests. However, the first module blocks host requests from propagating to the device and NACKs host requests until forwarded data is available from the second module. Such operation remains transparent to the host and device, while avoiding USB timing violations, even for extended intermediate transport links. | 2015-07-09 |
20150193323 | PROVIDING A USER INTERFACE TO ENABLE SELECTION OF STORAGE SYSTEMS, COMPONENTS WITHIN STORAGE SYSTEMS AND COMMON PERFORMANCE METRICS FOR GENERATING PERFORMANCE METRIC REPORTS ACROSS STORAGE SYSTEMS AND COMPONENTS - A storage system graphical user interface (GUI) renders indication of a plurality of selected storage systems. Selection is received of selected storage systems from the rendered indication of selected storage systems and a determination is made of performance metrics common to the selected storage systems. A performance metric GUI enabling selection of the determined performance metrics common to the selected storage systems is generated. In response to user selection of at least one selected performance metric of the determined performance metrics in the performance metric GUI, determination is made of performance metric values for the at least one of the selected performance metrics for the selected storage systems. A computer renderable visualization providing a visual comparison for each of the at least one selected performance metric of the determined performance metric values is generated for the selected storage systems. | 2015-07-09 |
20150193324 | Template Directories for Cartridges in a Multi-Tenant Platform-as-a-Service (PaaS) System - Implementations for template directories for cartridges in a multi-tenant Platform-as-a-Service (PaaS) system are disclosed. A method of the disclosure includes maintaining, by a node executed by a processing device, a cartridge library comprising cartridge packages that provide functionality for applications executed by the node for a multi-tenant Platform-as-a-Service (PaaS) system, embedding, by the node, a cartridge instance from the cartridge library in a gear of the node, providing, via the cartridge instance, a template directory to an application utilizing the cartridge instance on the node, and executing, by the node, a sample application from the template directory to demonstrate functionality of the cartridge instance to an application developer of the application. | 2015-07-09 |
20150193325 | METHOD AND SYSTEM FOR DETERMINING HARDWARE LIFE EXPECTANCY AND FAILURE PREVENTION - A method for determining and prolonging hardware life expectancy is provided. The method includes collecting data from a hardware component in a first computational device, creating a quantitative value representing the status of the hardware component, determining a lifetime of the hardware component, and providing an alert to the first computational device based on the determined lifetime of the hardware component. A system configured to perform the above method is also provided. A method for managing a plurality of hardware devices according to a hardware life expectancy is also provided; it includes accessing an application programming interface (API) to obtain status information of a hardware component in a computational device. The method further includes balancing a load for a plurality of redundancy units in a redundancy system and determining a backup frequency for a plurality of backup units in a backup system. | 2015-07-09 |
20150193326 | METHOD AND APPARATUS FOR ERROR IDENTIFICATION AND DATA COLLECTION - A system includes a processor configured to receive identification of a vehicle-computing process to track. The processor is also configured to initiate tracking when the identified process is requested and record success data relating to successful uses of the identified process. Further, the processor is configured to record failure data relating to failures resulting from uses of the identified process and report recorded data to a remote server. | 2015-07-09 |
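A minimal sketch of the tracking flow above: once a watched process is identified, each use is recorded as a success or failure, and the tallies form the payload reported to the remote server. The class and field names are illustrative assumptions.

```python
class ProcessTracker:
    """Records success/failure outcomes for an identified process."""

    def __init__(self, process_name):
        self.process_name = process_name
        self.successes = 0
        self.failures = 0

    def record(self, outcome_ok):
        if outcome_ok:
            self.successes += 1
        else:
            self.failures += 1

    def report(self):
        # Payload that would be sent to the remote server.
        return {"process": self.process_name,
                "successes": self.successes,
                "failures": self.failures}
```

Accumulating both counts, rather than failures alone, lets the server compute a failure rate and distinguish a flaky process from a rarely used one.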
20150193327 | CONNECTION CHECKING FOR HARDWIRED MONITORING SYSTEM - A system includes a controller, a hardwired network coupled to the controller, and a plurality of devices coupled to the network via drops between connectors to the network and connectors to the devices, wherein the controller monitors the performance of communications between the controller and the devices to identify devices that are improperly connected to the network. | 2015-07-09 |
20150193328 | REMOTE DEBUG SERVICE IN A CLOUD ENVIRONMENT - A method provides a debug service in a network environment. One or more processors initiate a debug service as a remote shared service in the network environment. The debug service receives a call from a deployed workload process within a virtual machine in the network environment, and gathers required information for a debug session of the workload process, where the required information includes source code used by the workload process. One or more processors attach the debug service to the workload process to carry out the debug session, such that the debug service, working with a debug agent at the workload process, attaches to and debugs a virtual environment that obscures the virtual machine. | 2015-07-09 |