35th week of 2015 patent application highlights, part 45
Patent application number | Title | Published |
20150242195 | INFORMATION PROCESSING APPARATUS, INSTALLATION METHOD, AND PROGRAM - An information processing apparatus is configured to install a driver that has not yet been customized in response to the start of installation of the driver, and to modify the installed driver such that a function setting value settable through the installed driver becomes identical to a function setting value of a customized driver. | 2015-08-27 |
20150242196 | INSTALLATION OF SOFTWARE ONTO A COMPUTER - An indication is received from a user to initiate installation of an operating system onto a storage device of a computer. The storage device is partitioned into an original partition and a new partition. Installation software for the operating system is loaded onto the new partition. The computer is booted into the installation software on the new partition. The operating system is installed onto the original partition via the installation software on the new partition. The computer is then re-booted into the operating system on the original partition and the new partition is removed from the storage device. | 2015-08-27 |
20150242197 | Automatic Installing and Scaling of Application Resources in a Multi-Tenant Platform-as-a-Service (PaaS) System - A mechanism for automatic installing and scaling of application resources in a multi-tenant Platform-as-a-Service (PaaS) environment in a cloud computing system is disclosed. A method includes creating, by a processing device of an Infrastructure-as-a-Service (IaaS) platform, an image package corresponding to a node host on a multi-tenant Platform-as-a-Service (PaaS) system. The image package comprises an image file including a script file having a plurality of software updates and run time configuration files. The image package is stored in a storage memory of the IaaS platform and is accessible by a virtual machine (VM) instance. The method also includes retrieving, from the storage memory, the script file from the image package and causing a boot process of the VM instance to download the script file into the PaaS system. | 2015-08-27 |
20150242198 | SILENT IN-VEHICLE SOFTWARE UPDATES - A computer-implemented method includes receiving, from a cloud server by a vehicle, a manifest indicating network locations of software updates determined according to an interrogator log generated by the vehicle; installing update binaries retrieved from the network locations to an inactive installation of a plurality of storage installations; and setting the inactive installation to be an active installation upon vehicle restart, in place of another of the storage installations currently set as the active installation. | 2015-08-27 |
20150242199 | Deployment Optimization for Activation of Scaled Applications in a Multi-Tenant Platform-as-a-Service (PaaS) System - A mechanism for optimization of deployment of applications for activation in a multi-tenant Platform-as-a-Service (PaaS) system is disclosed. A method of the disclosure includes receiving, by a processing device, a request for deployment of an application source code on a node. The node is provided by the PaaS system. The method also includes implementing, by the processing device, a build, prepare, and distribute functionality to convert the application source code into a build result prepared for distribution as a deployment artifact code. The method further includes implementing, by the processing device, a deployment functionality to activate the deployment artifact for the deployment in the node without incurring downtime. | 2015-08-27 |
20150242200 | RE-CONFIGURATION IN CLOUD COMPUTING ENVIRONMENTS - There is provided a data structure for re-configuration of an application hosted in a cloud computing environment. The data structure comprises a software template for use in a software scheme. The software template describes a flow of actions executable by a cloud management unit in the cloud computing environment for re-configuration of an application hosted by the management unit and executable by the management unit using the software scheme. The software template comprises software instructions comprising a first portion of software instructions non-editable by a programming interface unit of the hosted application. The software template allows for a second portion of software instructions to be added to the software template by the programming interface unit of the hosted application. Methods and devices for re-configuration using the data structure are also provided. | 2015-08-27 |
20150242201 | METHOD FOR UPDATING FIRMWARE AND ELECTRONIC DEVICE THEREOF - An apparatus and a method for updating firmware of an internal device in an electronic device are provided. The method includes executing a control command for a firmware update of the internal device, identifying data for the firmware update of the internal device, and updating the firmware of the internal device by using the data. | 2015-08-27 |
20150242202 | METHOD OF UPDATING FIRMWARE OF MEMORY DEVICE INCLUDING MEMORY AND CONTROLLER - Provided is a method of updating firmware of a memory device including a memory and a controller driving firmware to control the memory. The method includes updating the firmware of the memory device by transmitting at least one normal command and an address corresponding to the at least one normal command to the memory device. The normal command is a command to issue a normal operation, and the normal operation is not an update operation of the firmware. | 2015-08-27 |
20150242203 | DETERMINING CHARACTER SEQUENCE DIGEST - Systems and methods for determining a character sequence digest. An example method may comprise: identifying, within a character sequence, one or more sections, wherein each section comprises a section header and one or more section bodies; performing the following operations for each identified section body: responsive to determining that the section body is not preceded by a section header, prepending a section header to the section body; calculating a section digest by applying a hash function to the section comprising the section header and the section body; and calculating a digest of the character sequence by applying a symmetric summing operation to one or more section digests. | 2015-08-27 |
20150242204 | METHODS AND APPARATUS TO GENERATE A CUSTOMIZED APPLICATION BLUEPRINT - Methods and apparatus to generate a customized application blueprint are disclosed. An example method includes determining a first computing unit within an application definition, identifying a property for the first computing unit, and generating an application blueprint based on the identified property of the computing unit. | 2015-08-27 |
20150242205 | Non-Transitory Computer-Readable Recording Medium Storing Application Development Support Program and Application Development Support System That Automatically Support Platform Version - An application development support system for creating an application by building build resources that contain libraries and manifest files includes a project version confirmation circuit, an updating circuit, and a manifest file creation circuit. The project version confirmation circuit searches all projects existing in a resources holding region to confirm the version in situations where a platform of an operation target for the application is in a plurality of versions. The updating circuit updates version-dependent files by replacing files that differ by version, stored in a development environment, with files appropriate to the project version. The manifest file creation circuit creates manifest files for what is different from an open-source interface, and merges the files with existing manifest files. | 2015-08-27 |
20150242206 | ADDING ON-THE-FLY COMMENTS TO CODE - A system and method of adding on-the-fly comments to source code are described. In some embodiments, audio data comprising a comment for source code in a source code file is received. The comment is stored in association with the source code, and an indication of the comment is caused to be displayed within the source code file to a user on a computing device. In some embodiments, an indication of a location within the source code file with which to associate the comment is received, and the comment can be stored in association with the location within the source code file. The comment can be caused to be displayed at the location within the source code file. In some embodiments, the audio data is converted to a textual representation of the comment. In some embodiments, the comment is translated from an original language to at least one additional language. | 2015-08-27 |
20150242207 | PROGRAM INFORMATION GENERATION SYSTEM, METHOD OF GENERATING PROGRAM INFORMATION, COMPUTER-READABLE PROGRAM PRODUCT, AND PROGRAM INFORMATION DISPLAY SYSTEM - In a system according to any one of the embodiments, program structure information may include interval information. Each interval information may include source code position information indicating a successive region on a source code of a target program and parent-child information for specifying a parent-child relationship with respect to the interval information. The program structure information may include a reference interval without a parent. A processing unit may specify the number of parents existing between each interval information and the reference interval as a depth of each interval information from the reference interval, and create display information by arranging the interval information on a coordinate system defined by a first axis representing depth from the reference interval and a second axis representing the parent-child relationship based on the depth from the reference interval and the parent-child information. | 2015-08-27 |
20150242208 | HINT INSTRUCTION FOR MANAGING TRANSACTIONAL ABORTS IN TRANSACTIONAL MEMORY COMPUTING ENVIRONMENTS - When executed, a transaction-hint instruction specifies a transaction-count-to-completion (CTC) value for a transaction. The CTC value indicates how far a transaction is from completion. The CTC may be a number of instructions to completion or an amount of time to completion. The CTC value is adjusted as the transaction progresses. When a disruptive event associated with inducing transactional aborts, such as an interrupt or a conflicting memory access, is identified while processing the transaction, processing of the disruptive event is deferred if the adjusted CTC value satisfies deferral criteria. If the adjusted CTC value does not satisfy deferral criteria, the transaction is aborted and the disruptive event is processed. | 2015-08-27 |
20150242209 | PROCESSOR EFFICIENCY BY COMBINING WORKING AND ARCHITECTURAL REGISTER FILES - A processor includes an execution pipeline configured to execute instructions for threads, wherein the architectural state of a thread includes a set of register windows for the thread. The processor also includes a physical register file (PRF) containing both speculative and architectural versions of registers for each thread. When an instruction that writes to a destination register enters a rename stage, the rename stage allocates an entry for the destination register in the PRF. When an instruction that has written to a speculative version of a destination register enters a commit stage, the commit stage converts the speculative version into an architectural version. It also deallocates an entry for a previous version of the destination register from the PRF. When a register-window-restore instruction that deallocates a register window enters the commit stage, the commit stage deallocates local and output registers for the deallocated register window from the PRF. | 2015-08-27 |
20150242210 | Monitoring Vector Lane Duty Cycle For Dynamic Optimization - In an embodiment, a processor includes a vector execution unit having a plurality of lanes to execute operations on vector operands, a performance monitor coupled to the vector execution unit to maintain information regarding an activity level of the lanes, and a control logic coupled to the performance monitor to control power consumption of the vector execution unit based at least in part on the activity level of at least some of the lanes. Other embodiments are described and claimed. | 2015-08-27 |
20150242211 | PROGRAMMABLE CONTROLLER - A programmable controller for executing a sequence program comprises a processor for reading and executing an instruction code from an external memory, an instruction cache memory for storing a branch destination program code of a branch instruction included in the sequence program, and a cache controller for entering the branch destination program code in the instruction cache memory according to data on priority, the instruction code of the branch instruction including the data on priority of an entry into the instruction cache memory. | 2015-08-27 |
20150242212 | MODELESS INSTRUCTION EXECUTION WITH 64/32-BIT ADDRESSING - In an aspect, a processor supports modeless execution of 64 bit and 32 bit instructions. A Load/Store Unit (LSU) decodes an instruction that lacks explicit opcode data indicating whether the instruction is to operate in a 32 or 64 bit memory address space. The LSU treats the instruction either as a 32 or 64 bit instruction in dependence on values in an upper 32 bits of one or more 64 bit operands supplied to create an effective address in memory. In an example, a 4 GB space addressed by 32-bit memory space is divided between upper and lower portions of a 64-bit address space, such that a 32-bit instruction is differentiated from a 64-bit instruction in dependence on whether an upper 32 bits of one or more operands is either all binary 1 or all binary 0. Such a processor may support decoding of different arithmetic instructions for 32-bit and 64-bit operations. | 2015-08-27 |
20150242213 | SYSTEM AND METHOD FOR MODIFICATION OF CODED INSTRUCTIONS IN READ-ONLY MEMORY USING ONE-TIME PROGRAMMABLE MEMORY - Various embodiments of methods and systems for flexible read only memory (“ROM”) storage of coded instructions in a portable computing device (“PCD”) are disclosed. Because certain instructions and/or data associated with a primary boot loader (“PBL”) may be defective or in need of modification after manufacture of a mask ROM component, embodiments of flexible ROM storage (“FRS”) systems and methods use a closely coupled one-time programmable (“OTP”) memory component to store modified instructions and/or data. Advantageously, because the OTP memory component may be manufactured “blank” and programmed at a later time, modifications to code and/or data stored in an unchangeable mask ROM may be accomplished via pointers in fuses of a security controller that branch the request to the OTP and bypass the mask ROM. | 2015-08-27 |
20150242214 | DYNAMIC PREDICTION OF HARDWARE TRANSACTION RESOURCE REQUIREMENTS - A transactional memory system dynamically predicts the resource requirements of hardware transactions. A processor of the transactional memory system predicts resource requirements of a first hardware transaction to be executed based on any one of a resource hint and a previous execution of a prior hardware transaction. The processor allocates resources for the first hardware transaction based on the predicted resource requirements. The processor executes the first hardware transaction. The processor saves resource usage information of the first hardware transaction for future prediction. | 2015-08-27 |
20150242215 | PREDICTING THE LENGTH OF A TRANSACTION - In a multi-processor transaction execution environment a transaction is executed a plurality of times. Based on the executions, a duration is predicted for executing the transaction. Based on the predicted duration, a threshold is determined. Pending aborts of the transaction due to memory conflicts are suppressed based on the transaction exceeding the determined threshold. | 2015-08-27 |
20150242216 | COMMITTING HARDWARE TRANSACTIONS THAT ARE ABOUT TO RUN OUT OF RESOURCE - A transactional memory system determines whether a hardware transaction can be salvaged. A processor of the transactional memory system begins execution of a transaction in a transactional memory environment. Based on detection that an amount of available resource for transactional execution is below a predetermined threshold level, the processor determines whether the transaction can be salvaged. Based on determining that the transaction cannot be salvaged, the processor aborts the transaction. Based on determining the transaction can be salvaged, the processor performs a salvage operation, wherein the salvage operation comprises one or more of: determining that the transaction can be brought to a stable state without exceeding the amount of available resource for transactional execution, and bringing the transaction to a stable state; and determining that a resource can be made available, and making the resource available. | 2015-08-27 |
20150242217 | SYSTEM AND METHOD TO QUANTIFY DIGITAL DATA SHARING IN A MULTI-THREADED EXECUTION - A method to quantify a plurality of digital data sharing in a multi-threaded execution includes the steps of: providing at least one processor; providing a computer readable non-transitory storage medium including a computer readable multi-threaded executable code and a computer readable executable code to calculate a plurality of shared footprint values and an average shared footprint value; running the multi-threaded executable code on the at least one computer processor; running the computer readable executable code configured to calculate a plurality of shared footprint values and an average shared footprint value; calculating a plurality of shared footprint values by use of a linear-time process for a corresponding plurality of executable windows in time; and calculating and saving an average shared footprint value based on the plurality of shared footprint values to quantify by a metric the data sharing by the multi-threaded execution. A system to perform the method is also described. | 2015-08-27 |
20150242218 | DEFERRAL INSTRUCTION FOR MANAGING TRANSACTIONAL ABORTS IN TRANSACTIONAL MEMORY COMPUTING ENVIRONMENTS - A deferral instruction associated with a transaction is executed in a transaction execution computing environment with transactional memory. Based on executing the deferral instruction, a processor sets a defer-state indicating that pending disruptive events such as interrupts or conflicting memory accesses are to be deferred. A pending disruptive event is deferred based on the set defer-state, and the transaction is completed based on the disruptive event being deferred. The progress of the transaction may be monitored during a deferral period. The length of such deferral period may be specified by the deferral instruction. Whether the deferral period has expired may be determined based on the monitored progress of the transaction. If the deferral period has expired, the transaction may be aborted and the disruptive event may be processed. | 2015-08-27 |
20150242219 | COMPUTER SYSTEM AND CONTROL METHOD - A control method to re-initiate a computer system is provided. The computer system includes a control unit and a storage unit. The storage unit includes a first storage block and a second storage block to store a first initiating code and a second initiating code. The first storage block is different from the second storage block. The control method includes determining by the control unit whether the computer system can be initiated normally. When the computer system can be initiated normally, it enters into a normal operation mode. When the computer system cannot be initiated normally, it enters into an emergency back-up mode, and then the control unit reads the second initiating code on the second storage block to re-initiate the computer system. | 2015-08-27 |
20150242220 | MASSIVE VIRTUAL DESKTOP PROVIDING METHOD AND SYSTEM THEREOF - Exemplary embodiments of the present invention relate to a method and system for providing massive virtual desktops. A system for providing massive virtual desktops according to an embodiment of the present invention comprises an uppermost layer configured to receive a virtual desktop providing request from a client, search an adjacent lower layer involved in a virtual desktop service for the client, and transmit the virtual desktop providing request to the searched adjacent lower layer; and a lowest layer configured to receive the virtual desktop providing request transmitted from an adjacent upper layer, search a virtual desktop to perform a service by analyzing the virtual desktop providing request, and transmit the searched virtual desktop to the client. According to embodiments of the present invention, a virtual desktop can be provided to massive users. | 2015-08-27 |
20150242221 | USING LINKER SCRIPTS FOR LOADING SYSTEM CONFIGURATION TABLES - Systems and methods for using linker scripts for loading system configuration tables. An example method may comprise: packaging, by a host computer system, a first system configuration table and a second system configuration table into one or more memory image files; providing a script comprising a first instruction to load the memory image files into a memory of a virtual machine being executed by the host computer system, the script further comprising a second instruction to resolve, in view of a base address, a reference by the first system configuration table to the second system configuration table; and providing the memory image files and the script to the virtual machine. | 2015-08-27 |
20150242222 | Method and client for using an embedded ActiveX plug-in in a browser - The invention discloses a method and client for using an embedded ActiveX plug-in in a browser. The method comprises: detecting that the browser is to load an ActiveX plug-in; judging whether the ActiveX plug-in has already been installed in a computer system where the browser is currently located; if it is determined that the ActiveX plug-in has already been installed in the computer system, intercepting the loading information about the ActiveX plug-in and loading the ActiveX plug-in embedded in the browser; and if it is determined that the ActiveX plug-in has not been installed in the computer system, generating a specific registry key value related to the embedded ActiveX plug-in, and loading the ActiveX plug-in embedded in the browser according to the specific registry key value. | 2015-08-27 |
20150242223 | In-Process Trapping for Service Substitution in Hosted Applications Executing on Mobile Devices with Multi-Operating System Environment - The invention provides in some aspects a computing device that includes a central processing unit that executes a native operating system including one or more native runtime environments within which native software applications are executing, where each such native software application has instructions for execution under the native operating system. One or more hosted runtime environments execute within the one or more native runtime environments, each of which hosted runtime environments executes hosted software applications that have instructions for execution under a hosted operating system that differs from the native operating system. A first hosted software application executing as a first process of the hosted runtime environments includes an instruction that references a member (hereinafter, “referenced member”) of an object defined by an object-oriented programming (OOP) class (“referenced class”). The process executes that instruction utilizing data and/or code (hereinafter, “substitute member”) other than that specified by the referenced class as the referenced member. As used here, a “member” of an object is any of a method member and a data member. | 2015-08-27 |
20150242224 | DISK RESIZE OF A VIRTUAL MACHINE - An engine in a virtualization system may determine that a disk size of a disk represented by a virtual machine disk image is to be changed. In response, the engine determines whether a host is using the virtual machine disk image to run a virtual machine and also determines a file format of the virtual machine disk image. Based on the determination, the engine sends a request to change the disk size to a requested size to the host running the virtual machine or to a storage pool manager. | 2015-08-27 |
20150242225 | EXECUTION OF A SCRIPT BASED ON PROPERTIES OF A VIRTUAL DEVICE ASSOCIATED WITH A VIRTUAL MACHINE - An event associated with a virtual machine may be identified. Furthermore, a script associated with the event may be identified. A property of a virtual device that is assigned to the virtual machine may be received. A determination may be made to execute the script or not to execute the script for the virtual machine based on the property of the virtual device that is assigned to the virtual machine. | 2015-08-27 |
20150242226 | METHODS AND SYSTEMS FOR CALCULATING COSTS OF VIRTUAL PROCESSING UNITS - This disclosure presents computational systems and methods for calculating the cost of vCPUs from the cost of CPU computing cycles. In one aspect, a total number of computing cycles used by one or more virtual machines (“VMs”) is calculated based on utilization measurements of a multi-core processor for each VM over a period of time. The method also calculates a total number of virtual CPUs (“vCPUs”) used by the one or more VMs based on vCPU counts for each VM over the period of time. A cost per vCPU is calculated based on the total number of computing cycles, the total number of vCPUs, and cost per computing cycle. The cost per vCPU is stored in a data-storage device. The cost per vCPU can be used to calculate the cost of a VM that uses one or more of the vCPUs. | 2015-08-27 |
20150242227 | Dynamic Information Virtualization - A system and method for providing dynamic information virtualization (DIV) is disclosed. According to one embodiment, a device includes a dynamic optimization manager (DOM), a process and memory manager (PMM), a memory, and a host device driver. The device starts virtual functions after booting to allow a virtual machine (VM) running a guest operating system to identify the virtual functions and load virtual drivers of the virtual functions. The PMM allocates a unified cache from the memory to facilitate coherent access to information from storage and network resources by the VM. The host device driver enables a guest process in the VM to access the information stored in the unified cache in a secure and isolated manner. | 2015-08-27 |
20150242228 | Hypervisor-Agnostic Method of Configuring a Virtual Machine - In one embodiment, there is a method for configuring a virtual machine where there are two storage mechanisms available to the virtual machine: a first storage containing virtual machine operating information, and a second storage including virtual machine configuration information. The configuration information in the second storage is used to configure the virtual machine, including changing the information in the operating storage. The configuration information can pertain to the hypervisor, any logical container within the hypervisor, and any operating environment within one of the logical containers. In a further embodiment, the configuration information from the second storage can be saved and provided to another virtual machine, and used to configure the second virtual machine in a similar fashion. Each virtual machine can have an independent copy of the second storage, or the storage can be mounted in the first machine, unmounted, and then mounted in the second machine. | 2015-08-27 |
20150242229 | IDLE PROCESSOR MANAGEMENT BY GUEST IN VIRTUALIZED SYSTEMS - A system and method for idle processor management in virtualized systems are disclosed. In accordance with one embodiment, a guest operating system (OS) of a virtual machine estimates an idle time for a virtual central processing unit (CPU) of the virtual machine, where the virtual machine is executed by a CPU of a host computer system, and where the virtual CPU is mapped to the CPU. The guest OS also estimates a host latency time for the host computer system, where the host latency time is based on at least one of: a first power state of the CPU, a context switch associated with execution of the virtual machine by the CPU, or an idle state of a hypervisor executed by the CPU. When the idle time for the virtual CPU divided by a performance multiplier exceeds the host latency time, the virtual CPU is caused to halt. | 2015-08-27 |
20150242230 | HYPERVISOR CAPABILITY ACCESS PROVISION - An apparatus receives virtualization manager indication of a capability selected from a virtualization manager capability subset. The apparatus receives non-virtualization manager indication of a selected capability not in said subset. The apparatus passes virtualization manager indication of a result of the capability selected from the subset. The apparatus passes non-virtualization manager indication of a result of the capability not in said subset. | 2015-08-27 |
20150242231 | DATA SWAP IN VIRTUAL MACHINE ENVIRONMENT - The present invention provides a method and apparatus for data swap in a virtual machine environment. The present invention provides a method for data swap in a virtual machine environment, including: in response to a swap request from a virtual machine, looking up storage space associated with the swap request; and allocating to the virtual machine free physical storage space, in a host, which matches the storage space, so that the free physical storage space logically becomes available storage space to the virtual machine; the virtual machine is running on the host, and the storage space is physical storage space in the host. | 2015-08-27 |
20150242232 | RESUMING A PAUSED VIRTUAL MACHINE - A host in a virtualization system pings one or more storage domains. When the host determines that a storage domain is inaccessible and later determines that the storage domain is once again accessible, the host may determine a set of virtual machines associated with the storage domain that are paused. The host may then resume at least one of those virtual machines. | 2015-08-27 |
20150242233 | SAFETY HYPERVISOR FUNCTION - The disclosure relates to systems and methods for defining a processor safety privilege level for controlling a distributed memory access protection system. More specifically, a safety hypervisor function for accessing a bus in a computer processing system includes a module, such as a Computer Processing Unit (CPU) or a Direct Memory Access (DMA) controller, for accessing a system memory and a memory unit for storing a safety code, such as a Processor Status Word (PSW) or a configuration register (DMA (REG)). The module allocates the safety code to a processing transaction and the safety code is visible upon access of the bus by the module. | 2015-08-27 |
20150242234 | Realtime Optimization Of Compute Infrastructure In A Virtualized Environment - A system and method is provided to dynamically optimize the topography of a compute cluster in real time based on the runtime configuration, specified as metadata, of jobs in a queuing or scheduling environment. The system provisions or terminates multiple types of virtualized resources based on the profile of all applications of the jobs in a current queue based on their aggregate runtime configuration specified as requirements and rank expressions within associated metadata. The system will continually request and terminate compute resources as jobs enter and exit the queue, keeping the cluster as minimal as possible while still being optimized to complete all jobs in the queue, optimized for cost, runtime, or another metric. End users can specify the runtime requirements of jobs, thereby preventing the user from having to know about the physical profile of the compute cluster to specifically architect their jobs. | 2015-08-27 |
20150242235 | FABRIC DISTRIBUTED RESOURCE SCHEDULING - Embodiments perform centralized input/output (I/O) path selection for hosts accessing storage devices in distributed resource sharing environments. The path selection accommodates loads along the paths through the fabric and at the storage devices. Topology changes may also be identified and automatically initiated. Some embodiments contemplate the hosts executing a plurality of virtual machines (VMs) accessing logical unit numbers (LUNs) in a storage area network (SAN). | 2015-08-27 |
20150242236 | OPERATION VERIFICATION DEVICE FOR VIRTUAL APPARATUS, AND OPERATION VERIFICATION SYSTEM AND PROGRAM FOR VIRTUAL APPARATUS - An operation verification device for a virtual apparatus which confirms a state of operation of the virtual apparatus, comprising: a configuration information transmission unit, which transmits configuration information on the virtual apparatus to an operation confirmation device that performs a communication for confirming the state of operation of the virtual apparatus; an operation confirmation instruction unit, which instructs the operation confirmation device to perform a communication for confirming the state of operation on the basis of the configuration information while a connection with the operation confirmation device is disconnected, and which, after issuing the instruction, disconnects the connection with the operation confirmation device; and a confirmation result collection unit, which restarts a connection with the operation confirmation device while the connection between the operation confirmation device and the virtual apparatus is disconnected, and receives a result of the confirmation of the state of operation. | 2015-08-27 |
20150242237 | RELEASE LIFECYCLE MANAGEMENT SYSTEM FOR MULTI-NODE APPLICATION - A deployment system provides the ability to deploy a multi-node distributed application, such as a cloud computing platform application that has a plurality of interconnected nodes performing specialized jobs. The deployment system may update a currently running cloud computing platform application according to a deployment manifest and a versioned release bundle that includes jobs and application packages. The deployment system determines changes to the currently running cloud computing platform application and distributes changes to each job to deployment agents executing on VMs. The deployment agents apply the updated jobs to their respective VMs (e.g., launching applications), thereby deploying an updated version of cloud computing platform application. | 2015-08-27 |
20150242238 | USING THE TRANSACTION-BEGIN INSTRUCTION TO MANAGE TRANSACTIONAL ABORTS IN TRANSACTIONAL MEMORY COMPUTING ENVIRONMENTS - When executed, a transaction-begin instruction specifies an initial value for a transaction-count-to-completion (CTC) value for a transaction. The initial value indicates a predicted duration of the transaction. The CTC value may be a number of instructions to completion or an amount of time to completion. The CTC value is adjusted as the transaction progresses. The adjusted CTC value indicates how far the transaction is from completion. When a disruptive event associated with inducing transactional aborts, such as an interrupt or a conflicting memory access, is identified while processing the transaction, processing of the disruptive event is deferred if the adjusted CTC value satisfies deferral criteria. If the adjusted CTC value does not satisfy deferral criteria, the transaction is aborted and the disruptive event is processed. | 2015-08-27 |
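A minimal sketch of the count-to-completion (CTC) deferral decision described above, assuming an instruction count as the CTC metric. The class, method, and threshold names are illustrative, not from the patent:

```python
class Transaction:
    def __init__(self, predicted_instructions):
        # Transaction-begin supplies the initial CTC value: the predicted
        # duration of the transaction, here in instructions to completion.
        self.ctc = predicted_instructions

    def retire_instruction(self):
        # The CTC value is adjusted as the transaction progresses.
        self.ctc = max(0, self.ctc - 1)

    def on_disruptive_event(self, deferral_threshold):
        """Defer an interrupt or conflicting access when the transaction is
        close to completion; otherwise abort and process the event."""
        return "defer" if self.ctc <= deferral_threshold else "abort"
```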
20150242239 | SYSTEM AND METHOD FOR INTELLIGENT TIMER SERVICES - A method is provided for efficiently scheduling timer events within an operating system by allocating a plurality of timers, each of which has an expiry time, to a set of available timer slots. The method defines a timer spread value that denotes the allowed variance of the expiry times of each of the timers, calculates a set of available timer slots for each of the timers based on the timer spread value, and adjusts the expiry times of the timers so as to insert and evenly spread the timers across the set of available timer slots. In one implementation, the set of available timer slots is located in a timer wheel existing within the operating system, and the timer wheel uses a plurality of timer vectors arranged into successively increasing levels, beginning with level zero. | 2015-08-27 |
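The slot-assignment idea above (each timer may expire anywhere within its spread value, and timers are inserted so they are evenly spread across the available slots) can be sketched as a greedy least-loaded placement. `assign_slots` and its integer tick slots are assumptions for illustration, not the patent's timer-wheel implementation:

```python
from collections import Counter

def assign_slots(expiry_times, spread):
    """Place each timer in the least-loaded slot within +/- spread ticks
    of its requested expiry, spreading timers evenly across slots."""
    load = Counter()       # how many timers each slot already holds
    assignment = {}
    for i, expiry in enumerate(expiry_times):
        # The set of available slots allowed by the timer spread value.
        candidates = range(max(0, expiry - spread), expiry + spread + 1)
        slot = min(candidates, key=lambda s: (load[s], s))
        load[slot] += 1
        assignment[i] = slot
    return assignment
```

Three timers all requesting tick 10 with a spread of 1 end up in slots 9, 10, and 11 rather than colliding.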
20150242240 | BALANCED PROCESSING USING HETEROGENEOUS CORES - Technologies are generally described for a multi-processor core and a method for transferring threads in a multi-processor core. In an example, a multi-core processor may include a first group including a first core and a second core. A first sum of the operating frequencies of the cores in the first group corresponds to a first total operating frequency. The multi-core processor may further include a second group including a third core. A second sum of the operating frequencies of the cores in the second group may correspond to a second total operating frequency that is substantially the same as the first total operating frequency. A hardware controller may be configured in communication with the first, second and third core. A memory may be configured in communication with the hardware controller and may include an indication of at least the first group and the second group. | 2015-08-27 |
20150242241 | METHOD FOR MANAGING THE THREADS OF EXECUTION IN A COMPUTER UNIT, AND COMPUTER UNIT CONFIGURED TO IMPLEMENT SAID METHOD - A method of managing execution threads launched by processes being executed in a computer unit having at least one calculation core connected to a shared memory. The method includes the steps of: | 2015-08-27 |
20150242242 | ROUTING JOB SUBMISSIONS BETWEEN DISPARATE COMPUTE ENVIRONMENTS - A system and method are provided for directing a workload between distributed computing environments. Performance and use data from each of a plurality of computer clusters is monitored on a periodic or continuous basis. The plurality of computer clusters can include a first subset in a first region and a second subset in a second region. Each region has known performance characteristics, a zone of performance, and a zone of reliability, which are used in distributing a workload or job. A job is received at the system, and the system determines a routing for the job to a distributed computing environment, wherein the routing is responsive to the obtained performance and use data and to the region encompassing the given computer cluster. | 2015-08-27 |
20150242243 | PROCESSOR POWER OPTIMIZATION WITH RESPONSE TIME ASSURANCE - A method for managing processor power optimization is provided. The method may include receiving a plurality of tasks for processing by a processor environment. The method may also include allocating a portion of a compute resource corresponding to the processor environment to each of the received plurality of tasks, the allocating of the portion being based on both an execution time and a response time associated with each of the received plurality of tasks. | 2015-08-27 |
20150242244 | System and Method For a Workload Management and Scheduling Module to Manage Access to a Compute Environment According to Local and Non-Local User Identity Information - A system, method and computer-readable media for managing a compute environment are disclosed. The method includes importing identity information from an identity manager into a module that performs workload management and scheduling for a compute environment and, unless a conflict exists, modifying the behavior of the workload management and scheduling module to incorporate the imported identity information such that access to and use of the compute environment occurs according to the imported identity information. The compute environment may be a cluster or a grid wherein multiple compute environments communicate with multiple identity managers. | 2015-08-27 |
20150242245 | METHOD FOR MANAGING WORKLOADS IN A MULTIPROCESSING COMPUTER SYSTEM - A method for managing workloads in a multiprocessing computer system is disclosed. Initially, a set of affinity domains is defined for a group of processor cores, wherein each of the affinity domains includes a subset of the processor cores. An affinity measure is defined to indicate that a given workload should be moved to a smaller affinity domain having fewer processor cores. A performance measure is defined to indicate the performance of a given workload. A given workload is determined based on the affinity measure and the performance measure. In response to a determination that a given workload should be moved to a smaller affinity domain based on the affinity measure, the given workload is moved to a smaller affinity domain. In response to a determination that there is a reduction in performance based on the performance measure, the given workload is moved to a larger affinity domain. | 2015-08-27 |
20150242246 | ADAPTIVE PROCESS FOR DATA SHARING WITH SELECTION OF LOCK ELISION AND LOCKING - In a Hardware Lock Elision (HLE) Environment, predictively determining whether a HLE transaction should actually acquire a lock and execute non-transactionally, is provided. Included is, based on encountering an HLE lock-acquire instruction, determining, based on an HLE predictor, whether to elide the lock and proceed as an HLE transaction or to acquire the lock and proceed as a non-transaction; based on the HLE predictor predicting to elide, setting the address of the lock as a read-set of the transaction, and suppressing any write by the lock-acquire instruction to the lock and proceeding in HLE transactional execution mode until an xrelease instruction is encountered wherein the xrelease instruction releases the lock or the HLE transaction encounters a transactional conflict; and based on the HLE predictor predicting not-to-elide, treating the HLE lock-acquire instruction as a non-HLE lock-acquire instruction, and proceeding in non-transactional mode. | 2015-08-27 |
20150242247 | SHARED RESOURCE UPDATING - Method and system are provided for updating data at a shared resource in a concurrent user environment. The method includes a first client application carrying out the steps of: pulling data from a shared resource for update, wherein the data includes a timestamp of the last update; requesting a lock on the data that only allows updates from the first client for a set period of time; and working on the data whether or not a lock is in place for the first client application. When the first client application applies to update the data, a check is carried out to compare the timestamp of the data updated by the first client application with the current timestamp of the data; if these do not match, the update fails. | 2015-08-27 |
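The timestamp-compare rule above is essentially optimistic concurrency control. A minimal sketch, with a version counter standing in for the last-update timestamp; all names here are hypothetical, not from the patent:

```python
class SharedResource:
    """Optimistic update: a client records the version it pulled, and the
    update succeeds only if that version still matches at write time."""

    def __init__(self, data):
        self.data = data
        self.version = 0  # stands in for the timestamp of the last update

    def pull(self):
        """Pull the data together with its current version."""
        return self.data, self.version

    def update(self, new_data, pulled_version):
        """Apply the update only if nobody else updated in between."""
        if pulled_version != self.version:
            return False  # timestamps do not match: the update fails
        self.data = new_data
        self.version += 1
        return True
```

A second writer using a stale version fails cleanly and must pull again, which is the behavior the abstract describes.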
20150242248 | ALERTING HARDWARE TRANSACTIONS THAT ARE ABOUT TO RUN OUT OF SPACE - A transactional memory system determines whether to pass control of a transaction to an about-to-run-out-of-resource handler. A processor of the transactional memory system determines information about an about-to-run-out-of-resource handler for transaction execution of a code region of a hardware transaction. The processor dynamically monitors an amount of available resource for the currently running code region of the hardware transaction. The processor detects that the amount of available resource for transactional execution of the hardware transaction is below a predetermined threshold level. The processor, based on the detecting, saves speculative state information of the hardware transaction, and executes the about-to-run-out-of-resource handler, the about-to-run-out-of-resource handler determining whether the hardware transaction is to be aborted or salvaged. | 2015-08-27 |
20150242249 | SALVAGING LOCK ELISION TRANSACTIONS - A transactional memory system salvages hardware lock elision (HLE) transactions. A computer system of the transactional memory system records information about locks elided to begin HLE transactional execution of first and second transactional code regions. The computer system detects a pending cache line conflict of a cache line, and based on the detecting stops execution of the first code region of the first transaction and the second code region of the second transaction. The computer system determines that the first lock and the second lock are different locks and uses the recorded information about locks elided to acquire the first lock of the first transaction and the second lock of the second transaction. The computer system commits speculative state of the first transaction and the second transaction and the computer system continues execution of the first code region and the second code region non-transactionally. | 2015-08-27 |
20150242250 | MANAGING SPECULATIVE MEMORY ACCESS REQUESTS IN THE PRESENCE OF TRANSACTIONAL STORAGE ACCESSES - In at least some embodiments, a cache memory of a data processing system receives a speculative memory access request including a target address of data speculatively requested for a processor core. In response to receipt of the speculative memory access request, transactional memory logic determines whether or not the target address of the speculative memory access request hits a store footprint of a memory transaction. In response to determining that the target address of the speculative memory access request hits a store footprint of a memory transaction, the transactional memory logic causes the cache memory to reject servicing the speculative memory access request. | 2015-08-27 |
20150242251 | MANAGING SPECULATIVE MEMORY ACCESS REQUESTS IN THE PRESENCE OF TRANSACTIONAL STORAGE ACCESSES - In at least some embodiments, a cache memory of a data processing system receives a speculative memory access request including a target address of data speculatively requested for a processor core. In response to receipt of the speculative memory access request, transactional memory logic determines whether or not the target address of the speculative memory access request hits a store footprint of a memory transaction. In response to determining that the target address of the speculative memory access request hits a store footprint of a memory transaction, the transactional memory logic causes the cache memory to reject servicing the speculative memory access request. | 2015-08-27 |
20150242252 | OPERATION UNIT-EQUIPPED DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT - An operation unit-equipped device includes a generator that generates an application module including a processing executor that executes a processing, a display part that performs a display appropriate to the processing executor, and a controller that controls the processing executor and the display part; and a discard requester that requests a discard of the previously generated processing executor and display part when at least one new processing executor and one new display part are generated after them. If there is an active processing in the previously generated processing executor when the request for discard is made, the controller discards the previously generated display part, maintains the previously generated processing executor until the active processing is completed, and discards that processing executor once the active processing is completed. | 2015-08-27 |
20150242253 | EVENT PROCESSING CONTROL DEVICE, NODE DEVICE, EVENT PROCESSING SYSTEM, AND EVENT PROCESSING CONTROL METHOD - In an event processing system, a service level required for event processing is satisfied. A node information acquisition unit | 2015-08-27 |
20150242254 | METHOD AND APPARATUS FOR PROCESSING MESSAGE BETWEEN PROCESSORS - A message processing apparatus according to an embodiment of the present invention processes messages between processors. It solves a problem that occurs when a message is processed using interrupt or polling processing, by handling prioritized messages transmitted between sending and receiving processors that use a shared memory with a polling thread and a kernel module, thereby providing a priority-based message processing method without placing a load on the system. | 2015-08-27 |
20150242255 | COMPUTER ARCHITECTURE AND PROCESS FOR APPLICATION PROCESSING ENGINE - An application processing engine computer system is configured to process an application for at least one of a product and service using a plurality of coordinated, configurable services. The application processing engine includes an application data management service, an application process flow management service, a decisioning service, an application processing host service, an application activity monitoring service, a queue management service and/or a system maintenance service. Various embodiments are described, including a computer implemented method for processing an application using an application processing engine component and/or module. | 2015-08-27 |
20150242256 | TECHNIQUES TO FACILITATE COMMUNICATION ACROSS DOMAINS - Techniques to facilitate communication across domains are described. An apparatus may comprise a logic circuit, and a user interface component operative on the logic circuit to enable cross-origin communication between applications in different domains by processing a message directed towards an application to verify the origin of the message. If the origin is trusted, the user interface component transforms the message into a request indicating a trusted domain and communicates the request to the application. The application performs various tasks in accordance with the request and returns a response to the user interface component. Other embodiments are described and claimed. | 2015-08-27 |
20150242257 | TERMINAL DEVICE AND DATA PASSING METHOD - A terminal device passes data between programs. The terminal device includes a data passing unit configured to receive the data from a first program and pass the received data to a second program selected from among one or more second programs associated with a type of the received data; the first program configured to convert the type of the data, which is a passing target, from an original type into a unique type, and then pass the data to the data passing unit; and the second program configured to convert the type of the data received from the data passing unit into the original type from the unique type, the second program being associated with the unique type. | 2015-08-27 |
20150242258 | ROUTING MESSAGES BETWEEN APPLICATIONS - A system and method for enabling the interchange of enterprise data through an open platform is disclosed. This open platform can be based on a standardized interface that enables parties to easily connect to and use the network. Services operating as senders, recipients, and in-transit parties can therefore leverage a framework that overlays a public network. | 2015-08-27 |
20150242259 | MESSAGE COMMUNICATION OF SENSOR AND OTHER DATA - A service may be provided that reads sensors, and that communicates information based on the sensor readings to applications. In one example, an operating system provides a sensor interface that allows programs that run on a machine to read the values of sensors (such as an accelerometer, light meter, etc.). A service may use the interface to read the value of sensors, and may receive subscriptions to sensor values from other programs. The service may then generate messages that contain the sensor value, and may provide these messages to programs that have subscribed to the messages. The messages may contain raw sensor data. Or, the messages may contain information that is derived from the sensor data and/or from other data. | 2015-08-27 |
20150242260 | PREDICTING DEGRADATION OF A COMMUNICATION CHANNEL BELOW A THRESHOLD BASED ON DATA TRANSMISSION ERRORS - Applicants have discovered that error detection techniques, such as Forward Error Correction techniques, may be used to predict the degradation below a certain threshold of an ability to accurately convey information on a communication channel, for example, to predict a failure of the communication channel. In response, transmission and/or reception of information on the channel may be adapted, for example, to prevent the degradation below the threshold, e.g., prevent channel failure. Predicting the degradation may be based, at least in part, on data transmission error information corresponding to one or more blocks of information received on the channel and may include determining an error rate pattern over time. Based on these determinations, the degradation below the threshold may be predicted and the transmission and/or reception adapted. Adapting may include initiating use of a different error encoding scheme and/or using an additional communication channel to convey information. | 2015-08-27 |
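One plausible reading of the prediction step above (tracking corrected-error counts per received block over a window of time and flagging the channel when the error rate trends above a threshold) might look like the following sketch. The class and parameter names are illustrative assumptions, not the patent's method:

```python
from collections import deque

class ChannelMonitor:
    """Track per-block FEC correction counts in a sliding window and flag
    the channel when the average error rate exceeds a failure threshold."""

    def __init__(self, window, threshold):
        self.errors = deque(maxlen=window)  # most recent blocks only
        self.threshold = threshold

    def record_block(self, corrected_errors):
        """Record how many errors FEC corrected in a received block."""
        self.errors.append(corrected_errors)

    def predict_degradation(self):
        """True when the windowed error rate suggests impending failure,
        so transmission/reception can be adapted before the channel fails."""
        if not self.errors:
            return False
        rate = sum(self.errors) / len(self.errors)
        return rate > self.threshold
```

On a positive prediction, a caller might switch to a stronger error-encoding scheme or bring up an additional channel, as the abstract suggests.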
20150242261 | COMMUNICATION DEVICE, ROUTER HAVING COMMUNICATION DEVICE, BUS SYSTEM, AND CIRCUIT BOARD OF SEMICONDUCTOR CIRCUIT HAVING BUS SYSTEM - A communication device includes: a receiving terminal; a storage device which stores a rule in which a condition regarding a bus system operation environment and an error tolerance scheme are associated with each other, and information regarding a path length; an error processor which determines the error tolerance scheme by utilizing the condition regarding the bus system operation environment and the rule so as to generate error tolerance information corresponding to the received data according to the determined error tolerance scheme; and a sending terminal for sending at least one packet including the error tolerance information and the data to the bus. The operation environment-related condition is a condition for granting an error tolerance for a transmission path of which a bus path length to another communication device, which is a destination of the data, is greater than a predetermined value. | 2015-08-27 |
20150242262 | SERVICE METRIC ANALYSIS FROM STRUCTURED LOGGING SCHEMA OF USAGE DATA - Technologies are generally described to provide a passive monitoring system employing a logging schema to track usage data in order to analyze performance and reliability of a service. The logging schema may be configured to track user requests as each request is received and processed at individual subsystems of the collaborative service. A logging entry may be created at a data store of the service, where the logging entry includes a subsystem name, an operation performed by the subsystem to fulfill the request, and start and end times of the operation. The logging schema may also detect errors fulfilling the requests, and may classify detected errors into a bucket, where each bucket denotes a failure scenario. Reliability of the service may be calculated based on analysis of the buckets to compute error rates. Reports may be generated to enable continuous monitoring of a performance and reliability of the system. | 2015-08-27 |
20150242263 | DATAFLOW ALERTS FOR AN INFORMATION MANAGEMENT SYSTEM - Disclosed herein are systems and methods for managing information management operations. The system may be configured to employ a work flow queue to reduce network traffic and manage server processing resources. The system may also be configured to forecast or estimate information management operations based on estimations of throughput between computing devices scheduled to execute one or more jobs. The system may also be configured to escalate or automatically reassign notification of system alerts based on the availability of system alert recipients. Various other embodiments are also disclosed herein. | 2015-08-27 |
20150242264 | AUTOMATIC ALERT ESCALATION FOR AN INFORMATION MANAGEMENT SYSTEM - Disclosed herein are systems and methods for managing information management operations. The system may be configured to employ a work flow queue to reduce network traffic and manage server processing resources. The system may also be configured to forecast or estimate information management operations based on estimations of throughput between computing devices scheduled to execute one or more jobs. The system may also be configured to escalate or automatically reassign notification of system alerts based on the availability of system alert recipients. Various other embodiments are also disclosed herein. | 2015-08-27 |
20150242265 | CHANGE MESSAGE BROADCAST ERROR DETECTION - A hardware device detects change messages broadcast within a system. The system includes the hardware device, one or more controller devices, one or more expander devices, and one or more target devices interconnected among one another. The hardware device determines whether the change messages were broadcast within the system every first period of time or less for at least a second period of time, the first period of time less than the second period of time. In response to determining that the change messages were broadcast within the system every first period of time or less for at least the second period of time, the hardware devices signals that an error has been detected. | 2015-08-27 |
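The two-period detection rule above (change messages arriving at most every first period apart, sustained for at least the second, longer period) can be sketched over a list of message timestamps. The function name and units are illustrative assumptions:

```python
def broadcast_storm_detected(timestamps, max_gap, min_duration):
    """Signal an error if change messages arrived at most max_gap apart
    for a continuous span of at least min_duration (timestamps sorted)."""
    if len(timestamps) < 2:
        return False
    span_start = timestamps[0]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > max_gap:
            span_start = cur  # gap too large: restart the measured span
        elif cur - span_start >= min_duration:
            return True       # dense broadcasts sustained long enough
    return False
```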
20150242266 | INFORMATION PROCESSING APPARATUS, CONTROLLER, AND METHOD FOR COLLECTING LOG DATA - A controller includes: a monitor that monitors an occurrence of a failure in a processor; an information obtainer that obtains, when the monitor detects the occurrence of the failure, log data from the device; and a first storing processor that stores the log data obtained by the information obtainer into a first storing device. | 2015-08-27 |
20150242267 | DETECTION AND RESTORATION OF ERRONEOUS DATA - Embodiments of the present invention provide systems, methods, and computer storage media for detecting and restoring erroneous data. In cases that a data entry within a data matrix is determined to be erroneous, the data entry can be restored using a replacement value calculated in accordance with other data from the data matrix. In particular, the number of dimensions used to calculate the replacement value can be reduced from the complete set of dimensions to avoid unnecessary noise data that may impact corrected data values. | 2015-08-27 |
20150242268 | PERIODICALLY UPDATING A LOG LIKELIHOOD RATIO (LLR) TABLE IN A FLASH MEMORY CONTROLLER - Log likelihood ratio (LLR) values that are computed in a flash memory controller during read retries change over time as the number of program-and-erase cycles (PECs) that the flash memory die has been subjected to increases. Therefore, in cases where an LLR table is used to provide pre-defined, fixed LLR values to the error-correcting code (ECC) decoding logic of the controller, decoding success and the resulting BER will degrade over time as the number of PECs to which the die has been subjected increases. In accordance with embodiments, a storage system, a flash memory controller for use in the storage system, and a method are provided that periodically measure the LLR values and update the LLR table with new LLR values. Periodically measuring the LLR values and updating the LLR table with new LLR values ensures high decoding success and a low BER over the life of the flash memory die. | 2015-08-27 |
20150242269 | Memory Redundancy to Replace Addresses with Multiple Errors - A method and apparatus are provided for error correction of a memory by using a first memory ( | 2015-08-27 |
20150242270 | DATA STORAGE DEVICE AND METHOD TO CORRECT BIT VALUES USING MULTIPLE READ VOLTAGES - A data storage device includes a memory including a group of storage elements. The memory is configured to read the group of the storage elements. A controller is coupled to the memory. The controller is configured to, in response to a first error correction code (ECC) procedure determining that a first plurality of bit values obtained using a first read voltage to read the group of storage elements is uncorrectable, instruct the memory to read the group of the storage elements using a second read voltage to obtain a second plurality of bit values. The controller is further configured to compare the first plurality of bit values with the second plurality of bit values to identify a first set of bits having different values in the first plurality of bit values as compared to the second plurality of bit values and to change one or more values of the first plurality of bit values for one or more bits in the first set of bits to generate a first plurality of corrected bit values. | 2015-08-27 |
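The bit-comparison step above (reading the same storage elements at two voltages, identifying the bits whose values differ, and flipping them to form a corrected candidate for another ECC decode attempt) can be sketched on integer bit patterns. The function names are hypothetical, and real controllers operate on whole codewords rather than small integers:

```python
def differing_bits(first_read, second_read, width=8):
    """Bit positions where the two reads disagree: the unreliable bits."""
    diff = first_read ^ second_read
    return [i for i in range(width) if (diff >> i) & 1]

def flip_bits(word, positions):
    """Flip the selected unreliable bits to form a corrected candidate
    that can be handed back to the ECC decoder."""
    for pos in positions:
        word ^= 1 << pos
    return word
```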
20150242271 | MEMORY MANAGEMENT SYSTEM AND METHOD - A memory system and method of operating the same is described, where the memory system is used to store data in a RAIDed manner. The stored data may be retrieved, including the parity data so that the stored data is recovered when the first of either the stored data without the parity data, or the stored data from all but one memory module and the parity data, has been received. The writing of data, for low write data loads, is managed such that only one of the memory modules of a RAID stripe is being written to, or erased, during a time interval. | 2015-08-27 |
20150242272 | CONCATENATING DATA OBJECTS FOR STORAGE IN A DISPERSED STORAGE NETWORK - A method begins by a processing module of a dispersed storage network (DSN) concatenating a plurality of independent data objects into a concatenated data object and performing a dispersed storage error encoding function on the concatenated data object to produce a set of data-based encoded data slices and a set of redundancy-based encoded data slices. The method continues with the processing module outputting the set of data-based encoded data slices to a first set of storage units for storage and outputting the set of redundancy-based encoded data slices to a second set of storage units for storage. | 2015-08-27 |
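As a toy stand-in for the dispersed storage error encoding above, the sketch below concatenates objects, splits the result into data slices, and adds a single XOR redundancy slice. Real DSN systems use stronger erasure codes across many storage units; every name here is illustrative:

```python
from functools import reduce

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_concatenated(objects, k):
    """Concatenate independent data objects, split into k equal data
    slices (zero-padded), and compute one XOR redundancy slice."""
    data = b"".join(objects)
    size = -(-len(data) // k) if data else 1  # ceiling division
    slices = [data[i * size:(i + 1) * size].ljust(size, b"\0")
              for i in range(k)]
    parity = reduce(xor_bytes, slices)
    return slices, parity

def recover_missing(slices, parity, missing_index):
    """Rebuild one lost data slice from the rest plus the parity slice."""
    rest = [s for i, s in enumerate(slices) if i != missing_index]
    return reduce(xor_bytes, rest + [parity])
```

The data slices would go to the first set of storage units and the redundancy slice to the second set, following the split described in the abstract.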
20150242273 | STORAGE OF DATA WITH VERIFICATION IN A DISPERSED STORAGE NETWORK - A method begins by a computing device sending a set of redundant dispersed storage error encoding write requests regarding a data object to a set of dispersed storage (DS) processing modules. The method continues with the set of DS processing modules dispersed storage error encoding the data object to produce a group of pluralities of sets of encoded data slices. The method continues with a set of storage units temporarily storing the group of pluralities of sets of encoded data slices. The method continues with the set of storage units permanently storing encoded data slices of the group of pluralities of sets of encoded data slices based on successful execution of a storage verification process to produce a plurality of sets of encoded data slices. | 2015-08-27 |
20150242274 | PIPELINED ECC-PROTECTED MEMORY ACCESS - In one aspect, a pipelined ECC-protected cache access method and apparatus provides that during a normal operating mode, for a given cache transaction, a tag comparison action and a data RAM read are performed speculatively in a time during which an ECC calculation occurs. If a correctable error occurs, the tag comparison action and data RAM are repeated and an error mode is entered. Subsequent transactions are processed by performing the ECC calculation, without concurrent speculative actions, and a tag comparison and read are performed using only the tag data available after the ECC calculation. A reset to normal mode is effected by detecting a gap between transactions that is sufficient to avoid a conflict for use of tag comparison circuitry for an earlier transaction having a repeated tag comparison and a later transaction having a speculative tag comparison. | 2015-08-27 |
20150242275 | POWER EFFICIENT DISTRIBUTION AND EXECUTION OF TASKS UPON HARDWARE FAULT WITH MULTIPLE PROCESSORS - Tasks may be scheduled on more than one processor to allow the processors to operate at lower processor frequencies and processor supply voltages. Multiple processors executing tasks in parallel at lower frequencies and supply voltages may allow completion of the tasks by deadlines at lower power consumption than a single processor executing all tasks at high frequencies and supply voltages. Power efficiency of a computer system may be improved by using a combination of processors executing tasks using a combination of earliest deadline first (EDF), earliest deadline last (EDL), and round robin (RR) queue management methods. | 2015-08-27 |
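The earliest-deadline-first (EDF) component of the scheduling mix above can be sketched with a heap keyed on deadline. This shows only the EDF ordering, not the EDL or round-robin parts, and the names are illustrative:

```python
import heapq

def edf_schedule(tasks):
    """Order tasks earliest-deadline-first.

    tasks: list of (deadline, name) tuples; returns task names in the
    order a single EDF processor would run them.
    """
    heap = list(tasks)
    heapq.heapify(heap)  # min-heap on deadline (first tuple element)
    order = []
    while heap:
        _deadline, name = heapq.heappop(heap)
        order.append(name)
    return order
```

Splitting such a queue across several cores lets each run at a lower frequency and supply voltage while all deadlines are still met, which is the power-saving effect the abstract describes.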
20150242276 | SALVAGING LOCK ELISION TRANSACTIONS - A transactional memory system salvages a hardware lock elision (HLE) transaction. A processor of the transactional memory system executes a lock-acquire instruction in an HLE environment and records information about a lock elided to begin HLE transactional execution of a code region. The processor detects a pending point of failure in the code region during the HLE transactional execution. The processor stops HLE transactional execution at the point of failure in the code region. The processor acquires the lock using the information, and based on acquiring the lock, commits the speculative state of the stopped HLE transactional execution. The processor starts non-transactional execution at the point of failure in the code region. | 2015-08-27 |
20150242277 | SALVAGING HARDWARE TRANSACTIONS WITH INSTRUCTIONS - A transactional memory system salvages a hardware transaction. A processor of the transactional memory system records information about an about-to-fail handler for transactional execution of a code region, and records information about a lock elided to begin transactional execution of the code region. The processor detects a pending point of failure in the code region during the transactional execution, and based on the detecting, stops transactional execution at a first instruction in the code region and executes the about-to-fail handler using the information about the about-to-fail handler. The processor, executing the about-to-fail handler, acquires the lock using the information about the lock, commits speculative state of the stopped transactional execution, and starts non-transactional execution at a second instruction following the first instruction in the code region. | 2015-08-27 |
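Hardware lock elision cannot be shown directly in a high-level language, but the salvage pattern common to the two abstracts above — buffer speculative writes, and on a pending failure acquire the elided lock, commit the speculative state, and continue non-transactionally — can be sketched as a software analogy. Everything here (class and method names, the dict-based write buffer) is an illustrative assumption, not the claimed hardware mechanism.

```python
import threading

class SalvageableTransaction:
    """Software analogy of transaction salvage: writes are speculative
    until either a clean commit or an about-to-fail salvage."""
    def __init__(self, shared, lock):
        self.shared, self.lock = shared, lock
        self.buffer = {}                 # speculative state
        self.transactional = True

    def write(self, key, value):
        if self.transactional:
            self.buffer[key] = value     # buffered, not yet visible
        else:
            self.shared[key] = value     # non-transactional, lock held

    def about_to_fail(self):
        """Salvage instead of abort: acquire the elided lock, commit the
        speculative state, continue non-transactionally."""
        self.lock.acquire()
        self.shared.update(self.buffer)
        self.transactional = False

    def finish(self):
        if self.transactional:
            self.shared.update(self.buffer)   # clean transactional commit
        else:
            self.lock.release()
```

The point of the pattern is visible in the analogy: work done before the pending failure is preserved rather than rolled back and re-executed.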
20150242278 | SALVAGING HARDWARE TRANSACTIONS - A transactional memory system salvages a partially executed hardware transaction. A processor of the transactional memory system saves state information in a first code region of a first hardware transaction, the state information useable to determine whether the first hardware transaction is to be salvaged or to be aborted. The processor detects an about to fail condition in the first code region of the first hardware transaction. The processor, based on the detecting, executes an about-to-fail handler, the about-to-fail handler using the saved state information to determine whether the first hardware transaction is to be salvaged or to be aborted. The processor executing the about-to-fail handler, based on the transaction being to be salvaged, uses the saved state information to determine what portion of the first hardware transaction to salvage. | 2015-08-27 |
20150242279 | SALVAGING HARDWARE TRANSACTIONS WITH INSTRUCTIONS - A transactional memory system salvages a hardware transaction. A processor of the transactional memory system executes a salvage indicator instruction, such execution including obtaining a salvage indication information specified by the salvage indicator instruction, and saving the salvage indication information comprising a salvage indication. Based on a pending point of failure being detected, the processor uses the saved salvage indication information to avoid aborting a hardware transaction, wherein absent salvage indication information, the pending point of failure causes a hardware transaction to abort. The processor detects the point of failure, and based on the detecting, determines whether the salvage indication has been recorded. Based on determining that the salvage indication has been recorded, the processor executes an about-to-fail handler, and based on determining that the salvage indication has not been recorded, the processor aborts the transactional execution of the code region. | 2015-08-27 |
20150242280 | SALVAGING HARDWARE TRANSACTIONS WITH INSTRUCTIONS - A transactional memory system salvages a hardware transaction. A processor of the transactional memory system executes a first salvage checkpoint instruction in a code region during transactional execution of the code region, and based on the executing the first salvage checkpoint instruction, the processor records transaction state information comprising an address of the first salvage checkpoint instruction within the code region. The processor detects a pending point of failure in the code region during the transactional execution, and based on the detecting, determines that the transaction state information has been recorded, and further based on the detecting, executes an about-to-fail handler. Based on executing the about-to-fail handler, the processor returns to the execution of the code region of the transaction at the address of the checkpoint instruction. | 2015-08-27 |
20150242281 | SELF-AWARE AND SELF-HEALING COMPUTING SYSTEM - A method and a computing system for performing the method. At least two microstates of at least two components of a computing system are organized into at least two macrostates of the computing system. Each microstate represents a state that a component of the computing system is able to individually enter. Each macrostate represents a state that the computing system is able to enter as a whole. The macrostates are organized into attractors. Each attractor is a stable state in which the computing system is stable. An attractor separation map is constructed. The attractor separation map indicates how the attractors are separated from one another by at least two Hamming distances. Each Hamming distance is a number of bits that differ between two attractors. | 2015-08-27 |
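The attractor separation map reduces to pairwise Hamming distances between bit-encoded macrostates. A minimal sketch, assuming each attractor is encoded as a bit string (the encoding and function name are assumptions for illustration):

```python
from itertools import combinations

def attractor_separation_map(attractors):
    """Pairwise Hamming distances between attractor macrostates,
    each macrostate given as a bit string of equal length."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return {(a, b): hamming(a, b) for a, b in combinations(attractors, 2)}
```

For attractors `"0000"`, `"0011"`, and `"1111"`, the map records separations of 2, 4, and 2 bits respectively.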
20150242282 | MECHANISM TO UPDATE SOFTWARE PACKAGES - A system and a method are disclosed, including: in response to a request to upgrade software stored in a file system, creating, by a processing device, a first snapshot of the file system; responsive to receiving a rollback request, creating a second snapshot of the file system; and rolling back the file system using the first snapshot. | 2015-08-27 |
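The snapshot-then-rollback flow can be sketched with a toy in-memory file system. The class and snapshot names are invented for the example; a real implementation would use copy-on-write file-system snapshots rather than deep copies.

```python
import copy

class SnapshottingFS:
    """Toy file system with named snapshots, mirroring the flow above:
    snapshot before the upgrade, snapshot again on a rollback request,
    then roll back to the pre-upgrade snapshot."""
    def __init__(self):
        self.files, self.snapshots = {}, {}

    def snapshot(self, name):
        self.snapshots[name] = copy.deepcopy(self.files)

    def rollback(self, name):
        self.files = copy.deepcopy(self.snapshots[name])
```

Taking the second snapshot before rolling back preserves the post-upgrade state, so the rollback itself is also reversible.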
20150242283 | BACKING UP VIRTUAL MACHINES - A processing device generates a live snapshot of a virtual disk image attached to a virtual machine, wherein generating the live snapshot comprises converting an existing read-write volume to a read-only volume. The processing device generates, from the read-only volume, a temporary snapshot of the virtual disk image, the temporary snapshot comprising a temporary read-write volume. The processing device attaches the temporary snapshot of the virtual disk image to a backup component and causes at least one of the backup component or a backup service to backup the virtual disk image from the attached temporary snapshot. | 2015-08-27 |
20150242284 | TWO-ALGORITHM SORT DURING BACKUP AND RECOVERY - A backup of a file system is performed by scanning the file system to find elements that require a backup. Once at least one element is found, element identifiers associated with the elements are sorted using a first sorting algorithm to select an element for backup, and the element identifier associated with the selected element is appended to a backup list. A second sorting algorithm may also sort in parallel to the first sorting algorithm. The sorted elements are appended to the backup list until a predetermined rule is satisfied, at which point the remainder of the elements is sorted using a second sorting algorithm different from the first sorting algorithm. The element identifiers associated with the remaining elements are appended to the backup list in an order determined by the second sorting algorithm. While the sorting is occurring, the elements are backed up in the order of the backup list. | 2015-08-27 |
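The two-algorithm handoff can be sketched as follows, assuming (for illustration only) that the first algorithm is an incremental minimum-selection that can emit identifiers immediately, the second is a bulk sort, and the predetermined rule is a simple count threshold:

```python
def build_backup_list(element_ids, switch_after):
    """Order element identifiers for backup using two algorithms:
    repeatedly select the minimum remaining identifier (streamable,
    starts emitting at once) until the rule fires, then sort the
    remainder in bulk with a different algorithm."""
    remaining = list(element_ids)
    backup_list = []
    while remaining and len(backup_list) < switch_after:
        nxt = min(remaining)                  # first algorithm: selection
        remaining.remove(nxt)
        backup_list.append(nxt)
    backup_list.extend(sorted(remaining))     # second algorithm: bulk sort
    return backup_list
```

The design point mirrors the abstract: the selection phase lets backup begin before sorting completes, and the bulk phase finishes the ordering efficiently.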
20150242285 | PERSISTENCY FREE ARCHITECTURE - System and method for persistency-free management of media storage including, during routine operation: continuously receiving streams of data; storing the streams of data in corresponding files in non-volatile storage; and including in each file a tag indicating whether the file is categorized as active or inactive; and, when recovering from a crash: generating a list of active files by scanning the files and identifying those tagged active. A system and method for recovering after a controller crash may include, during routine operation: continuously handling, by the controller, processes related to media metadata by sending commands to a controlled device; and sending state parameters related to the processes to the controlled device; and, when recovering from the crash: retrieving the state parameters from the controlled device. | 2015-08-27 |
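The tag-and-scan recovery scheme — no separate persistent index, just a flag embedded in each file that a crash-recovery scan reads back — can be sketched as below. File layout and function names are assumptions for the example; the application concerns media storage, not JSON files.

```python
import json
import os
import tempfile

def write_stream_file(directory, name, payload, active=True):
    """Persist a stream file with its active/inactive tag embedded."""
    with open(os.path.join(directory, name), "w") as f:
        json.dump({"active": active, "payload": payload}, f)

def recover_active_files(directory):
    """Crash recovery: scan every file and list those tagged active,
    rebuilding the active list with no persisted index."""
    active = []
    for name in os.listdir(directory):
        with open(os.path.join(directory, name)) as f:
            if json.load(f)["active"]:
                active.append(name)
    return sorted(active)
```

Because the tag lives inside the file itself, the recovery scan needs no state that could itself be lost in the crash.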
20150242286 | GRAPHICAL INTERFACE FOR DISPLAY OF ASSETS IN AN ASSET MANAGEMENT SYSTEM - The claimed subject matter provides a system and/or method that facilitates employing a graphical user interface to monitor and/or manage an asset within an industrial environment. A graphical user interface can facilitate asset management including a first field that provides a user with a hierarchical representation of assets within an industrial environment. The graphical user interface can further include a second field that displays available management functionality associated with an asset selected within the first field. | 2015-08-27 |
20150242287 | OPTIMIZING DISASTER RECOVERY SYSTEMS DURING TAKEOVER OPERATIONS - Exemplary method, system, and computer program product embodiments for optimizing disaster recovery systems during takeover operations are provided. In one embodiment, by way of example only, a flag is set in a replication grid manager to identify replication grid members to consult in a reconciliation process for resolving intersecting and non-intersecting data amongst the disaster recovery systems for a takeover operation. Additional system and computer program product embodiments are disclosed and provide related advantages. | 2015-08-27 |
20150242288 | SETTING COPY PERMISSIONS FOR TARGET DATA IN A COPY RELATIONSHIP - Provided are a computer program product, system, and method for setting copy permissions for target data in a copy relationship. Source data is copied from a first storage to a first data copy in a second storage. A request is received to copy requested data from the first data copy to a second data copy. The second copy operation is performed to copy the requested first data copy from the second storage to the second data copy in response to determining that the requested first data copy is not in a state that does not permit the copying. The request is denied in response to determining that the requested first data copy is in a state that does not permit copying. | 2015-08-27 |
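The permission gate on the second copy operation reduces to a state check before copying. A minimal sketch, with class and exception choices invented for illustration:

```python
class CopyTarget:
    """A data copy carrying a state that may forbid further copying."""
    def __init__(self, data, copy_permitted=True):
        self.data = data
        self.copy_permitted = copy_permitted

def copy_from_target(first_copy):
    """Second copy operation: performed only if the first data copy
    is not in a state that forbids copying; otherwise denied."""
    if not first_copy.copy_permitted:
        raise PermissionError("target copy state does not permit copying")
    return CopyTarget(list(first_copy.data))
```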
20150242289 | STORAGE SYSTEM AND DATA MANAGEMENT METHOD - An efficient disaster recovery system is constructed across three data centers. A data center includes: a business server for executing an application in response to an input/output request; a storage system for providing a first storage area storing data in response to a request from the business server; and a management server for managing a second data center or a third data center among the plurality of data centers as a failover location when a system of a first data center having the first storage area stops. The management server copies all data stored in the first storage area to a second storage area managed by a storage system of the second data center, and copies part of the data stored in the first storage area to a third storage area managed by a storage system of the third data center. | 2015-08-27 |
20150242290 | COMPUTER SWITCHING METHOD, COMPUTER SYSTEM, AND MANAGEMENT COMPUTER - A computer switching method to be performed by a computer system including a plurality of computers, a storage system, and a management computer, the plurality of computers including: a plurality of first computers and a plurality of second computers, the storage system providing a logical storage device to each of the plurality of first computers, the logical storage device including a first logical storage device which is a storage area for storing data, the computer switching method including: a step of transmitting, by the management computer, a generation request for instructing the storage system to generate a second logical storage device; a step of generating, by the management computer, change information for mapping the first logical storage device to the second logical storage device for the second computer, and transmitting a change request including the generated change information to the storage system. | 2015-08-27 |
20150242291 | STORAGE SYSTEM AND A METHOD USED BY THE STORAGE SYSTEM - To perform failover processing between a production host and a backup host, a storage system is connected to the production host and the backup host. In response to a failure of the production host, metadata is obtained of data blocks that have been cached in an elastic space located in a fast disk of the storage system. A storage capacity of the elastic space is expanded. Data blocks to which the metadata corresponds are obtained according to the metadata and the storage capacity of the expanded elastic space, and are stored in the expanded elastic space. In response to the backup host requesting the data blocks to which the metadata corresponds, when those data blocks have already been stored in the expanded elastic space, they are obtained from the expanded elastic space and transmitted to the backup host. | 2015-08-27 |
20150242292 | SERVER CLUSTERING IN A COMPUTING-ON-DEMAND SYSTEM - A device may provision two or more servers, each of the servers including a network interface. In addition, the device may enable the network interface in each of the provisioned servers, create a shared volume, assign the shared volume to each of the provisioned servers, and enable a clustering application on each of the provisioned servers to form a cluster comprising the provisioned servers, the cluster having a heartbeat via the network interfaces. | 2015-08-27 |
20150242293 | REAL TIME TERMINAL FOR DEBUGGING EMBEDDED COMPUTING SYSTEMS - One or more circular debug buffers can allow terminal output data to be provided from the target system to a host without halting the target system or causing significant delays. One or more circular debug buffers may also allow input (such as keyboard input) to be provided from the host to the target without halting the target system or causing significant delays. Accordingly, communications between the target and host may be performed in real time or near real time. These communications may be used for debugging purposes or more generally, for any purpose, including purposes unrelated to debugging. | 2015-08-27 |
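The circular debug buffer at the heart of this design can be sketched with a classic single-producer ring: the target writes terminal output without blocking (dropping rather than halting when full), and the host drains asynchronously. The class below is an illustrative sketch, not the patented implementation; one slot is kept empty to distinguish full from empty.

```python
class CircularDebugBuffer:
    """Fixed-size ring buffer: the target writes without blocking and the
    host reads asynchronously, so the target is never halted for I/O."""
    def __init__(self, size):
        self.buf = bytearray(size)
        self.size = size
        self.wr = self.rd = 0            # write / read offsets

    def target_write(self, data):
        written = 0
        for b in data:
            nxt = (self.wr + 1) % self.size
            if nxt == self.rd:           # full: drop rather than halt target
                break
            self.buf[self.wr] = b
            self.wr = nxt
            written += 1
        return written

    def host_read(self):
        out = bytearray()
        while self.rd != self.wr:
            out.append(self.buf[self.rd])
            self.rd = (self.rd + 1) % self.size
        return bytes(out)
```

The same structure works in the opposite direction for host-to-target input (such as keyboard data), as the abstract notes.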
20150242294 | Network Test System - A test system and a related method, the system comprising a test processing agent and local test device(s). The test processing agent processes test measurements related to a network-under-test into test results. The test processing agent is decoupled from the network-under-test, e.g., by being reachable through a network communication link distinct from the network-under-test. The local test device comprises a firmware module and a network interface (NI) module. The firmware module depends on external instructions for initiating a test sequence on the network-under-test. The NI module comprises at least one physical port connectable to the network-under-test. The physical port is used for initiating the test sequence. The test processing agent receives the test measurements following the initiation of the test sequence by the local test device and allows access to the test results. | 2015-08-27 |