38th week of 2014 patent application highlights part 233 |
Patent application number | Title | Published |
20140282505 | SYSTEMS AND METHODS FOR DEPLOYING AN APPLICATION AND AN AGENT ON A CUSTOMER SERVER IN A SELECTED NETWORK - Information indicating a location of a disk image of a virtual machine hosted on a server is received. The virtual machine is deactivated. The server is instructed to mount the disk image. A static route pointing to a selected network is added to a static routing table on a file system associated with the virtual machine. The server is instructed to dismount the disk image. The virtual machine is activated. | 2014-09-18 |
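The workflow in 20140282505 can be sketched as a small offline edit of a guest routing table. This is a toy model only: the function names, the "destination via gateway" entry format, and the action list standing in for the deactivate/mount/dismount/activate steps are all illustrative assumptions, not the patent's implementation.

```python
# Toy model of 20140282505: deactivate the VM, mount its disk image,
# add a static route to the routing table on the image's file system,
# then dismount and reactivate. All names/formats are illustrative.

def add_static_route(routing_table_lines, destination, gateway):
    """Append a static route pointing at the selected network,
    skipping the write if an identical entry already exists."""
    entry = f"{destination} via {gateway}"
    if entry not in routing_table_lines:
        routing_table_lines.append(entry)
    return routing_table_lines

def inject_route_offline(routing_table_lines, destination, gateway):
    """Model the claimed sequence as a list of lifecycle actions
    wrapped around the offline routing-table edit."""
    actions = ["deactivate", "mount"]
    add_static_route(routing_table_lines, destination, gateway)
    actions += ["dismount", "activate"]
    return actions, routing_table_lines
```

Editing the image while the VM is deactivated is what makes the change safe: no running guest can race with the file-system write.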
20140282506 | ENCAPSULATION OF AN APPLICATION FOR VIRTUALIZATION - Embodiments relate to a computer system comprising a service layer controller. The computer system comprises a ring interface unit configured to provide access to a host system that enables access to a plurality of virtual machines (VMs). The computer system comprises a hardware application configured to be encapsulated by the service layer controller such that the hardware application communicates to the host system via interfaces controlled by the ring interface unit and service layer controller. | 2014-09-18 |
20140282507 | SYSTEMS AND METHODS OF USING A HYPERVISOR WITH GUEST OPERATING SYSTEMS AND VIRTUAL PROCESSORS - An apparatus includes a processor and a guest operating system. In response to receiving a request to create a task, the guest operating system requests a hypervisor to create a virtual processor to execute the requested task. The virtual processor is schedulable on the processor. | 2014-09-18 |
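The one-virtual-processor-per-task idea in 20140282507 can be modeled in a few lines. The `Hypervisor` and `GuestOS` classes and their method names are invented for illustration; the point is only the delegation of task creation to the hypervisor.

```python
# Toy model of 20140282507: instead of scheduling a new task itself,
# the guest OS asks the hypervisor for a dedicated virtual processor.
# Class and method names are illustrative, not from the patent.

class Hypervisor:
    def __init__(self):
        self.virtual_processors = []

    def create_virtual_processor(self, task):
        vp = {"id": len(self.virtual_processors), "task": task}
        self.virtual_processors.append(vp)
        return vp

class GuestOS:
    def __init__(self, hypervisor):
        self.hypervisor = hypervisor

    def create_task(self, name):
        # The virtual processor, not the task, becomes the unit the
        # hypervisor schedules onto the physical processor.
        return self.hypervisor.create_virtual_processor(name)
```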
20140282508 | SYSTEMS AND METHODS OF EXECUTING MULTIPLE HYPERVISORS - An apparatus includes a primary hypervisor that is executable on a first set of processors and a secondary hypervisor that is executable on a second set of processors. The primary hypervisor may define settings of a resource and the secondary hypervisor may use the resource based on the settings defined by the primary hypervisor. For example, the primary hypervisor may program memory address translation mappings for the secondary hypervisor. The primary hypervisor and the secondary hypervisor may include their own schedulers. | 2014-09-18 |
20140282509 | MANAGING AN INDEPENDENT VIRTUAL DISK - A computer-implemented method for managing an independent virtual disk. The method includes creating an independent virtual disk; in response to creating the independent virtual disk, creating a first virtual machine; attaching the independent virtual disk to the first virtual machine; and managing the independent virtual disk by controlling the first virtual machine to which it is attached. | 2014-09-18 |
20140282510 | SERVICE BRIDGES - Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for service bridges. In one aspect, a method includes a host operating system performing operations comprising: receiving, using one or more service bridges that execute in the host operating system, a plurality of requests from the one or more virtual machines, wherein each service bridge is associated with a different virtual machine of the one or more virtual machines, and wherein each request is a request to interface with one or more external services; modifying, using a respective service bridge, each request to be processed by the one or more external services; and providing each modified request from the respective service bridge to the one or more external services, where the respective service bridge communicates with the one or more external services over a network. | 2014-09-18 |
20140282511 | PRESERVING AN INDEPENDENT VIRTUAL DISK - A computer-implemented method for preserving an independent virtual disk. The method includes attaching an independent virtual disk to a first virtual machine, and preserving said independent virtual disk when the independent virtual disk is detached from the first virtual machine. | 2014-09-18 |
20140282512 | ZONE MANAGEMENT OF COMPUTE-CENTRIC OBJECT STORES - Zone management of compute-based object stores is provided herein. An exemplary method may include assigning a virtual operating system container from the reserve zone pool to a task group, the task group including a set of tasks for a phase of a first request, and executing the set of tasks within the assigned virtual operating system container. | 2014-09-18 |
20140282513 | INSTRUCTION SET ARCHITECTURE FOR COMPUTE-BASED OBJECT STORES - Instruction set architectures for compute-centric object stores. An exemplary method may include receiving a request from a user, the request identifying parameters of a compute operation that is to be executed against one or more objects in a distributed object store, generating a set of tasks from the request that comprise instructions for a daemon, locating the one or more objects within the distributed object store, the one or more objects being stored on a physical node. The method includes providing the set of tasks to a daemon, the daemon controlling execution of the compute operation by a virtual operating system container based upon the set of tasks, and storing an output of the virtual operating system container in the distributed object store. | 2014-09-18 |
20140282514 | VIRTUALIZATION SUPPORT FOR STORAGE DEVICES - Techniques are disclosed relating to enabling virtual machines to access data on a physical recording medium. In one embodiment, a computing system provides a logical address space for a storage device to an allocation agent that is executable to allocate the logical address space to a plurality of virtual machines having access to the storage device. In such an embodiment, the logical address space is larger than a physical address space of the storage device. The computing system may then process a storage request from one of the plurality of virtual machines. In some embodiments, the allocation agent is a hypervisor executing on the computing system. In some embodiments, the computing system tracks utilizations of the storage device by the plurality of virtual machines, and based on the utilizations, enforces a quality of service level associated with one or more of the plurality of virtual machines. | 2014-09-18 |
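The core mechanism in 20140282514 is a logical address space larger than the device's physical space, carved up among VMs by an allocation agent. A minimal sketch, with invented names and sizes, and ignoring the later mapping of logical to physical addresses:

```python
# Sketch of 20140282514's allocation agent: the logical space handed to
# VMs exceeds the physical space (thin provisioning); only actual writes
# would later consume physical capacity. All names are illustrative.

class AllocationAgent:
    def __init__(self, logical_size, physical_size):
        assert logical_size > physical_size  # the patent's key property
        self.logical_size = logical_size
        self.physical_size = physical_size
        self.next_free = 0
        self.allocations = {}

    def allocate(self, vm_id, size):
        """Hand a contiguous logical range to a VM."""
        if self.next_free + size > self.logical_size:
            raise MemoryError("logical address space exhausted")
        rng = (self.next_free, self.next_free + size)
        self.allocations[vm_id] = rng
        self.next_free += size
        return rng
```

Because allocation happens in the oversized logical space, each VM can be promised far more than the device physically holds, which is what makes per-VM utilization tracking and quality-of-service enforcement necessary.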
20140282515 | REFRESHING MEMORY TOPOLOGY IN VIRTUAL MACHINE OPERATING SYSTEMS - According to one aspect of the present disclosure a system and technique for refreshing memory topology in virtual machine operating systems is disclosed. The system includes a processor and logic executable by the processor to: responsive to receiving, by an operating system of a virtual machine, a notification of an affinity change relative to workload memory resources, poll a hypervisor for updated memory affinity data; determine, for each logical memory block of the workload memory resources, whether an affinity string for the respective logical memory block has changed; responsive to determining that the affinity string for the respective logical memory block has changed, identify a data structure of the logical memory block maintained by the operating system; and update affinity information in the data structure based on the change to the affinity string of the logical memory block. | 2014-09-18 |
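The refresh loop in 20140282515 (and its sibling 20140282530 below) reduces to: poll the hypervisor, compare affinity strings per logical memory block, and update only the blocks whose strings changed. A sketch, where the dict-based per-block "data structure" is an assumption for illustration:

```python
# Sketch of the affinity refresh in 20140282515: on a change notification,
# poll the hypervisor and update only the logical memory blocks whose
# affinity string differs. The dict structures are illustrative.

def refresh_affinity(blocks, poll_hypervisor):
    """blocks maps block id -> {"affinity": str, ...};
    poll_hypervisor() returns block id -> current affinity string."""
    updated = []
    fresh = poll_hypervisor()
    for block_id, info in blocks.items():
        new_affinity = fresh.get(block_id, info["affinity"])
        if new_affinity != info["affinity"]:
            info["affinity"] = new_affinity  # update the OS-side structure
            updated.append(block_id)
    return updated
```

Comparing strings before touching the per-block structure keeps the common case (no change for most blocks) cheap.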
20140282516 | PROVIDING EXECUTION ACCESS TO FILES NOT INSTALLED IN A VIRTUALIZED SPACE - Provided are techniques for providing a virtual machine (VM) workload partition (WPAR) with a versioned operating system (OS) that is different than a native OS associated with a logical partition (LPAR) corresponding to the WPAR, wherein the versioned OS is an earlier version of the native OS; detecting an executable file associated with the versioned OS that has been designated to be overlaid with a corresponding executable from the native OS; generating a link to the corresponding executable; and installing the link in the WPAR rather than the executable file. | 2014-09-18 |
20140282517 | APPLYING AND REMOVING APPROPRIATE FILE OVERLAYS DURING LIVE APPLICATION MOBILITY - Provided are techniques for moving, in conjunction with live application mobility, a virtual machine (VM) workload partition (WPAR) on a first logical partition (LPAR) running on a first operating system (OS) to a second LPAR running a second OS, wherein the first OS is a different version than the second OS; the moving comprising, in response to a determination that the second OS is a newer version of the first OS: determining a set of overlays associated with the WPAR corresponding to the second OS; removing from the WPAR a set of overlays associated with the WPAR corresponding to the first OS; and applying to the WPAR the set of overlays corresponding to the second OS. | 2014-09-18 |
20140282518 | ENFORCING POLICY-BASED COMPLIANCE OF VIRTUAL MACHINE IMAGE CONFIGURATIONS - Techniques are disclosed for data risk management in accessing an Infrastructure as a Service (IaaS) cloud network. More specifically, embodiments of the invention evaluate virtual machine images launched in cloud-based environments for compliance with a policy. After intercepting a virtual machine image launch request, an intermediary policy management engine determines whether the request conforms to a policy defined by a policy manager, e.g., an enterprise's information security officer. The policy may be based on user identities, virtual machine image attributes, data classifications, or other criteria. Upon determining whether the request conforms to policy, the policy management engine allows the request, blocks the request, or triggers a management approval workflow. | 2014-09-18 |
20140282519 | MANAGING A SERVER TEMPLATE - A non-transitory computer-readable storage medium may comprise instructions for managing a server template stored thereon. When executed by at least one processor, the instructions may be configured to cause at least one computing system to at least convert the server template to a corresponding virtual machine, manage the corresponding virtual machine, and convert the corresponding virtual machine back into a template format. | 2014-09-18 |
20140282520 | PROVISIONING VIRTUAL MACHINES ON A PHYSICAL INFRASTRUCTURE - Example methods and systems provide for the provisioning of virtual machines on a physical infrastructure based on actual past resource usage of a plurality of virtual machines currently deployed on the physical infrastructure. Upon receiving a request for a new virtual machine based on specified resource requirements, actual usage data that indicate past resource usage of the plurality of current virtual machines are accessed, and provisioning parameters for the new virtual machine are calculated based at least in part on the actual usage data and the specified resource requirements. | 2014-09-18 |
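One plausible reading of the calculation in 20140282520: size the new VM from the smaller of the requested amount and a headroom-adjusted average of what comparable VMs actually used. The function, its unit, and the 20% headroom factor are invented example values, not from the patent.

```python
# Illustrative sketch of 20140282520: provisioning parameters derived
# from actual past usage plus the specified requirements. The headroom
# factor and averaging scheme are assumptions for the example.

def provision_parameters(requested, actual_usage, headroom=1.2):
    """requested and usage values share one unit (e.g. GB of RAM)."""
    if not actual_usage:
        return requested  # no history: fall back to the request
    observed = sum(actual_usage) / len(actual_usage) * headroom
    # Never allocate more than asked for, but shrink toward observed need.
    return min(requested, observed)
```

The effect is that chronically over-specified requests get trimmed toward measured demand, which is the over-provisioning problem the abstract targets.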
20140282521 | EXPANSION OF SERVICES FOR A VIRTUAL DATA CENTER GUEST - One or more services for enhancing guest utilization of a virtual machine and other VDC resources may be provided at the intermediary manager. In an embodiment, the intermediary manager intercepts a hypercall from a guest operating system that is separate from the intermediary manager. The intermediary manager determines that a particular intermediary service is associated with the hypercall and causes execution of service instructions associated with the particular intermediary service. The intermediary manager and guest operating systems may operate within a virtual machine hosted by a host machine and managed by a hypervisor. Embodiments may be useful in any of a virtualized enterprise computer system; a virtual machine infrastructure in a private data center; computing, storage or networking resources in a private cloud; computing, storage or networking resources of cloud service provider; and a hybrid cloud computing environment. | 2014-09-18 |
20140282522 | ACTIVITY INITIATED VIRTUAL MACHINE MIGRATION - Briefly, embodiments of methods or systems for activity initiated virtual machine migration are disclosed. | 2014-09-18 |
20140282523 | SCALABLE POLICY MANAGEMENT IN AN EDGE VIRTUAL BRIDGING (EVB) ENVIRONMENT - Embodiments of the invention relate to scalable policy management in an edge virtual bridging (EVB) environment. One embodiment includes a system including a physical end station including a hypervisor, wherein the physical end station creates at least one virtual machine (VM). A virtual station interface (VSI) database is coupled to a VM manager server. The VSI database stores policy information comprising one or more rules for different VM types and access rules. A policy management module is coupled to a switch adjacent to the physical end station. The policy management module generates a first table using at least a portion of the policy information, generates a second table with a portion of VM information received from the hypervisor for the VM, and uses the first table and the second table to retrieve and apply rules for the VM. | 2014-09-18 |
20140282524 | SCALABLE POLICY ASSIGNMENT IN AN EDGE VIRTUAL BRIDGING (EVB) ENVIRONMENT - Embodiments of the invention relate to scalable policy assignment in an edge virtual bridging (EVB) environment. One embodiment includes a system including a physical end station that includes a hypervisor. The physical end station creates at least one virtual machine (VM). A virtual station interface (VSI) database (DB) is coupled to a VM manager server. The VSI DB stores policy information and bandwidth filter information. A policy assignment module is coupled to a switch adjacent to the physical end station. The policy assignment module generates a VSI DB table with at least a portion of the VSI DB information from the VSI DB and a policy discriminator (PD) value for each VSI type ID. | 2014-09-18 |
20140282525 | CREATING, PROVISIONING AND MANAGING VIRTUAL DATA CENTERS - A cloud services brokerage platform system includes a virtual data center (VDC) and an architecture management interface. The virtual data center (VDC) includes a plurality of resource groups. Each one of the resource groups includes one or more VDC resources. Each one of the VDC resources is associated with a respective set of resource group specification parameters. The architecture management interface enables an architectural layout of the one or more VDC resources to be displayed. The architectural layout includes a visual depiction of the one or more VDC resources of each one of the resource groups. An arrangement of the visual depiction is dependent upon the respective set of resource group specification parameters. | 2014-09-18 |
20140282526 | MANAGING AND CONTROLLING A DISTRIBUTED NETWORK SERVICE PLATFORM - A distributed network service platform comprises: a logical data plane configured to process packets that are received by a plurality of physical devices, transmitted by the plurality of physical devices, or both, the logical data plane being physically distributed on the plurality of physical devices; and a logical control plane configured to manage and control the logical data plane, the logical control plane comprising one or more physical control planes operating on one or more physical devices. | 2014-09-18 |
20140282527 | Applying or Removing Appropriate File Overlays During Live Application Mobility - Provided are techniques for moving, in conjunction with live application mobility, a virtual machine (VM) workload partition (WPAR) on a first logical partition (LPAR) running on a first operating system (OS) to a second LPAR running a second OS, wherein the first OS is a different version than the second OS; the moving comprising, in response to a determination that the second OS is a newer version of the first OS: determining a set of overlays associated with the WPAR corresponding to the second OS; removing from the WPAR a set of overlays associated with the WPAR corresponding to the first OS; and applying to the WPAR the set of overlays corresponding to the second OS. | 2014-09-18 |
20140282528 | Virtualization Congestion Control Framework - Novel tools and techniques for implementing a virtualization congestion control framework. In one aspect, an orchestrator might be provided within a virtual machine environment context in order to provide two-way communications between the virtual machine (“VM”) and one or more applications running on one or more virtual machines in the VM environment in order to control congestion in hardware resource usage, perhaps using a congestion API. In some embodiments, the two-way communications might include communications from the VM to the applications including maximum hardware resources and current resources, and might further include communications from the applications to the VM including pre-congestion notifications and low-utilization notifications. According to some embodiments, a buffer utilization feedback may be provided between the VM and the applications, said buffer utilization feedback allowing the applications to control pushback mechanisms, said pushback mechanisms including mechanisms for pushing back on or decreasing hardware resource usage. | 2014-09-18 |
20140282529 | Virtualization Congestion Control Framework - Novel tools and techniques are provided for implementing a virtualization congestion control framework. In one aspect, a method might include a hypervisor assigning application resources of a virtual machine (“VM”), which operates on a host computing system, with maximum allowable settings to each software application to be executed on the VM. The hypervisor or an orchestrator might determine a running mode of the host computing system, and might execute the software application(s) using running mode attributes of the determined running mode. The hypervisor or the orchestrator might monitor application resource utilization, and, based on a determination that application resource utilization has changed, might modify allocation of application resources to each of the software application(s). In some cases, the hypervisor or the orchestrator might monitor for mass congestion indicators, and, based on a determination that a mass congestion indicator is present, might modify the running mode of the host computing system. | 2014-09-18 |
20140282530 | REFRESHING MEMORY TOPOLOGY IN VIRTUAL MACHINE OPERATING SYSTEMS - According to one aspect of the present disclosure, a method and technique for refreshing memory topology in virtual machine operating systems is disclosed. The method includes: responsive to receiving, by an operating system of a virtual machine, a notification of an affinity change relative to workload memory resources, polling a hypervisor for updated memory affinity data; determining, for each logical memory block of the workload memory resources, whether an affinity string for the respective logical memory block has changed; responsive to determining that the affinity string for the respective logical memory block has changed, identifying a data structure of the logical memory block maintained by the operating system; and updating affinity information in the data structure based on the change to the affinity string of the logical memory block. | 2014-09-18 |
20140282531 | SCALABLE POLICY MANAGEMENT IN AN EDGE VIRTUAL BRIDGING (EVB) ENVIRONMENT - Embodiments of the invention relate to scalable policy management in an edge virtual bridging (EVB) environment. One embodiment includes fetching information from a virtual station interface (VSI) database. A first table is generated with at least a portion of the information from the VSI database. A message is received including virtual machine (VM) information for a created VM. A second table is generated including at least a portion of the VM information. A VM identification (ID) is retrieved based on VM type from the first table. Rules associated with the retrieved VM ID are retrieved from the second table. The associated rules for the VM are applied. | 2014-09-18 |
20140282532 | SCALABLE POLICY ASSIGNMENT IN AN EDGE VIRTUAL BRIDGING (EVB) ENVIRONMENT - Embodiments of the invention relate to scalable policy assignment in an edge virtual bridging (EVB) environment. One embodiment includes fetching virtual machine (VM) information for one or more VMs from a virtual station interface (VSI) database (DB). The VM information includes a VSI type identification (ID) associated with each VM. A policy discriminator (PD) value is associated for each VSI type ID. A VSI DB table is generated with at least a portion of the VM information from the VSI DB and the PD for each VSI type ID. A message is received including virtual machine (VM) information for a created VM. One or more rules and bandwidth filter information associated with a VSI type ID are retrieved from the VSI DB table. The associated rules and filter information are applied based on the PD. | 2014-09-18 |
20140282533 | VIRTUAL COMPUTER SYSTEM - When changing the speed of the progression of a logical time in a paravirtualized OS, a hypervisor updates reference time and a reference counter value which is the value of a counter when the reference time is updated, to be used for time calculation by the paravirtualized OS, in accordance with the changed speed of the progression of time, to have new reference time and a new reference counter value. After that, the paravirtualized OS calculates the present time based on the new reference time and the new reference counter value. This can serve to maintain the continuity of time in the paravirtualized OS through before and after a change in the speed of the progression of time if made in the progression of time. | 2014-09-18 |
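The time calculation in 20140282533 reduces to a re-basing trick: present time is the reference time plus elapsed counter ticks scaled by the tick rate, and when the hypervisor changes the speed of time, it folds the current moment into a new reference pair so guest time stays continuous. Field names below are invented for the sketch.

```python
# Sketch of 20140282533's continuous-time scheme. present time =
# reference time + (counter ticks since reference) / tick rate; a speed
# change re-bases the reference so no discontinuity is visible.

def present_time(reference_time, reference_counter, counter_now, ticks_per_second):
    return reference_time + (counter_now - reference_counter) / ticks_per_second

def change_speed(reference_time, reference_counter, counter_now, old_rate):
    """When the hypervisor changes the speed of time, compute the present
    time at the old rate and make it the new reference point."""
    new_reference_time = present_time(reference_time, reference_counter,
                                      counter_now, old_rate)
    return new_reference_time, counter_now
```

After the re-basing, the paravirtualized OS computes time from the new reference at the new rate, so the clock never jumps at the moment of the change.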
20140282534 | VIRTUAL ENVIRONMENT HAVING HARVARD ARCHITECTURE - Methods, systems, and apparatus, including computer programs encoded on computer storage media, relating to software execution. One of the methods includes executing, on a computer including a single memory for storing data and instructions, a virtual environment including a data memory and an instruction memory, the instruction memory configured to be unreadable by instructions stored in the instruction memory; receiving, at the virtual environment, a software module comprising multiple instructions; and performing validation of the software module including: identifying, in the software module one or more calls to the single memory; and verifying that the one or more calls to the single memory are in the data memory. | 2014-09-18 |
20140282535 | Unknown - The present invention describes a distributed operating system that allows any local operating system to run more than one cloud-hosted virtual machine. The described system uses three different server clusters: one for storage, one for general processing, and another for image processing. The processed image is sent to the user over the network; all the user needs is a screen to display the final image and an input device such as a touch screen or a mouse and keyboard. | 2014-09-18 |
20140282536 | METHOD, SYSTEM AND COMPUTER READABLE MEDIUM FOR PROVISIONING CLOUD RESOURCES - A non-transitory computer-readable storage medium has tangibly embodied thereon and accessible therefrom instructions interpretable by at least one data processing device. The instructions are configured for causing the at least one data processing device to perform a method for provisioning cloud resources. The method comprises creating an instantiation of a cloud service resource; associating the cloud service resource with each one of a virtual data center, a cloud resource application, a cloud resource application environment, and a cloud resource architectural layer; and provisioning the cloud service resource with at least one instance of a virtual machine. | 2014-09-18 |
20140282537 | VIRTUAL MACHINE IMAGE DISK USAGE - The invention relates to a method for managing virtual machine image disk usage comprising a disk image emulator for a virtual machine provided by a hypervisor, comprising the steps of providing at least a first disk image comprising a sequence of data blocks for accumulating write operations to the first disk image, providing at least a second disk image comprising a sequence of data blocks for permanently storing disk image data, and providing a disk cleaning process for transferring disk image data from the first disk image to the second disk image and deleting unused data blocks in the first and/or the second disk image. | 2014-09-18 |
20140282538 | MINIMIZING SCSI LIMITATIONS FOR VIRTUAL MACHINES - Disclosed herein are systems, methods, and software for minimizing Small Computer System Interface (SCSI) limitations on virtual machines. In one example, a method of operating a volume combining system to combine volumes for a virtual machine includes identifying two or more volumes to be attached to the virtual machine. The method further provides combining the two or more volumes into a single volume, and attaching the single volume to the virtual machine. | 2014-09-18 |
20140282539 | WRAPPED NESTED VIRTUALIZATION - A number of embodiments can include a Layer 0 (L0) VMM configured to provide a first number of services and a Layer 1 (L1) virtual machine (VM) that is running on the L0 VMM. A number of embodiments can also include a L1 VMM that is running on the L1 VM. A number of embodiments can include configuring the L1 VMM to provide a second number of services to a target VM, second number of services being different than the first number of services. A number of embodiments can also include configuring the target VM to execute a user application. | 2014-09-18 |
20140282540 | PERFORMANT HOST SELECTION FOR VIRTUALIZATION CENTERS - A host for a virtual machine is selected by first electronically receiving (i) a virtual-machine allocation request for resources in a cluster of servers upon which a plurality of virtual machines are executing and (ii) performance data related to the execution of the plurality of virtual machines. The effect of executing a new virtual machine associated with the request on each server is simulated using the gathered performance data, and a server is selected based on a result of the simulation; the new virtual machine is caused to execute on the selected server. | 2014-09-18 |
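The selection loop in 20140282540 can be sketched as: simulate placing the new VM on each server using the gathered performance data, then pick the server with the lowest projected utilization. The additive load model below is an assumption for the example, not the patent's simulation.

```python
# Sketch of the host-selection loop in 20140282540: project each
# server's utilization with the new VM added, discard overcommitted
# candidates, and pick the least-loaded survivor. Model is illustrative.

def select_host(servers, new_vm_load):
    """servers maps name -> {"capacity": float, "load": float};
    returns the best host name, or None if no host can fit the VM."""
    best, best_util = None, None
    for name, s in servers.items():
        projected = (s["load"] + new_vm_load) / s["capacity"]
        if projected <= 1.0 and (best_util is None or projected < best_util):
            best, best_util = name, projected
    return best
```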
20140282541 | FEEDBACK SYSTEM FOR OPTIMIZING THE ALLOCATION OF RESOURCES IN A DATA CENTER - To improve resource utilization and reduce the virtual machine sprawl in a data center, resource utilization is predicted based on previously measured utilizations, and then, using the predicted utilizations, optimizing the allocation of the computing resources among the virtual machines in the data center. In operation, measurements related to resource utilization by different virtual machines executing in a data center are collected at regular intervals. At each interval, an optimization system predicts virtual machine resource utilizations based on previously collected measurements and previously-generated virtual machine modelers. Based on the utilization predictions as well as the physical topology of the data center, the optimization system identifies different optimizations to the virtual machine topology for the next interval. | 2014-09-18 |
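The predict-then-optimize cycle in 20140282541 can be illustrated with a toy predictor: at each interval, estimate each VM's next utilization from previously collected measurements, then split capacity proportionally. A simple moving average stands in here for the patent's "virtual machine modelers"; the window size is an invented example value.

```python
# Toy version of 20140282541's feedback loop: predict next-interval
# utilization per VM from measurement history, then allocate data-center
# capacity in proportion to the predictions. Model is illustrative.

def predict_utilization(history, window=3):
    """Moving-average prediction over the most recent measurements."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def plan_allocation(vm_histories, capacity):
    """Return each VM's predicted share of capacity for the next interval."""
    predictions = {vm: predict_utilization(h) for vm, h in vm_histories.items()}
    total = sum(predictions.values())
    return {vm: capacity * p / total for vm, p in predictions.items()}
```

Re-running this at every measurement interval is what makes it a feedback system: each new measurement shifts the next interval's allocation.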
20140282542 | Hypervisor Storage Intercept Method - Two levels of address masquerading are employed to make a virtual appliance a transparent gateway between a hypervisor and a storage controller. This approach allows a virtual appliance to be inserted or removed from the IP storage path of a hypervisor without disrupting communications. One embodiment of the invention enables a virtual appliance to intercept, manipulate, reprioritize, or otherwise affect IP (Internet Protocol) storage protocols sent or received between a hypervisor and storage controller(s). | 2014-09-18 |
20140282543 | SECURE ZONE ON A VIRTUAL MACHINE FOR DIGITAL COMMUNICATIONS - An apparatus implementing a secure zone on one or more virtual machines may be provided. In one aspect, the apparatus may comprise a screen and a computer processor. The computer processor may be configured to initialize a hypervisor, establish a first virtual machine under the control of the hypervisor and execute code for a secure zone thereon, and establish a second virtual machine under the control of the hypervisor and execute code for a non-secure zone thereon. The code for the secure zone may be configured to initiate executing a task, and to assume control over an output to the screen while the apparatus is operating in a secure mode and to transfer control over the output to the non-secure zone while the apparatus is operating in a non-secure mode. The hypervisor may be configured to grant requests from the secure zone to assume and transfer control over the output. | 2014-09-18 |
20140282544 | Apparatus, Method, And System To Dynamically Deploy Wireless Infrastructure - "Cognitive radio you share, trust and access locally" (CRYSTAL) is a virtualized cognitive access point that may provide for combining multiple wireless access applications on a single hardware platform. Radio technologies such as LTE, WiMax, GSM, and the like can be supported. CRYSTAL platforms can be aggregated and managed as a cloud, which provides a model for access point sharing, control, and management. CRYSTAL may be used for scenarios such as neighborhood spectrum management. CRYSTAL security features allow for home/residential as well as private infrastructure implementations. | 2014-09-18 |
20140282545 | SYSTEM AND METHOD FOR GENERIC PRODUCT WIRING IN A VIRTUAL ASSEMBLY BUILDER ENVIRONMENT - Described herein is a system and method for generic product wiring in a cloud environment. In accordance with an embodiment, a virtual assembly builder can be used to virtualize installed components in a reference environment, and then deploy those components into another destination environment. A user can capture the configuration and binaries of software components into software appliance artifacts, which can be grouped and their relationships defined as software assembly artifacts. In accordance with an embodiment, a generic product introspector plugin allows users to specify at introspection, during creation of a virtual assembly, one or more metadata properties to be exposed for editing and configuration by scripts, during a subsequent rehydration of the virtual assembly. The properties exposed for editing and configuration by scripts can be used during instantiation of an instance of the assembly to define one or more inputs and outputs for the instance. | 2014-09-18 |
20140282546 | METHODS, SYSTEMS AND APPARATUS FOR SUPPORTING WIDE AND EFFICIENT FRONT-END OPERATION WITH GUEST-ARCHITECTURE EMULATION - Methods for supporting wide and efficient front-end operation with guest architecture emulation are disclosed. As a part of a method for supporting wide and efficient front-end operation, upon receiving a request to fetch a first far taken branch instruction, a cache line that includes the first far taken branch instruction, a next cache line, and a cache line located at the target of the first far taken branch instruction are read. Based on information that is accessed from a data table, the cache line and either the next cache line or the cache line located at the target are fetched in a single cycle. | 2014-09-18 |
20140282547 | EXTENDING FUNCTIONALITY OF LEGACY SERVICES IN COMPUTING SYSTEM ENVIRONMENT - Methods and apparatus involve extending functionality of legacy services. A legacy application has functionality designed for use on an original computing device. In a modern environment, virtual machines (VMs) operate as independent guests on processors and memory by way of scheduling control from a virtualization layer (e.g., hypervisor). At least one VM is provisioned to modify standard entry points of the original legacy application for new accessing of various system functions of the hardware platform. Representative functions include network access, processors, and storage. Policy decision points variously located are further employed to ensure compliance with computing policies. Multiple platforms and computing clouds are contemplated as are VMs in support roles and dedicated software appliances. In this manner, continued use of legacy services in modern situations allows participation in more capable environments and application capabilities heretofore unimagined. Other embodiments contemplate computing systems and computer program products, to name a few. | 2014-09-18 |
20140282548 | SYSTEM AND METHOD TO RECONFIGURE A VIRTUAL MACHINE IMAGE SUITABLE FOR CLOUD DEPLOYMENT - A system and method for reconfiguring a virtual server image suitable for cloud deployment. In accordance with an embodiment, the system comprises providing a virtual server image, which can be executed on one or a plurality of hypervisors, and which contains a bootable part of a virtual machine, a non-bootable part of the virtual machine, a software application code for a software application, and a software application data for the software application. Information in a virtual server image patch can be used to reconfigure the contents of the virtual server image from its original content to a reconfigured content, to create a reconfigured virtual server image. In a particular embodiment, the virtual machine can be a Java Virtual Machine. | 2014-09-18 |
20140282549 | SERVICE VIRTUAL MACHINE - Technology is disclosed for processing in a computer program a request received by a service virtual machine (SVM). The technology can receive a request in either a first form or a second form, wherein the first form includes a target textual identifier, a reply-to textual identifier, and a parameter, and the second form includes a target textual identifier and a parameter, but not a reply-to textual identifier; identify, based on the received target textual identifier, a procedure; invoke the identified procedure and provide a value of the received parameter to the invoked procedure; in an event the received request is in the first form: receive a result from the invoked procedure; form a reply-to request in the second form, the second form including as a target textual identifier the reply-to textual identifier in the received request, and as a parameter the result received from the invoked procedure, further wherein the second form does not include a reply-to textual identifier; and send, to the SVM, the formed reply-to request. | 2014-09-18 |
20140282550 | Meter Reading Data Validation - A meter data management (MDM) system processes imported blocks of utility data collected from a plurality of utility meters, sensors, and/or control devices by using independent parallel pipelines associated with differing processing requirements of the data blocks. The MDM system determines processing requirements for each of the imported data blocks, selects one of the pipelines that matches the determined processing requirements for each of the imported data blocks, and directs the data blocks to the selected one of the pipelines for processing. The pipelines may include a validation pipeline for validation processing, an estimation pipeline for estimation processing and a work item pipeline for work item processing. | 2014-09-18 |
20140282551 | NETWORK VIRTUALIZATION VIA I/O INTERFACE - Network virtualization can be provided via network I/O interfaces, which may be partially or fully aware of the virtualization. Network virtualization can be reflected in the use of a first header and an additional header(s) for a data frame. A partially-aware transmit example can gather together data frame components, including its additional header(s), via a work queue entry. A fully-aware transmit example can refer to a transmit-side table to gather its additional header(s) and can track the state of its additional header(s) stored in a cache. A partially-aware receive example can handle an additional header(s), e.g., by writing it to host-memory. A fully-aware receive example can determine values from multiple headers (including its additional header(s)) to further determine where to write a data payload to host-memory. The examples can relieve a host's hypervisor from performing all the network virtualization processing. The fully-aware examples can incorporate IOV techniques. | 2014-09-18 |
20140282552 | SOFTWARE INTERFACE FOR A SPECIALIZED HARDWARE DEVICE - Embodiments of the disclosure include methods, systems and computer program products for performing a data manipulation function. The method includes receiving, by a processor, a request from an application to perform the data manipulation function and based on determining that a specialized hardware device configured to perform the data manipulation function is available, the method includes determining if executing the request on the specialized hardware device is viable. Based on determining that the request is viable to execute on the specialized hardware device, the method includes executing the request on the specialized hardware device. | 2014-09-18 |
20140282553 | META-APPLICATION MANAGEMENT IN A MULTITASKING ENVIRONMENT - Techniques are disclosed to identify concurrently used applications based on application state. Upon determining that usage of a plurality of applications, including a first state of a first application of the plurality of applications, satisfies a criterion for identifying concurrently used applications, the plurality of applications is designated as a first meta-application having a uniquely identifiable set of concurrently used applications. The first meta-application has an associated criterion for launching the first meta-application. Upon determining that the criterion for launching the first meta-application is satisfied, at least one of the plurality of applications is programmatically invoked. | 2014-09-18 |
20140282554 | COMMUNICATION APPARATUS AND COMMUNICATION METHOD - In a communication apparatus, a communication processor rebuilds, with switching of communication systems, a communication bearer to perform communication. An application processor outputs, when background communication occurs or a display unit is shifted from an off state to an on state while notification from the communication processor is stopped, a request signal to the communication processor. The application processor starts the background communication based on information of a latest communication bearer output from the communication processor in response to the request signal. | 2014-09-18 |
20140282555 | Ensuring Determinism During Programmatic Replay in a Virtual Machine - Aspects of an application program's execution which might be subject to non-determinism are performed in a deterministic manner while the application program's execution is being recorded in a virtual machine environment so that the application program's behavior, when played back in that virtual machine environment, will duplicate the behavior that the application program exhibited when originally executed and recorded. Techniques disclosed herein take advantage of the recognition that only minimal data needs to be recorded in relation to the execution of deterministic operations, which actually can be repeated “verbatim” during replay, and that more highly detailed data should be recorded only in relation to non-deterministic operations, so that those non-deterministic operations can be deterministically simulated (rather than attempting to re-execute those operations under circumstances where the outcome of the re-execution might differ) based on the detailed data during replay. | 2014-09-18 |
20140282556 | METHODS AND SYSTEMS FOR BATCH PROCESSING IN AN ON-DEMAND SERVICE ENVIRONMENT - In accordance with embodiments disclosed herein, there are provided mechanisms and methods for batch processing in an on-demand service environment. For example, in one embodiment, mechanisms include receiving a processing request for a multi-tenant database, in which the processing request specifies processing logic and a processing target group within the multi-tenant database. Such an embodiment further includes dividing or chunking the processing target group into a plurality of processing target sub-groups, queuing the processing request with a batch processing queue for the multi-tenant database among a plurality of previously queued processing requests, and releasing each of the plurality of processing target sub-groups for processing in the multi-tenant database via the processing logic at one or more times specified by the batch processing queue. | 2014-09-18 |
20140282557 | Responding To A Timeout Of A Message In A Parallel Computer - Methods, apparatuses, and computer program products for responding to a timeout of a message in a parallel computer are provided. The parallel computer includes a plurality of compute nodes operatively coupled for data communications over one or more data communications networks. Each compute node includes one or more tasks. Embodiments include a first task of a first node sending a message to a second task on a second node. Embodiments also include the first task sending to the second node a command via a parallel operating environment (POE) in response to a timeout of the message. The command instructs the second node to perform a timeout motivated operation. | 2014-09-18 |
20140282558 | SERIALIZING WRAPPING TRACE BUFFER VIA A COMPARE-AND-SWAP INSTRUCTION - Embodiments of the disclosure serialize wrapping of a circularly wrapping trace buffer via a compare-and-swap (CS) instruction by a method including executing a CS loop to advance to a location in the buffer indicated by a next free pointer. The method also includes incrementing a master wrap sequence number each time the next free pointer returns to a top of the buffer and executing another CS loop to increment a wrap number stored in a trace block corresponding to the location indicated by the next free pointer. Based upon determining that the wrap number stored in the trace block is one less than or equal to the master wrap sequence number, the method includes reserving space in a buffer associated with the trace block and storing the wrap number stored in the trace block as an old wrap number and incrementing a use-count of the trace block. | 2014-09-18 |
20140282559 | COMPUTING SYSTEM WITH TASK TRANSFER MECHANISM AND METHOD OF OPERATION THEREOF - A computing system includes: a status module configured to determine a process profile for capturing a pause point in processing a task; a content module, coupled to the status module, configured to identify a process content for capturing the pause point; an upload module, coupled to the content module, configured to store the process profile and the process content; and a trigger synthesis module, coupled to the upload module, configured to generate a resumption-trigger with a control unit when storing the process profile and the process content for resuming the task from the pause point and for displaying on a device. | 2014-09-18 |
20140282560 | Mapping Network Applications to a Hybrid Programmable Many-Core Device - A hybrid programmable logic is described that performs packet processing functions on received data packets using programmable logic elements, and processors interleaved with the programmable logic elements. The header data may be scheduled for distribution to processing threads associated with the processors by the programmable logic elements. The processors may perform packet processing functions on the header data using both the processing threads and hardware acceleration functions provided by the programmable logic elements. | 2014-09-18 |
20140282561 | COMPUTER SYSTEMS AND METHODS WITH RESOURCE TRANSFER HINT INSTRUCTION - A processing system includes a processor configured to execute a plurality of instructions corresponding to a task, wherein the plurality of instructions comprises a resource transfer instruction to indicate a transfer of processing operations of the task from the processor to a different resource and a hint instruction which precedes the resource transfer instruction by a set of instructions within the plurality of instructions. A processor task scheduler is configured to schedule tasks to the processor, wherein, in response to execution of the hint instruction of the task, the processor task scheduler finalizes selection of a next task and loads a context of the selected next task into a background register file. The loading occurs concurrently with execution of the set of instructions between the hint instruction and resource transfer instruction, and, after loading is completed, the processor switches to the selected task in response to the resource transfer instruction. | 2014-09-18 |
20140282562 | FAST AND SCALABLE CONCURRENT QUEUING SYSTEM - This disclosure is directed to a fast and scalable concurrent queuing system. A device may comprise, for example, at least a memory module and a processing module. The memory module may be to store a queue comprising at least a head and a tail. The processing module may be to execute at least one thread desiring to enqueue at least one new node to the queue, enqueue the at least one new node to the queue, a first state being observed based on information in the tail identifying a predecessor node when the at least one new node is enqueued, observe a second state based on the predecessor node, determine if the predecessor node has changed based on comparing the first state to the second state, and set ordering in the queue based on the determination. | 2014-09-18 |
20140282563 | DEPLOYING PARALLEL DATA INTEGRATION APPLICATIONS TO DISTRIBUTED COMPUTING ENVIRONMENTS - System, method, and computer program product to process parallel computing tasks on a distributed computing system, by computing an execution plan for a parallel computing job to be executed on the distributed computing system, the distributed computing system comprising a plurality of compute nodes, generating, based on the execution plan, an ordered set of tasks, the ordered set of tasks comprising: (i) configuration tasks, and (ii) execution tasks for executing the parallel computing job on the distributed computing system, and launching a distributed computing application to assign the tasks of the ordered set of tasks to the plurality of compute nodes to execute the parallel computing job on the distributed computing system. | 2014-09-18 |
20140282564 | THREAD-SUSPENDING EXECUTION BARRIER - An energy-efficient execution barrier for parallel processing is provided. The execution barrier associates a thread-execution bit with each hardware-supported thread. The energy-efficient execution barrier utilizes a per-processor or per-chip bit vector register, having, for example, one bit per possible thread. A bit enables or disables the execution of its corresponding thread. A process starts by forking threads and enabling them in the bit vector register. When a thread arrives at the barrier/rendezvous, the thread disables its own bit and therefore suspends thread execution. When a distinguished thread arrives at the barrier, it waits (e.g., spinlocks) until all the threads needed for the rendezvous are disabled. The distinguished thread (or an automatic thread re-enable mechanism) then atomically sets all thread bits in the bit vector register to enabled, and the threads perform any appropriate sync operations and continue. | 2014-09-18 |
20140282565 | Processor Scheduling With Thread Performance Estimation On Core Of Different Type - A processor is described having an out-of-order core to execute a first thread and a non-out-of-order core to execute a second thread. The processor also includes statistics collection circuitry to support calculation of the following: the first thread's performance on the out-of-order core; an estimate of the first thread's performance on the non-out-of-order core; the second thread's performance on the non-out-of-order core; an estimate of the second thread's performance on the out-of-order core. | 2014-09-18 |
20140282566 | SYSTEM AND METHOD FOR HARDWARE SCHEDULING OF INDEXED BARRIERS - A method and a system are provided for hardware scheduling of indexed barrier instructions. Execution of a plurality of threads to process instructions of a program that includes a barrier instruction is initiated and when each thread reaches the barrier instruction, the thread pauses execution of the instructions. A first sub-group of threads in the plurality of threads is associated with a first sub-barrier index and a second sub-group of threads in the plurality of threads is associated with a second sub-barrier index. When the barrier instruction can be scheduled for execution, threads in the first sub-group are executed serially and threads in the second sub-group are executed serially and at least one thread in the first sub-group is executed in parallel with at least one thread in the second sub-group. | 2014-09-18 |
20140282567 | TASK SCHEDULING BASED ON USER INTERACTION - Provided herein are systems, methods, and software for implementing information management applications. In an implementation, at least a portion of an information management application is embodied in program instructions that include various task modules and a scheduler module. In some implementations the program instructions are written in accordance with a single threaded programming language, such as JavaScript or any other suitable single threaded language. When executed, each task module returns control to the scheduler module upon completing. The scheduler module identifies to which of the plurality of task modules to grant control based at least in part on a relevance of each task module to a user interaction. | 2014-09-18 |
20140282568 | Dynamic Library Replacement - Provided are techniques for an OS to be modified on a running system such that running programs, including system services, do not have to be stopped and restarted for the modification to take effect. The techniques include detecting, by a processing thread, when the processing thread has entered a shared library; in response to the detecting, setting a thread flag corresponding to the thread in an operating system (OS); detecting an OS flag, set by the OS, indicating that the OS is updating the shared library; in response to detecting the OS flag, suspending processing by the processing thread and transferring control from the thread to the OS; resuming processing by the processing thread in response to detecting that the OS has completed the updating; and executing the shared library in response to the resuming. | 2014-09-18 |
20140282569 | INFORMATION PROCESSING DEVICE, NETWORK SYSTEM, PROCESSING EXECUTION METHOD, AND PROCESSING EXECUTION COMPUTER PROGRAM PRODUCT - An information processing device includes: a reception unit that receives a workflow definition specifying processing; a rule acquisition unit that acquires, regarding the processing, a workflow rule capable of setting therein a parameter indicating which processing is to be executed; a setting unit that sets the parameter of the workflow rule based on the workflow definition; and an execution control unit that controls execution of the processing in accordance with the workflow rule in which the parameter is set. | 2014-09-18 |
20140282570 | DYNAMIC CONSTRUCTION AND MANAGEMENT OF TASK PIPELINES - A system and method are disclosed for managing the execution of tasks. Each task in a first set of tasks included in a pipeline is queued for parallel execution. The execution of the tasks is monitored by a dispatching engine. When a particular task that specifies a next set of tasks in the pipeline to be executed has completed, the dispatching engine determines whether the next set of tasks can be executed before the remaining tasks in the first set of tasks have completed. When the next set of tasks can be executed before the remaining tasks have completed, the next set of tasks is queued for parallel execution. When the next set of tasks cannot be executed before the remaining tasks have completed, the next set of tasks is queued for parallel execution only after the remaining tasks have completed. | 2014-09-18 |
20140282571 | MANAGING WORKFLOW APPROVAL - A method, computer program product, and system is described. A target completion date for approval of a content item is identified. One or more approvers associated with a sequence of approval for the content item are identified. A recommended completion date for the content item is determined based upon, at least in part, historical workflow data. Whether timely completion of the approval of the content item is likely is determined based upon, at least in part, comparing the target completion date with the recommended completion date. | 2014-09-18 |
20140282572 | TASK SCHEDULING WITH PRECEDENCE RELATIONSHIPS IN MULTICORE SYSTEMS - A method for assigning tasks comprises receiving a set of tasks, modifying a deadline for each task based on execution ordering relationship of the tasks, ordering the tasks in increasing order based on the modified deadlines for the tasks, partitioning the ordered tasks using one of non-preemptive scheduling and preemptive scheduling based on a type of multicore processing environment, and assigning the partitioned tasks to one or more cores of a multicore electronic device based on results of the partitioning. | 2014-09-18 |
20140282573 | RESOLVING DEPLOYMENT CONFLICTS IN HETEROGENEOUS ENVIRONMENTS - Techniques are disclosed for managing deployment conflicts between applications executing in one or more processing environments. A first application is executed in a first processing environment and responsive to a request to execute the first application. During execution of the first application, a determination is made to redeploy the first application for execution partially in time on a second processing environment providing a higher capability than the first processing environment in terms of at least a first resource type. A deployment conflict is resolved between the first application and at least a second application. | 2014-09-18 |
20140282574 | System and Method for Implementing Constrained Data-Driven Parallelism - Systems and methods for implementing constrained data-driven parallelism may provide programmers with mechanisms for controlling the execution order and/or interleaving of tasks spawned during execution. For example, a programmer may define a task group that includes a single task, and the single task may define a direct or indirect trigger that causes another task to be spawned (e.g., in response to a modification of data specified in the trigger). Tasks spawned by a given task may be added to the same task group as the given task. A deferred keyword may control whether a spawned task is to be executed in the current execution phase or its execution is to be deferred to a subsequent execution phase for the task group. Execution of all tasks executing in the current execution phase may need to be complete before the execution of tasks in the next phase can begin. | 2014-09-18 |
20140282575 | METHOD AND APPARATUS TO AVOID DEADLOCK DURING INSTRUCTION SCHEDULING USING DYNAMIC PORT REMAPPING - A method for performing dynamic port remapping during instruction scheduling in an out of order microprocessor is disclosed. The method comprises selecting and dispatching a plurality of instructions from a plurality of select ports in a scheduler module in a first clock cycle. Next, it comprises determining if a first physical register file unit has capacity to support instructions dispatched in the first clock cycle. Further, it comprises supplying a response back to logic circuitry between the plurality of select ports and a plurality of execution ports, wherein the logic circuitry is operable to re-map select ports in the scheduler module to execution ports based on the response. Finally, responsive to a determination that the first physical register file unit is full, the method comprises re-mapping at least one select port connecting with an execution unit in the first physical register file unit to a second physical register file unit. | 2014-09-18 |
20140282576 | EVENT-DRIVEN COMPUTATION - An apparatus for high-performance parallel computation, includes plural computation nodes, each having dispatch units, memories in communication with the dispatch units, and processors, each of which is in communication with the memories and the dispatch units. Each dispatch unit is configured to recognize, as ready for execution, one or more computational tasks that have become ready for execution as a result of counted remote writes into the memories. Each of the dispatch units is configured to receive a dispatch request from a processor and to determine whether there exist one or more computational tasks that are both ready and available for execution by the processor. | 2014-09-18 |
20140282577 | DURABLE PROGRAM EXECUTION - Aspects of the subject matter described herein relate to durable program execution. In aspects, a mechanism is described that allows a program to be removed from memory when the program is waiting for an asynchronous operation to complete. When a response for the asynchronous operation is received, completion data is stored in a history, the program is re-executed and the completion data in the history is used to complete the asynchronous operation. The above actions may be repeated until no more asynchronous operations in the history are pending completion. | 2014-09-18 |
20140282578 | LOCALITY AWARE WORK STEALING RUNTIME SCHEDULER - In one embodiment a processor comprises logic to determine a center of mass of a plurality of data dependencies associated with a task and assign the task to a processor in the system which is closest to the center of mass. Other embodiments may be described. | 2014-09-18 |
20140282579 | Processing Engine Implementing Job Arbitration with Ordering Status - A processing engine implementing job arbitration with ordering status is disclosed. A method of the disclosure includes receiving, by a job assigner communicably coupled to a plurality of processors, availability status from a plurality of job rings, availability status from the plurality of processors, and job entry completion status from an order manager, identifying, based on the received job entry completion status, a set of job rings from the plurality of job rings that do not exceed threshold conditions maintained by the job assigner, selecting, from the identified set of job rings, a job ring from which to pull a job entry for assignment, wherein the selecting is based on the received availability status of the plurality of job rings, and selecting, based on the received availability status of the plurality of processors, a processor to receive the assignment of the job entry for processing. | 2014-09-18 |
20140282580 | METHOD AND APPARATUS TO SAVE AND RESTORE SYSTEM MEMORY MANAGEMENT UNIT (MMU) CONTEXTS - A wireless mobile device includes a graphic processing unit (GPU) that has a system memory management unit (MMU) for saving and restoring system MMU translation contexts. The system MMU is coupled to a memory and the GPU. The system MMU includes a set of hardware resources. The hardware resources may be context banks, with each of the context banks having a set of hardware registers. The system MMU also includes a hardware controller that is configured to restore a hardware resource associated with an access stream of content issued by an execution thread of the GPU. The associated hardware resource may be restored from the memory into a physical hardware resource when the hardware resource associated with the access stream of content is not stored within one of the hardware resources. | 2014-09-18 |
20140282581 | METHOD AND APPARATUS FOR PROVIDING A COMPONENT BLOCK ARCHITECTURE - A method, apparatus and computer program product are therefore provided in order to provide a component block architecture for allocation of resources in a data center environment. In this regard, the method, apparatus, and computer program product may identify a set of block attributes for a particular block of one or more applications, and compare the attributes to the available resources of a container. The component block may be allocated to the container based on whether the resources of the container are sufficient to meet the requirements of the component block. | 2014-09-18 |
20140282582 | DETECTING DEPLOYMENT CONFLICTS IN HETEROGENEOUS ENVIRONMENTS - Techniques are disclosed for managing deployment conflicts between applications executing in one or more processing environments. A first application is executed in a first processing environment and responsive to a request to execute the first application. During execution of the first application, a determination is made to redeploy the first application for execution partially in time on a second processing environment providing a higher capability than the first processing environment in terms of at least a first resource type. A deployment conflict is detected between the first application and at least a second application. | 2014-09-18 |
20140282583 | DYNAMIC MEMORY MANAGEMENT WITH THREAD LOCAL STORAGE USAGE - Methods and arrangements for dynamic memory management. Data are accepted for thread local storage, and memory usage is monitored in thread local storage. A memory block is allocated to thread local storage for storing accepted data, based on the monitored memory usage. Other variants and embodiments are broadly contemplated herein. | 2014-09-18 |
20140282584 | Allocating Accelerators to Threads in a High Performance Computing System - A method of distributing threads among accelerators in a high performance computing system receives a request to assign an accelerator in the computing system to a thread. The request includes a mode indicative of location and exclusivity of the accelerator for use by the thread. The method selects the accelerator according to a processor assigned to the thread. The method also assigns the accelerator to the thread with the exclusivity specified in the request. | 2014-09-18 |
20140282585 | Organizing File Events by Their Hierarchical Paths for Multi-Threaded Synch and Parallel Access System, Apparatus, and Method of Operation - A cloud file event server transmits file events necessary to synchronize a file system of a file share client. A tree queue director circuit receives file events and stores each one into a tree data structure which represents the hierarchical paths of files within the file share client. An event normalization circuit sorts the file events stored at each node into sequential order and moots file events which do not have to be performed because a later file event makes them inconsequential. A thread scheduling circuit assigns a resource to perform file events at a first node in a hierarchical path before assigning one or more resources to a second node which is a child of the first node until interrupted by the tree queue director circuit or until all file events in the tree data structure have been performed. | 2014-09-18 |
20140282586 | PURPOSEFUL COMPUTING - A system, method, and computer-readable storage medium configured to facilitate user purpose in a computing architecture. | 2014-09-18 |
20140282587 | MULTI-CORE BINARY TRANSLATION TASK PROCESSING - Embodiments of techniques and systems associated with binary translation (BT) in computing systems are disclosed. In some embodiments, a BT task to be processed may be identified. The BT task may be associated with a set of code and may be identified during execution of the set of code on a first processing core of the computing device. The BT task may be queued in a queue accessible to a second processing core of the computing device, the second processing core being different from the first processing core. In response to a determination that the second processing core is in an idle state or has received an instruction through an operating system to enter an idle state, at least some of the BT task may be processed using the second processing core. Other embodiments may be described and/or claimed. | 2014-09-18 |
20140282588 | SYSTEM AND SCHEDULING METHOD - A system includes a CPU; an accelerator; a comparing unit that compares a first value that is based on a first processing time period elapsing until the CPU completes a first process and a second processing time period elapsing until the accelerator completes the first process, and a second value that is based on a state of use of a battery driving the CPU and the accelerator; and a selecting unit that selects any one among the CPU and the accelerator, based on a result of comparison by the comparing unit. | 2014-09-18 |
20140282589 | QUOTA-BASED ADAPTIVE RESOURCE BALANCING IN A SCALABLE HEAP ALLOCATOR FOR MULTITHREADED APPLICATIONS - One embodiment comprises a hierarchical heap allocator system. The system comprises a system-level allocator for monitoring run-time resource usage information for an application having multiple application threads. The system further comprises a process-level allocator for dynamically balancing resources between the application threads based on the run-time resource usage information. The system further comprises multiple thread-level allocators. Each thread-level allocator facilitates resource allocation and resource deallocation for a corresponding application thread. | 2014-09-18 |
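The process-level rebalancing pass in 20140282589 can be approximated as a quota redistribution proportional to observed usage. The floor reserve and the 10% divisor below are invented parameters for illustration, not values from the filing.

```python
class ThreadAllocator:
    """Thread-level allocator: tracks its quota and run-time usage."""
    def __init__(self, quota):
        self.quota = quota   # bytes this thread may allocate
        self.used = 0        # bytes currently in use (monitored at run time)

def rebalance(allocators, total_quota):
    """Process-level pass: redistribute quota in proportion to observed
    usage, keeping a small floor so idle threads can still allocate."""
    floor = total_quota // (10 * len(allocators))
    usage = sum(a.used for a in allocators)
    spare = total_quota - floor * len(allocators)
    for a in allocators:
        share = (a.used / usage) if usage else 1 / len(allocators)
        a.quota = floor + int(spare * share)
```

A system-level monitor would call `rebalance` periodically with fresh `used` figures, so busy threads grow their quotas while idle ones shrink toward the floor.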
20140282590 | Compute-Centric Object Stores and Methods Of Use - Systems and methods for providing a compute-centric object store. An exemplary method may include receiving a request to perform a compute operation on at least a portion of an object store from a first user, the request identifying parameters of the compute operation, assigning virtual operating system containers to the objects of the object store from a pool of virtual operating system containers. The virtual operating system containers may perform the compute operation on the objects according to the identified parameters of the request. The method may also include clearing the virtual operating system containers and returning the virtual operating system containers to the pool. | 2014-09-18 |
20140282591 | ADAPTIVE AUTOSCALING FOR VIRTUALIZED APPLICATIONS - Virtualized applications are autoscaled by receiving performance data in time-series format from a running virtualized application, computationally analyzing the performance data to determine a pattern therein, and extending the performance data to a time in the future based at least on the determined pattern. The extended performance data is analyzed to determine if resources allocated to the virtualized application are under-utilized or over-utilized, and a schedule for re-allocating resources to the virtualized application based at least in part on a result of the analysis of the extended performance data is created. | 2014-09-18 |
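The "extend the performance data and check utilization" loop of 20140282591 can be sketched with an ordinary least-squares trend line. A real autoscaler would fit richer patterns (seasonality, bursts); the linear fit and the 0.2/0.8 utilization thresholds here are simplifying assumptions.

```python
def linear_fit(samples):
    """Ordinary least-squares fit y = a + b*t over t = 0..n-1."""
    n = len(samples)
    t_mean = (n - 1) / 2
    y_mean = sum(samples) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(samples))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b = num / den
    return y_mean - b * t_mean, b

def plan_scaling(cpu_series, horizon, low=0.2, high=0.8):
    """Extend the utilization series `horizon` steps into the future and
    schedule a re-allocation if the projection leaves [low, high]."""
    a, b = linear_fit(cpu_series)
    projected = a + b * (len(cpu_series) - 1 + horizon)
    if projected > high:
        return "scale-up"
    if projected < low:
        return "scale-down"
    return "steady"
```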
20140282592 | METHOD FOR EXECUTING MULTITHREADED INSTRUCTIONS GROUPED INTO BLOCKS - A method for executing multithreaded instructions grouped into blocks. The method includes receiving an incoming instruction sequence using a global front end; grouping the instructions to form instruction blocks, wherein the instructions of the instruction blocks are interleaved with multiple threads; scheduling the instructions of the instruction block to execute in accordance with the multiple threads; and tracking execution of the multiple threads to enforce fairness in an execution pipeline. | 2014-09-18 |
20140282593 | Scheduling in a multicore architecture - The disclosure relates to scheduling threads in a multicore processor. Executable transactions may be scheduled using at least one distribution queue, which lists executable transactions in order of eligibility for execution, and a multilevel scheduler, which comprises a plurality of linked individual executable transaction schedulers. Each of these includes a scheduling algorithm for determining the most eligible executable transaction for execution. The most eligible executable transaction is outputted from the multilevel scheduler to the at least one distribution queue. | 2014-09-18 |
20140282594 | DISTRIBUTING PROCESSING OF ARRAY BLOCK TASKS - A technique includes distributing a plurality of tasks among a plurality of worker nodes to perform a processing operation on an array. Each task is associated with a set of at least one data block of the array, and an order of the tasks is defined by an array-based programming language. Distribution of the tasks includes, for at least one of the worker nodes, selectively reordering the order defined by the array-based programming language to regulate an amount of data transferred to the worker node. | 2014-09-18 |
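One plausible reading of the reordering in 20140282594 is to run tasks whose array blocks are already resident on a worker before tasks that would force a transfer. The dict-based task shape below is an illustrative assumption, not the patent's representation.

```python
def reorder_for_worker(tasks, resident_blocks):
    """Stable reorder of the language-defined task order: tasks whose array
    blocks are already resident on the worker run first, deferring tasks
    that would require transferring block data to the node."""
    local = [t for t in tasks if t["block"] in resident_blocks]
    remote = [t for t in tasks if t["block"] not in resident_blocks]
    return local + remote
```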
20140282595 | Systems and Methods for Implementing Work Stealing Using a Configurable Separation of Stealable and Non-Stealable Work Items - A system may perform work stealing using a dynamically configurable separation between stealable and non-stealable work items. The work items may be held in a double-ended queue (deque), and the value of a variable (index) may indicate the position of the last stealable work item or the first non-stealable work item in the deque. A thread may steal a work item only from the portion of another thread's deque that holds stealable items. The owner of a deque may add work items to the deque and may modify the number or percentage of stealable work items, the number or percentage of non-stealable work items, and/or the ratio between stealable and non-stealable work items in the deque during execution. For example, the owner may convert stealable work items to non-stealable work items, or vice versa, in response to changing conditions and/or according to various work-stealing policies. | 2014-09-18 |
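The configurable stealable/non-stealable split of 20140282595 can be sketched as a list plus a boundary index. A production work-stealing deque would use lock-free operations; the single lock, method names, and head/tail convention here are simplifications of my own.

```python
import threading

class SplitDeque:
    """Deque whose first `boundary` items are stealable; the rest are private
    to the owner. The owner pushes/pops at the tail; thieves take the head."""
    def __init__(self, stealable=0):
        self.items = []
        self.boundary = stealable      # index of the first non-stealable item
        self.lock = threading.Lock()

    def push(self, item):              # owner only
        with self.lock:
            self.items.append(item)

    def pop(self):                     # owner only, from the private tail
        with self.lock:
            if len(self.items) > self.boundary:
                return self.items.pop()
            return None

    def steal(self):                   # any thread, from the stealable head
        with self.lock:
            if self.items and self.boundary > 0:
                self.boundary -= 1
                return self.items.pop(0)
            return None

    def set_stealable(self, n):        # owner retunes the split at run time
        with self.lock:
            self.boundary = min(n, len(self.items))
```

`set_stealable` is the dynamically configurable separation: raising it converts private items to stealable ones, lowering it does the reverse, matching the policy changes the abstract describes.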
20140282596 | ACHIEVING CONTINUOUS AVAILABILITY FOR PLANNED WORKLOAD AND SITE SWITCHES WITH NO DATA LOSS - Embodiments of the disclosure are directed to methods, systems and computer program products for performing a planned workload switch. A method includes receiving a request to switch a site of an active workload and stopping one or more long running processes from submitting a new request to the active workload. The method also includes preventing a new network connection from accessing the active workload and processing one or more transactions in a queue of the active workload for a time period. Based on a determination that the queue of the active workload is not empty after the time period, the method includes aborting all remaining transactions in the queue of the active workload. The method further includes replicating all remaining committed units of work to a standby workload associated with the active workload. | 2014-09-18 |
20140282597 | Bottleneck Detector for Executing Applications - A bottleneck detector may analyze individual workloads processed by an application by logging times when the workload may be processed at different checkpoints in the application. For each checkpoint, a curve fitting algorithm may be applied, and the fitted curves may be compared between different checkpoints to identify bottlenecks or other poorly performing sections of the application. A real time implementation of a detection system may compare newly captured data points against historical curves to detect a shift in the curve, which may indicate a bottleneck. In some cases, the fitted curves from neighboring checkpoints may be compared to identify sections of the application that may be a bottleneck. An automated system may apply one set of checkpoints in an application, identify an area for further investigation, and apply a second set of checkpoints in the identified area. Such a system may recursively search for bottlenecks in an executing application. | 2014-09-18 |
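The checkpoint-timing comparison in 20140282597 can be approximated by logging each workload's timestamp at every checkpoint and flagging the segment with the worst mean transit time. The patent compares fitted curves; the mean used here is a deliberately simpler stand-in.

```python
from statistics import mean

def find_bottleneck(timestamps):
    """timestamps: {workload_id: [t_at_checkpoint_0, t1, ..., tN]}.
    Returns the (checkpoint, next_checkpoint) pair with the largest mean
    transit time across workloads, plus that cost -- a candidate bottleneck
    to instrument further with a second, finer set of checkpoints."""
    n = len(next(iter(timestamps.values())))
    worst, worst_cost = None, -1.0
    for i in range(n - 1):
        cost = mean(ts[i + 1] - ts[i] for ts in timestamps.values())
        if cost > worst_cost:
            worst, worst_cost = (i, i + 1), cost
    return worst, worst_cost
```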
20140282598 | METHOD AND DEVICE FOR PROCESSING A WINDOW TASK - A method and a device for processing a window task are provided. The method includes: creating a thread class that includes a first member variable indicating whether the task currently being processed has been cancelled and a first member function for starting a background thread; when a time-consuming task needs to be processed, creating a background thread object based on the thread class, initializing the first member variable in the background thread object to FALSE, and invoking the first member function in the background thread object to start the background thread; and, while the background thread is processing the time-consuming task, if a close instruction for the current window is received, setting the first member variable in the background thread object to TRUE to release the memory space occupied by the background thread object and closing the current window. | 2014-09-18 |
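The cancellable-background-thread pattern of 20140282598 is a classic, and a minimal Python analogue is easy to show. The doubling "work" and the `cancel` method name are illustrative; in the abstract the flag is a member variable set to TRUE on window close.

```python
import threading

class BackgroundTask(threading.Thread):
    """Thread class with a cancelled flag (the abstract's first member
    variable) and the inherited start() as its thread-launching function."""
    def __init__(self, work_items, results):
        super().__init__()
        self.cancelled = threading.Event()   # initialized to "FALSE"
        self.work_items = work_items
        self.results = results

    def run(self):
        for item in self.work_items:
            if self.cancelled.is_set():      # window closed: stop early
                return
            self.results.append(item * 2)    # stand-in for the real work

    def cancel(self):                        # called on the close instruction
        self.cancelled.set()
```

Checking the flag between work items lets the window close promptly without killing the thread mid-operation.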
20140282599 | COLLECTIVELY LOADING PROGRAMS IN A MULTIPLE PROGRAM MULTIPLE DATA ENVIRONMENT - Techniques are disclosed for loading programs efficiently in a parallel computing system. In one embodiment, nodes of the parallel computing system receive a load description file which indicates, for each program of a multiple program multiple data (MPMD) job, nodes which are to load the program. The nodes determine, using collective operations, a total number of programs to load and a number of programs to load in parallel. The nodes further generate a class route for each program to be loaded in parallel, where the class route generated for a particular program includes only those nodes on which the program needs to be loaded. For each class route, a node is selected using a collective operation to be a load leader which accesses a file system to load the program associated with a class route and broadcasts the program via the class route to other nodes which require the program. | 2014-09-18 |
20140282600 | EXECUTING ALGORITHMS IN PARALLEL - Among other things, a machine-based method comprises receiving an application specification comprising one or more algorithms. Each algorithm is not necessarily suitable for concurrent execution on multiple nodes in parallel. One or more different object classes are grouped into one or more groups, each being appropriate for executing the one or more algorithms of the application specification. The executing involves data that is available in objects of the object classes. A user is enabled to code an algorithm of the one or more algorithms for one group in a single threaded environment without regard to concurrent execution of the algorithm on multiple nodes in parallel. A copy of the coded algorithm is distributed to each of the multiple nodes, without needing additional coding. The coded algorithm is caused to be executed on each node in association with at least one instance of a group independently of and in parallel to executing the other copies of the coded algorithm on the other nodes. | 2014-09-18 |
20140282601 | METHOD FOR DEPENDENCY BROADCASTING THROUGH A BLOCK ORGANIZED SOURCE VIEW DATA STRUCTURE - A method for dependency broadcasting through a block organized source view data structure. The method includes receiving an incoming instruction sequence using a global front end; grouping the instructions to form instruction blocks; using a plurality of register templates to track instruction destinations and instruction sources by populating the register template with block numbers corresponding to the instruction blocks, wherein the block numbers corresponding to the instruction blocks indicate interdependencies among the blocks of instructions; populating a block organized source view data structure, wherein the source view data structure stores sources corresponding to the instruction blocks as recorded by the plurality of register templates; upon dispatch of one block of the instruction blocks, broadcasting a number belonging to the one block to a column of the source view data structure that relates to that block and marking the column accordingly; and updating the dependency information of remaining instruction blocks in accordance with the broadcast. | 2014-09-18 |
20140282602 | GENERIC WAIT SERVICE: PAUSING A BPEL PROCESS - A method of pausing a plurality of service-oriented application (SOA) instances may include receiving, from an instance of an SOA entering a pause state, an initiation message. The initiation message may include an exit criterion that identifies a business condition that must be satisfied before the instance of the SOA exits the pause state. The method may also include receiving a notification from an event producer, the notification comprising a status of a business event and determining whether the status of the business event satisfies the business condition of the exit criterion. The method may additionally include sending, in response to a determination that the status of the business event satisfies the business condition of the exit criterion, an indication to the instance of the SOA that the business condition has been satisfied such that the instance of the SOA can exit the pause state. | 2014-09-18 |
20140282603 | METHOD AND APPARATUS FOR DETECTING A COLLISION BETWEEN MULTIPLE THREADS OF EXECUTION FOR ACCESSING A MEMORY ARRAY - A method includes determining, for a first thread of execution, a first speculative decoded operands signal and determining, for a second thread of execution, a second speculative decoded operands signal. The method further includes determining, for the first thread of execution, a first constant and determining, for the second thread of execution, a second constant. The method further includes comparing the first speculative decoded operands signal to the second speculative decoded operands signal and using the first and second constants to detect a wordline collision for accessing the memory array. | 2014-09-18 |
20140282604 | QUALIFIED CHECKPOINTING OF DATA FLOWS IN A PROCESSING ENVIRONMENT - Techniques are disclosed for qualified checkpointing of a data flow model having data flow operators and links connecting the data flow operators. A link of the data flow model is selected based on a set of checkpoint criteria. A checkpoint is generated for the selected link. The checkpoint is selected from different checkpoint types. The generated checkpoint is assigned to the selected link. The data flow model, having at least one link with no assigned checkpoint, is executed. | 2014-09-18 |