Patent application number | Description | Published |
--- | --- | --- |
20080239963 | BYPASSING ROUTING STACKS USING MOBILE INTERNET PROTOCOL - Methods, systems and computer program products for load balancing using Mobile Internet Protocol (IP) Version 6 are provided. A request for a connection is received from a client at a routing stack. A Mobile IP Version 6 Binding Update message is transmitted from the routing stack to the client responsive to the received request. The Binding Update message identifies a selected target stack so as to allow the client to communicate directly with the target stack bypassing the routing stack. | 10-02-2008 |
20090170490 | BINDING CACHE SUPPORT IN A LOAD BALANCED SYSPLEX - Embodiments of the present invention provide a method, system and computer program product for Mobile IPv6 binding cache support for a load balanced sysplex. In one embodiment of the invention, a load balancing sysplex can be configured for mobile device binding cache support. The sysplex can include a distributor coupled to different targets in a load balancing arrangement, where each of the targets can support a correspondent node enabled to communicate with a mobile device. A master binding cache can be coupled to the distributor and a binding cache manager can be coupled to the distributor. Notably, the binding cache manager can perform return routability with the mobile device and can provide a corresponding entry in the master binding cache for use by a target supporting a correspondent node for the mobile device. In one aspect of the embodiment, a replica of the master binding cache can be provided in each of the targets for use by supported correspondent nodes in communicating with different mobile devices associated with binding cache entries in the replica. | 07-02-2009 |
20090193251 | SECURE REQUEST HANDLING USING A KERNEL LEVEL CACHE - The present invention discloses a system, method, apparatus, and computer usable product code for handling requests. The invention can include a kernel level cache, a request handling service, and a transport layer security service. The kernel level cache can store request handling data. The request handling service can handle secure requests at a transport layer of a kernel when request handling data is present in the kernel level cache. The transport layer security service can handle encryption/decryption operations for the secure requests and request responses at the transport layer. | 07-30-2009 |
20090271521 | METHOD AND SYSTEM FOR PROVIDING END-TO-END CONTENT-BASED LOAD BALANCING - Methods and systems for providing end-to-end content-based load balancing are described. A Transmission Control Protocol (TCP) connection is accepted from a client and a request is received from the client. The request is processed, a target stack is selected, and the TCP connection is transferred to the selected target stack such that the client and selected target stack maintain an end-to-end TCP connection. In an exemplary embodiment, the request can be processed in a TCP kernel. In another preferred embodiment, the TCP connection can include TCP data packets and the request can include request data packets. The TCP connection transfer can be performed by replaying the TCP data packets and the request data packets to the selected target stack. | 10-29-2009 |
20090271613 | METHOD AND SYSTEM FOR PROVIDING NON-PROXY TLS/SSL SUPPORT IN A CONTENT-BASED LOAD BALANCER - Methods and systems for providing non-proxy Secure Sockets Layer and Transport Layer Security (SSL/TLS) support in a content-based load balancer are described. A Transmission Control Protocol (TCP) connection is accepted from a client, and an SSL/TLS connection is established with the client such that random data used in key generation is created. A request is received from the client, and the request is decrypted. The request is processed, a target stack is selected, and the TCP connection, the SSL/TLS connection, and the random data are transferred to the selected target stack such that the client and selected target stack maintain an end-to-end TCP connection with a non-proxy SSL/TLS connection. | 10-29-2009 |
20110106974 | BYPASSING ROUTING STACKS USING MOBILE INTERNET PROTOCOL - Methods, systems and computer program products for load balancing using Mobile Internet Protocol (IP) Version 6 are provided. A request for a connection is received from a client at a routing stack. A Mobile IP Version 6 Binding Update message is transmitted from the routing stack to the client responsive to the received request. The Binding Update message identifies a selected target stack so as to allow the client to communicate directly with the target stack bypassing the routing stack. | 05-05-2011 |
20110219442 | Policy-Based Security Certificate Filtering - Policy filtering services are built into security processing of an execution environment for resolving how to handle a digital security certificate of a communicating entity without requiring a local copy of a root certificate that is associated with the entity through a certificate authority (“CA”) chain. Policy may be specified using a set of rules (or other policy format) indicating conditions for certificate filtering. This filtering is preferably invoked during handshaking, upon determining that a needed root CA certificate is not available. In one approach, the policy uses rules specifying conditions under which a certificate is permitted (i.e., treated as if it is validated) and other rules specifying conditions under which a certificate is blocked (i.e., treated as if it is invalid). Preferably, policy rules are evaluated and enforced in order of most-specific to least-specific. | 09-08-2011 |
20120201142 | Data Packet Interception System - A method and apparatus for managing data packets in a network data processing system. The data processing system monitors for the data packets on the network data processing system. The data processing system sends a response data packet to a source endpoint that sent a request data packet in response to detecting the request data packet in the data packets requesting a first identifier for a first device at a target endpoint in the network data processing system. The response data packet has a selected identifier for a selected device in the monitoring data processing system. The data processing system processes a set of data packets in response to detecting the set of data packets having the selected identifier. The data processing system sends the set of data packets to the target endpoint with the first identifier for the first device in place of the selected identifier. | 08-09-2012 |
20130191527 | DYNAMICALLY BUILDING A SET OF COMPUTE NODES TO HOST THE USER'S WORKLOAD - A method, system and computer program product for dynamically building a set of compute nodes to host a user's workload. An administrative server receives workload definitions that include the types of workloads that are to be run in a cloud group as well as a number of instances of each workload the cloud group should support. These workload definitions are used to determine the virtual machine demands that the cloud group will place on the cloud environment. The administrative server further receives the demand constraints, placement constraints and license enforcement policies. The administrative server identifies a set of compute nodes to host the user's workload based on the virtual machine demands, the demand constraints, the placement constraints and the license enforcement policies. In this manner, a set of compute nodes is dynamically built for consideration in forming a cloud group without requiring the user to have knowledge of the cloud's composition. | 07-25-2013 |
20130191543 | PERFORMING MAINTENANCE OPERATIONS ON CLOUD COMPUTING NODE WITHOUT REQUIRING TO STOP ALL VIRTUAL MACHINES IN THE NODE - A method, system and computer program product for performing maintenance operations on a cloud computing node. An administrative server receives an indication that a maintenance operation is to be performed on a cloud computing node. The administrative server identifies which virtual machine(s) on the cloud computing node will be affected by the maintenance operation. The administrative server relocates the virtual machine(s) to be affected by the maintenance operation to other suitable cloud computing node(s) prior to the maintenance operation being performed. The administrative server then performs the maintenance operation on the cloud computing node. The virtual machine(s) may be relocated back to the cloud computing node after the maintenance operation is completed, in response to a need to rebalance resources. In this manner, maintenance operations may be performed on a cloud computing node without requiring all the virtual machines in the node to be stopped. | 07-25-2013 |
20130204960 | ALLOCATION AND BALANCING OF STORAGE RESOURCES - A method and technique for allocation and balancing of storage resources includes: determining, for each of a plurality of storage controllers, an input/output (I/O) latency value based on an I/O latency associated with each storage volume controlled by a respective storage controller; determining network bandwidth utilization and network latency values corresponding to each storage controller; responsive to receiving a request to allocate a new storage volume, selecting a storage controller having a desired I/O latency value; determining whether the network bandwidth utilization and network latency values for the selected storage controller are below respective network bandwidth utilization and network latency value thresholds; and responsive to determining that the network bandwidth utilization and network latency values for the selected storage controller are below the respective thresholds, allocating the new storage volume to the selected storage controller. | 08-08-2013 |
20130205005 | ALLOCATION AND BALANCING OF STORAGE RESOURCES - A system and technique for allocating and balancing storage resources includes: a plurality of storage controllers each controlling one or more storage volumes, and a processor unit operable to execute a management application to: determine, for each controller, an input/output (I/O) latency value based on an I/O latency associated with each storage volume controlled by a respective controller; determine network bandwidth utilization and network latency values corresponding to each controller; responsive to a request to allocate a new storage volume, select a controller having a desired I/O latency value; determine whether the network bandwidth utilization and network latency values for the selected controller are below respective network bandwidth utilization and network latency value thresholds; and responsive to determining that the network bandwidth utilization and network latency values for the selected controller are below respective thresholds, allocate the new storage volume to the selected controller. | 08-08-2013 |
20130227131 | DYNAMICALLY BUILDING A SET OF COMPUTE NODES TO HOST THE USER'S WORKLOAD - A method, system and computer program product for dynamically building a set of compute nodes to host a user's workload. An administrative server receives workload definitions that include the types of workloads that are to be run in a cloud group as well as a number of instances of each workload the cloud group should support. These workload definitions are used to determine the virtual machine demands that the cloud group will place on the cloud environment. The administrative server further receives the demand constraints, placement constraints and license enforcement policies. The administrative server identifies a set of compute nodes to host the user's workload based on the virtual machine demands, the demand constraints, the placement constraints and the license enforcement policies. In this manner, a set of compute nodes is dynamically built for consideration in forming a cloud group without requiring the user to have knowledge of the cloud's composition. | 08-29-2013 |
20130232268 | PERFORMING MAINTENANCE OPERATIONS ON CLOUD COMPUTING NODE WITHOUT REQUIRING TO STOP ALL VIRTUAL MACHINES IN THE NODE - A method, system and computer program product for performing maintenance operations on a cloud computing node. An administrative server receives an indication that a maintenance operation is to be performed on a cloud computing node. The administrative server identifies which virtual machine(s) on the cloud computing node will be affected by the maintenance operation. The administrative server relocates the virtual machine(s) to be affected by the maintenance operation to other suitable cloud computing node(s) prior to the maintenance operation being performed. The administrative server then performs the maintenance operation on the cloud computing node. The virtual machine(s) may be relocated back to the cloud computing node after the maintenance operation is completed, in response to a need to rebalance resources. In this manner, maintenance operations may be performed on a cloud computing node without requiring all the virtual machines in the node to be stopped. | 09-05-2013 |
20140068600 | PROVIDING A SEAMLESS TRANSITION FOR RESIZING VIRTUAL MACHINES FROM A DEVELOPMENT ENVIRONMENT TO A PRODUCTION ENVIRONMENT - A method, system and computer program product for providing a seamless transition for resizing virtual machines from a development environment to a production environment. An administrative server receives an instruction from a customer to resize a virtual machine running on a cloud computing node, where the resized virtual machine requires physical resources (e.g., twenty physical processor cores) to be utilized in the production environment. Instead of the administrative server utilizing the same number of physical resources in the development environment that need to be utilized in the production environment, the administrative server utilizes a fewer number of physical resources by also utilizing virtual resources (e.g., twenty virtual processor cores and only two physical processor cores) so as to provide a development environment with the same resource capacity as the production environment but with fewer physical resources thereby more efficiently utilizing the physical resources on the cloud computing node. | 03-06-2014 |
20140157038 | USING SEPARATE PROCESSES TO HANDLE SHORT-LIVED AND LONG-LIVED JOBS TO REDUCE FAILURE OF PROCESSES - A method, system and computer program product for reducing the failure of processes. After a job is received, a determination is made as to whether the received job is a "short-lived job" or a "long-lived job." A short-lived job refers to a job that accomplishes a given task in less than a threshold period of time. A long-lived job refers to a job that accomplishes a given task in more than a threshold period of time. An identified long-lived job is executed on a single process apart from other processes, whereas a short-lived job is executed on at least one process separate from the processes executing long-lived jobs. As a result of executing the long-lived jobs on separate processes from the short-lived jobs, the likelihood of having a process fail is reduced, since the duration of time that any one process is running will be shorter. | 06-05-2014 |
20140201365 | IMPLEMENTING A PRIVATE NETWORK ISOLATED FROM A USER NETWORK FOR VIRTUAL MACHINE DEPLOYMENT AND MIGRATION AND FOR MONITORING AND MANAGING THE CLOUD ENVIRONMENT - A method, system and computer program product for optimizing quality of service settings for virtual machine deployment and migration. A first network (e.g., user network) is provided that is dedicated to running user workloads deployed on virtual machines. A second network (e.g., cloud management network), isolated from the first network, is also provided that is dedicated to virtual machine deployment and migration. As a result of the first and second networks not being shared, the administrative server utilizes unique quality of service settings for virtual machine deployment and migration supported by the second network that would otherwise not be possible if the first and second networks were shared. | 07-17-2014 |
20140223222 | INTELLIGENTLY RESPONDING TO HARDWARE FAILURES SO AS TO OPTIMIZE SYSTEM PERFORMANCE - A method, system and computer program product for intelligently responding to hardware failures so as to optimize system performance. An administrative server monitors the utilization of the hardware as well as the software components running on the hardware to assess a context of the software components running on the hardware. Upon detecting a hardware failure, the administrative server analyzes the hardware failure to determine the type of hardware failure and analyzes the properties of the workload running on the failed hardware. The administrative server then responds to the detected hardware failure based on various factors, including the type of the hardware failure, the properties of the workload running on the failed hardware and the context of the software running on the failed hardware. In this manner, by taking into consideration such factors in responding to the detected hardware failure, a more intelligent response is provided that optimizes system performance. | 08-07-2014 |
20140223241 | INTELLIGENTLY RESPONDING TO HARDWARE FAILURES SO AS TO OPTIMIZE SYSTEM PERFORMANCE - A method, system and computer program product for intelligently responding to hardware failures so as to optimize system performance. An administrative server monitors the utilization of the hardware as well as the software components running on the hardware to assess a context of the software components running on the hardware. Upon detecting a hardware failure, the administrative server analyzes the hardware failure to determine the type of hardware failure and analyzes the properties of the workload running on the failed hardware. The administrative server then responds to the detected hardware failure based on various factors, including the type of the hardware failure, the properties of the workload running on the failed hardware and the context of the software running on the failed hardware. In this manner, by taking into consideration such factors in responding to the detected hardware failure, a more intelligent response is provided that optimizes system performance. | 08-07-2014 |
20140223443 | DETERMINING A RELATIVE PRIORITY FOR A JOB USING CONTEXT AND ENVIRONMENTAL CONSIDERATIONS - A method, system and computer program product for determining a relative priority for a job. A "policy" is selected based on the job itself and the reason that the job is being executed, where the policy includes a priority range for the job and for an application. A priority for the job that is within the priority range of the job as established by the selected policy is determined based on environmental and context considerations. This job priority is then adjusted based on the priority of the application (within the priority range as established by the policy), becoming the job's final priority. By formulating a priority that more accurately reflects the true priority or importance of the job by taking into consideration the environmental and context considerations, job managers will now be able to process these jobs in a more efficient manner. | 08-07-2014 |
20140223521 | ALLOWING ACCESS TO UNDERLYING HARDWARE CONSOLES TO CORRECT PROBLEMS EXPERIENCED BY USER - A method, system and computer program product for providing access to underlying hardware consoles to correct problems experienced by a user. The administrative server receives a request from the user to access a managing system configured to provide access to the underlying hardware consoles that are combined together to service a user's computing requirements. The administrative server presents a list of managing systems for the user to connect to, which were identified as being able to address the problem(s) the user is experiencing. The administrative server then enables access to the managing systems selected in the list in response to the user providing appropriate authentication credentials. An interface is then provided to the user by the selected managing systems to select the underlying hardware consoles to access. In this manner, the user is provided access to the underlying hardware consoles in an easy manner without presenting numerous options and configurations. | 08-07-2014 |
20140298487 | MULTI-USER UNIVERSAL SERIAL BUS (USB) KEY WITH CUSTOMIZABLE FILE SHARING PERMISSIONS - A method, data storage device and computer program product for having multiple users share a single data storage device securely. A data storage device, such as a Universal Serial Bus (USB) key, is plugged into a computing device. A USB controller of the USB key recognizes the computing device and creates an account for the user. The created account is associated with the user as well as associated with the computing device. Data uploaded to the USB key by the user is then associated with the created account. Only that user will be able to view that data on his/her computing device (computing device associated with the created account) unless the user indicates to share that data with other users. Such a process may be repeated each time the USB key is plugged into a different computing device thereby creating multiple accounts associated with multiple computing devices and users. | 10-02-2014 |
20140298489 | MULTI-USER UNIVERSAL SERIAL BUS (USB) KEY WITH CUSTOMIZABLE FILE SHARING PERMISSIONS - A method, data storage device and computer program product for having multiple users share a single data storage device securely. A data storage device, such as a Universal Serial Bus (USB) key, is plugged into a computing device. A USB controller of the USB key recognizes the computing device and creates an account for the user. The created account is associated with the user as well as associated with the computing device. Data uploaded to the USB key by the user is then associated with the created account. Only that user will be able to view that data on his/her computing device (computing device associated with the created account) unless the user indicates to share that data with other users. Such a process may be repeated each time the USB key is plugged into a different computing device thereby creating multiple accounts associated with multiple computing devices and users. | 10-02-2014 |
20140304437 | ALLOCATION AND BALANCING OF STORAGE RESOURCES - A method and technique for allocation and balancing of storage resources includes: determining, for each of a plurality of storage controllers, an input/output (I/O) latency value based on an I/O latency associated with each storage volume controlled by a respective storage controller; determining network bandwidth utilization and network latency values corresponding to each storage controller; responsive to receiving a request to allocate a new storage volume, selecting a storage controller having a desired I/O latency value; determining whether the network bandwidth utilization and network latency values for the selected storage controller are below respective network bandwidth utilization and network latency value thresholds; and responsive to determining that the network bandwidth utilization and network latency values for the selected storage controller are below the respective thresholds, allocating the new storage volume to the selected storage controller. | 10-09-2014 |
20140372497 | DETERMINING LOCATION OF HARDWARE COMPONENTS IN A CLOUD COMPUTING ENVIRONMENT BASED ON HARDWARE COMPONENTS SELF-LOCATING OTHER HARDWARE COMPONENTS - A method, system and computer program product for managing hardware components in a cloud computing environment. Each hardware component in a data center of the cloud computing environment detects and identifies other hardware components within a communication range of the hardware component using a wireless protocol. Furthermore, each hardware component determines its actual location as well as its relative location with respect to the detected hardware components, such as based on a triangulation of the wireless signals. Such information is transmitted to an administrative server. An inventory of the hardware components in the data center, including their current location, is then compiled by the administrative server. In this manner, a hardware component can be more easily located after being relocated in the data center. Furthermore, the administrative server will be able to balance a workload across these hardware components based on their location. | 12-18-2014 |
20140372595 | DETERMINING LOCATION OF HARDWARE COMPONENTS IN A CLOUD COMPUTING ENVIRONMENT BASED ON HARDWARE COMPONENTS SELF-LOCATING OTHER HARDWARE COMPONENTS - A method, system and computer program product for managing hardware components in a cloud computing environment. Each hardware component in a data center of the cloud computing environment detects and identifies other hardware components within a communication range of the hardware component using a wireless protocol. Furthermore, each hardware component determines its actual location as well as its relative location with respect to the detected hardware components, such as based on a triangulation of the wireless signals. Such information is transmitted to an administrative server. An inventory of the hardware components in the data center, including their current location, is then compiled by the administrative server. In this manner, a hardware component can be more easily located after being relocated in the data center. Furthermore, the administrative server will be able to balance a workload across these hardware components based on their location. | 12-18-2014 |
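The redirect-based load balancing described in applications 20080239963 and 20110106974 can be sketched in miniature: a routing stack receives a connection request, selects a target stack, and answers with a "binding update" naming that target, after which the client talks to the target directly. The sketch below is only an illustration of this control flow, not the patented Mobile IPv6 implementation; the class names, the round-robin selection policy, and the addresses are all assumptions.

```python
# Illustrative sketch (not the patented implementation): a routing stack
# answers each connection request with a binding-update-style message
# naming a selected target stack, so the client can bypass the router.
from dataclasses import dataclass
from itertools import cycle


@dataclass
class BindingUpdate:
    """Message telling the client which target stack to contact directly."""
    target_address: str


class RoutingStack:
    def __init__(self, targets):
        # Hypothetical policy: simple round-robin over the target stacks.
        self._targets = cycle(targets)

    def handle_connection_request(self, client_id: str) -> BindingUpdate:
        # Select a target and redirect the client to it; after this
        # exchange the routing stack drops out of the data path.
        target = next(self._targets)
        return BindingUpdate(target_address=target)


router = RoutingStack(["10.0.0.1", "10.0.0.2"])
first = router.handle_connection_request("client-a")
second = router.handle_connection_request("client-b")
```

In the real mechanism the redirect is carried in a Mobile IPv6 Binding Update message rather than an application-level object, but the load-balancing decision point is the same.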
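The allocation logic recited in applications 20130204960, 20130205005, and 20140304437 follows a checkable shape: pick the controller with the most desirable I/O latency, then allocate only if its network bandwidth utilization and network latency are below their thresholds. This is a minimal sketch of that decision sequence under assumed field names and threshold values; it is not the claimed system.

```python
# Hypothetical sketch of the selection logic in 20130204960: choose the
# controller with the lowest I/O latency, then gate the allocation on
# bandwidth-utilization and network-latency thresholds (values assumed).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Controller:
    name: str
    io_latency_ms: float
    bandwidth_util: float    # fraction of link capacity in use, 0.0-1.0
    net_latency_ms: float
    volumes: List[str] = field(default_factory=list)


def allocate_volume(controllers: List[Controller], volume: str,
                    max_bandwidth_util: float = 0.8,
                    max_net_latency_ms: float = 5.0) -> Optional[Controller]:
    # Select the controller with the desired (lowest) I/O latency value.
    best = min(controllers, key=lambda c: c.io_latency_ms)
    # Allocate only when both network metrics are below their thresholds.
    if (best.bandwidth_util < max_bandwidth_util
            and best.net_latency_ms < max_net_latency_ms):
        best.volumes.append(volume)
        return best
    return None  # caller would try another controller or report failure


ctrls = [Controller("A", 2.0, 0.5, 1.0), Controller("B", 1.0, 0.6, 1.0)]
chosen = allocate_volume(ctrls, "vol-1")
```

Note the two-stage structure mirrors the claim language: latency drives selection, while the network metrics act only as a veto on the selected controller.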
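The job-separation idea in application 20140157038 reduces to classifying each job against a duration threshold and routing long-lived jobs to their own process, isolated from the pool handling short-lived jobs. The dispatcher below only records the routing decision; the threshold value and pool names are assumptions made for illustration, not details from the application.

```python
# Hedged sketch of the classification step in 20140157038: jobs above a
# duration threshold are long-lived and get dedicated processes; jobs
# below it share a separate pool. Threshold and pool names are assumed.
SHORT_LIVED_THRESHOLD_S = 60.0


def route_job(estimated_runtime_s: float) -> str:
    """Return the pool a job should run in. Keeping long-lived jobs in
    their own process means a failure there cannot take down the
    processes serving short-lived jobs."""
    if estimated_runtime_s > SHORT_LIVED_THRESHOLD_S:
        return "dedicated-long-lived-process"
    return "shared-short-lived-pool"


assignments = {name: route_job(runtime) for name, runtime in
               [("index-rebuild", 3600.0), ("health-check", 0.5)]}
```

A production dispatcher would feed these decisions into separate worker processes (e.g., distinct process pools), which is where the reduced failure exposure actually comes from.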