Patent application number | Description | Published |
20080239960 | PATH-BASED ADAPTIVE PRIORITIZATION AND LATENCY MANAGEMENT - An improved solution for managing messages through a request response protocol network utilizing path-based adaptive prioritization and latency management is provided. In an embodiment of the invention, a method of managing a message being conveyed through a request response protocol network via a path includes: receiving the message; determining for the message at least one of: an incoming portion of the path or an outgoing portion of the path; and adjusting a priority of the message based on a latency target for the determined portion of the path. | 10-02-2008 |
20080307183 | AUTOMATIC MEMORY MANAGEMENT (AMM) - The present invention manages the execution of multiple AMM cycles to reduce or eliminate any overlap. Specifically, the present invention provides an external supervisory process to monitor the AMM behavior of VMs on one or more nodes, and intervene when coincident AMM activity appears to be imminent. If AMM patterns suggest that two VMs are likely to perform a (e.g., major) AMM cycle simultaneously (or with significant overlap) in the near future, the supervisory process can trigger one of the VMs to perform an AMM cycle immediately, or at the first ‘safe’ interval prior to the predicted AMM collision. This will have the effect of desynchronizing the AMM behavior of the VMs and maintaining AMM latency for both VMs within the expected bounds for their independent operation, without any inter-VM effects. | 12-11-2008 |
20090055615 | MEMORY TUNING FOR GARBAGE COLLECTION AND CENTRAL PROCESSING UNIT (CPU) UTILIZATION OPTIMIZATION - A method, system and computer program product for garbage collection sensitive load balancing is disclosed. The method for memory tuning for garbage collection and CPU utilization optimization can include benchmarking an application across multiple different heap sizes to accumulate garbage collection metrics and utilizing the garbage collection metrics accumulated during benchmarking to compute both CPU utilization and garbage collection time for each of a selection of candidate heap sizes. One of the candidate heap sizes can be matched to a desired CPU utilization and garbage collection time, and the matched one of the candidate heap sizes can be applied to a host environment. | 02-26-2009 |
20090063594 | COMPUTER SYSTEM MEMORY MANAGEMENT - The number of CPU cycles required to reclaim object memory space in a memory management process is reduced by using a two phase approach. A data structure exists for each object that is to be loaded into object memory space. One part of the data structure is the object definition. The other part is a MM (Memory Management) immunity annotation or value that controls the frequency with which the object must actually be examined to determine if it is suitable for reclamation. On each iteration of the memory management process, the object's MM immunity value is tested to determine whether it is greater than a predetermined threshold. If greater than the threshold, the value is decremented, but the object is not actually examined for its suitability for removal. If the value equals the threshold, the object itself is examined. If it is found to be suitable, it is removed to reclaim the object memory space it previously occupied. If it is actually examined but is found not to be suitable for removal, the MM immunity value is reset to its original value or is otherwise adjusted to prevent examination of the object for a certain number of future iterations of the memory management process. | 03-05-2009 |
20090112952 | LOW LATENCY OPTIMIZATION FOR GENERATIONAL GARBAGE COLLECTION - A solution for handling objects in a nursery heap that includes a garbage collector monitoring engine, a size adjustor program, and/or a promotion program. The garbage collector monitoring engine can monitor occurrences of global garbage collection events performed by a global garbage collector program as well as occurrences of nursery garbage collection events performed by a nursery garbage collector. The size adjustor program can dynamically adjust a size of a nursery heap based upon programmatically deterministic events detected by the garbage collector monitoring engine. The promotion program can dynamically adjust conditions of promotion for nursery objects, wherein when additional space is needed in the nursery heap to reduce nursery garbage collection induced latency, the promotion program changes promotion criteria to ensure objects are promoted more frequently from the nursery heap. | 04-30-2009 |
20090122704 | Limiting Extreme Loads At Session Servers - A method, system and computer program product for limiting extreme loads and reducing fluctuations in load at session servers. An admission rate controller of a SIP router calculates the “deflator ratio” equal to the average number of in-dialog messages received over a first fixed interval of time divided by the average number of out-of-dialog messages received over a second fixed interval of time. Further, the admission rate controller calculates the “dampening ratio” equal to the maximum number of messages allowed over a period of time divided by the number of messages admitted over a previous time interval. When an overload condition has been detected, the admission rate controller calculates the maximum number of out-of-dialog messages to be sent to its associated SIP server based on the deflator and dampening ratios. In this manner, a smoother transition from the overload condition to the non-overload condition may occur. | 05-14-2009 |
20090122705 | Managing Bursts of Traffic In Such a Manner as to Improve The Effective Utilization of Session Servers - A method, system and computer program product for managing bursts of traffic. A counter, referred to herein as a “frequency counter,” is incremented during those time intervals an overload condition is detected and is decremented during those time intervals an overload condition is not detected. An overload condition may refer to when the number of out-of-dialog messages exceeds a threshold value corresponding to the maximum number of out-of-dialog messages that should be accepted and forwarded to an associated session server. If the count of the frequency counter exceeds some pre-configured value, then traffic that exceeds the threshold for the overload condition is stopped from being sent to the associated session server. Otherwise, traffic that exceeds the threshold for the overload condition is permitted to be sent to the associated session server. By managing bursts of traffic in such a manner, the effective utilization of session servers is improved. | 05-14-2009 |
20090138237 | Run-Time Characterization of On-Demand Analytical Model Accuracy - A method of determining accuracy of predicted system behavior can include creating a plurality of noise adjusted analytical models, wherein each noise adjusted analytical model is associated with a set of predefined analytical model parameters. A set of inferred analytical model parameters for each noise adjusted analytical model can be derived. Each set of inferred analytical model parameters can depend upon a current noise adjusted analytical model and each prior noise adjusted analytical model. For each set of inferred analytical model parameters, a measure of error between the set of inferred analytical model parameters and the set of predefined analytical model parameters associated with the noise adjusted analytical model from which the set of inferred analytical model parameters was derived can be determined. | 05-28-2009 |
20100088412 | CAPACITY SIZING A SIP APPLICATION SERVER BASED ON MEMORY AND CPU CONSIDERATIONS - A SIP workload can be defined. A number of nodes of a SIP application server needed to handle the SIP workload can be determined based upon memory considerations. A number of nodes of the SIP application server needed to handle the SIP workload can be determined based upon CPU considerations. The SIP application server can be capacity sized based upon the greater of the determined number of nodes based upon memory considerations and the determined number of nodes based upon CPU considerations. | 04-08-2010 |
20120017204 | STRING CACHE FILE FOR OPTIMIZING MEMORY USAGE IN A JAVA VIRTUAL MACHINE - A method, system and computer program product for optimizing memory usage associated with duplicate string objects in a Java virtual machine. The method comprises scanning a heap of the Java virtual machine at the end of the start-up process of the virtual machine to identify duplicate strings associated with the virtual machine, storing the identified strings in a string cache file, and determining whether a new string that needs to be created during start-up already exists in the string cache file. The duplicate strings are added to an interned strings table. A reference to a duplicate string is returned if a string to be created is already in the string cache file. | 01-19-2012 |
20120147779 | PATH-BASED ADAPTIVE PRIORITIZATION AND LATENCY MANAGEMENT - An improved solution for managing messages through a request response protocol network utilizing path-based adaptive prioritization and latency management is provided. A weight for a message is determined at a message management computing device based upon a number of hops and a latency of networks passed through by the message. A hop latency target for a current hop segment is evaluated relative to an overall latency target and the determined weight for the message. A priority of the message is adjusted in response to determining that the overall latency target, relative to the weight for the message and the hop latency target for the current hop segment, exceeds a configured allowable hop latency deviation for the current hop segment. | 06-14-2012 |
20120233609 | OPTIMIZING VIRTUAL MACHINE SYNCHRONIZATION FOR APPLICATION SOFTWARE - Real-time application metrics of an application executed by a virtual machine are dynamically monitored by a controlling agent and analyzed to determine an optimal configuration of the virtual machine for executing the application. Based on the measured metrics, tunable parameters of the virtual machine may be adjusted to achieve desired application performance. | 09-13-2012 |
20130219379 | OPTIMIZATION OF AN APPLICATION TO REDUCE LOCAL MEMORY USAGE - A method of optimizing an application to reduce local memory usage. The method can include instrumenting at least one executable class file of the application with analysis code, the executable class file including bytecode. The method also can include executing the class file on a virtual machine, wherein during execution the analysis code generates data related to the application's use of local memory. The method further can include, via a processor, analyzing the data related to the application's use of the local memory to generate a memory profile analysis. The method further can include, based on the memory profile analysis, automatically revising at least one portion of the bytecode to reduce an amount of the local memory used by the application. | 08-22-2013 |
20130268921 | OPTIMIZATION OF AN APPLICATION TO REDUCE LOCAL MEMORY USAGE - Optimizing an application to reduce local memory usage. At least one executable class file of the application can be instrumented with analysis code, the executable class file including bytecode. The class file can be executed on a virtual machine, wherein during execution the analysis code generates data related to the application's use of local memory. The data related to the application's use of the local memory can be analyzed to generate a memory profile analysis. Based on the memory profile analysis, at least one portion of the bytecode can be automatically revised to reduce an amount of the local memory used by the application. | 10-10-2013 |
20140101658 | OPTIMIZING VIRTUAL MACHINE SYNCHRONIZATION FOR APPLICATION SOFTWARE - Real-time application metrics of an application executed by a virtual machine are dynamically monitored by a controlling agent and analyzed to determine an optimal configuration of the virtual machine for executing the application. Based on the measured metrics, tunable parameters of the virtual machine may be adjusted to achieve desired application performance. | 04-10-2014 |
20140115585 | STRING CACHE FILE FOR OPTIMIZING MEMORY USAGE IN A JAVA VIRTUAL MACHINE - A method, system and computer program product for optimizing memory usage associated with duplicate string objects in a Java virtual machine. The method comprises scanning a heap of the Java virtual machine at the end of the start-up process of the virtual machine to identify duplicate strings associated with the virtual machine, storing the identified strings in a string cache file, and determining whether a new string that needs to be created during start-up already exists in the string cache file. The duplicate strings are added to an interned strings table. A reference to a duplicate string is returned if a string to be created is already in the string cache file. | 04-24-2014 |
20140165074 | SOFT CO-PROCESSORS TO PROVIDE A SOFTWARE SERVICE FUNCTION OFF-LOAD ARCHITECTURE IN A MULTI-CORE ARCHITECTURE - A method of distributing functions among a plurality of cores in a multi-core processing environment can include organizing cores of the multi-core processing environment into a plurality of different service pools. Each of the plurality of service pools can be associated with at least one function and have at least one core executing at least one soft co-processor that performs the associated function. The method further can include, responsive to a request from a primary processor to offload a selected function, selecting an available soft co-processor from a service pool associated with the selected function and assigning the selected function to the selected soft co-processor. The method also can include marking the selected soft co-processor as busy and, responsive to receiving an indication from the soft co-processor that processing of the selected function has completed, marking the selected soft co-processor as available. | 06-12-2014 |
20140222953 | Reliable and Scalable Image Transfer For Data Centers With Low Connectivity Using Redundancy Detection - A system and method for efficiently transferring virtual machine images across nodes in a cloud computing environment, includes analyzing each image on each node to create hash code clusters and a similarity matrix. An instruction to transfer an image from a source node to a target node is received. The clusters and the similarity matrix are used to determine to what extent the data from the image is already on the source node, or on any other node, and further determines the cost and speed of transferring such data to the target node. An optimal transfer plan is generated, and data that is not already on the target node is transferred to the target node from the most efficient node on which it is available, according to the optimal transfer plan. | 08-07-2014 |
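The two-phase reclamation described in application 20090063594 above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class and function names, the threshold of zero, and the immunity value of 3 are all assumptions, and the `reclaimable` flag stands in for a real suitability-for-removal examination.

```python
THRESHOLD = 0          # immunity value at which an object is actually examined (assumed)
INITIAL_IMMUNITY = 3   # iterations skipped between examinations (illustrative value)

class HeapObject:
    """Hypothetical object record: a definition plus an MM immunity annotation."""
    def __init__(self, name, reclaimable=False):
        self.name = name
        self.reclaimable = reclaimable      # stand-in for the expensive examination
        self.immunity = INITIAL_IMMUNITY    # the MM immunity value

def sweep(objects):
    """One iteration of the memory-management process.

    Objects whose immunity value exceeds the threshold are merely
    decremented and skipped; only objects at the threshold are examined,
    and an examined object found unsuitable for removal is re-immunized.
    """
    survivors = []
    for obj in objects:
        if obj.immunity > THRESHOLD:
            obj.immunity -= 1                    # cheap path: no examination
            survivors.append(obj)
        elif obj.reclaimable:
            continue                             # examined and reclaimed
        else:
            obj.immunity = INITIAL_IMMUNITY      # examined, kept, immunity reset
            survivors.append(obj)
    return survivors
```

With an immunity value of 3, even an unreachable object survives three sweeps untouched before the fourth sweep examines and reclaims it, which is the cycle-saving trade-off the abstract describes.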
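Application 20090122704 above defines two ratios but its abstract does not give the formula that combines them. The sketch below is therefore only a plausible reading: the function name, the `1 + deflator` scaling, and the capping of the dampening ratio at 1.0 are assumptions introduced for illustration.

```python
def max_out_of_dialog(avg_in_dialog, avg_out_of_dialog,
                      max_messages, admitted_last_interval):
    """Hypothetical combination of the two ratios into an admission limit.

    deflator ratio:  average in-dialog messages per out-of-dialog message,
                     i.e. the follow-on traffic each new dialog generates.
    dampening ratio: maximum messages allowed over a period divided by the
                     messages admitted over the previous interval.
    """
    deflator = avg_in_dialog / avg_out_of_dialog
    dampening = max_messages / admitted_last_interval
    # Admit fewer new dialogs when each dialog generates more in-dialog
    # traffic, scaled down further when the previous interval already
    # consumed the available headroom (assumed combination rule).
    return int((max_messages / (1 + deflator)) * min(dampening, 1.0))
```

For example, if each dialog generates three in-dialog messages on average and the previous interval exactly filled the budget, this rule admits a quarter of the message budget as new dialogs, easing the transition out of overload.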
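The frequency-counter scheme of application 20090122705 above lends itself to a short sketch. The class and attribute names and the specific trip count are illustrative assumptions; only the increment/decrement behavior and the shedding decision follow the abstract.

```python
class BurstManager:
    """Hypothetical per-interval admission control in front of a session server."""
    def __init__(self, overload_threshold, trip_count):
        self.overload_threshold = overload_threshold  # max out-of-dialog msgs/interval
        self.trip_count = trip_count                  # pre-configured counter limit
        self.frequency = 0                            # the "frequency counter"

    def admit(self, out_of_dialog_msgs):
        """Return how many of this interval's messages reach the session server."""
        overloaded = out_of_dialog_msgs > self.overload_threshold
        if overloaded:
            self.frequency += 1          # counter rises while overload persists
        elif self.frequency > 0:
            self.frequency -= 1          # and decays during calm intervals
        if overloaded and self.frequency > self.trip_count:
            return self.overload_threshold   # sustained overload: shed the excess
        return out_of_dialog_msgs            # short burst: forward everything
```

The effect is that brief traffic spikes pass through untouched, while only sustained overload trips the counter and begins shedding the excess, which is how the abstract distinguishes bursts from genuine overload.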
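The capacity-sizing rule of application 20100088412 above reduces to taking the greater of two independently computed node counts. In this sketch, expressing the workload and per-node capacities in sessions is an assumption for illustration; the abstract specifies only the "greater of" rule itself.

```python
import math

def size_sip_cluster(workload_sessions,
                     sessions_per_node_by_memory,
                     sessions_per_node_by_cpu):
    """Return the node count: the greater of the memory- and CPU-based sizings."""
    nodes_memory = math.ceil(workload_sessions / sessions_per_node_by_memory)
    nodes_cpu = math.ceil(workload_sessions / sessions_per_node_by_cpu)
    return max(nodes_memory, nodes_cpu)
```

Sizing to the larger of the two figures guarantees that whichever resource is the bottleneck for the given workload, the cluster still has enough nodes to cover it.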