Patent application number | Description | Published |
20080215749 | PROVIDING DIFFERENT RATES TO DIFFERENT USERS OF A DOWNLOAD SERVICE - A system, computer program and method for transmitting requested data from a data source in response to data transmission requests from at least one electronic device according to differential rates of throughput. Data transmission requests are classified into one of a plurality of throughput classes, with each throughput class having an assigned rate of throughput. A proportion of data transmission requests to be processed from each throughput class is selected such that each data transmission request has a rate of throughput approximating the assigned rate of throughput of its class. The requested data is then sent from the data source to the electronic device. | 09-04-2008 |
20080239960 | PATH-BASED ADAPTIVE PRIORITIZATION AND LATENCY MANAGEMENT - An improved solution for managing messages through a request response protocol network utilizing a path-based adaptive prioritization and latency management is provided. In an embodiment of the invention, a method of managing a message being conveyed through a request response protocol network via a path includes: receiving the message; determining for the message at least one of: an incoming portion of the path or an outgoing portion of the path; and adjusting a priority of the message based on a latency target for the determined portion of the path. | 10-02-2008 |
20090157855 | DECENTRALIZED APPLICATION PLACEMENT FOR WEB APPLICATION MIDDLEWARE - A decentralized process to ensure the dynamic placement of applications on servers under two types of simultaneous resource requirements: those that depend on the loads placed on the applications and those that are load-independent. The demand (load) for applications changes over time, and the goal is to satisfy all the demand while changing the solution (the assignment of applications to servers) as little as possible. | 06-18-2009 |
20090307393 | INBOUND MESSAGE RATE LIMIT BASED ON MAXIMUM QUEUE TIMES - A system for managing inbound messages in a server complex including a plurality of message consumers. The system includes a server configured to receive the inbound messages from a first peripheral device and to transmit messages to one or more of the plurality of message consumers. The system also includes an inbound message queue coupled to the server, the inbound message queue configured to store inbound messages until the age of any message stored on the inbound message queue exceeds a predetermined threshold. | 12-10-2009 |
20100008377 | QUEUE MANAGEMENT BASED ON MESSAGE AGE - A system for managing inbound messages in a server complex including a plurality of message consumers. The system includes a server configured to receive the inbound messages from a first peripheral device and to transmit messages to one or more of the plurality of message consumers. The system also includes an inbound message queue coupled to the server, the inbound message queue configured to store inbound messages and discard at least one message when the age of that message exceeds an expiration time. | 01-14-2010 |
20110173245 | DISTRIBUTION OF INTERMEDIATE DATA IN A MULTISTAGE COMPUTER APPLICATION - A method, system and computer program product for distributing intermediate data of a multistage computer application to a plurality of computers. In one embodiment, a data manager calculates the data usage demand of generated intermediate data. A computer manager calculates a computer usage, which is the sum of the data usage demands of all intermediate data stored at the computer. A scheduler selects a target computer from the plurality of computers for storage of the generated intermediate data such that the variance of the computer usage across the plurality of computers is minimized. | 07-14-2011 |
20110173410 | EXECUTION OF DATAFLOW JOBS - A method, system and computer program product for storing data in memory. An example system includes at least one multistage application configured to generate intermediate data in a generating stage of the application and consume the intermediate data in a subsequent consuming stage of the application. A runtime profiler is configured to monitor the application's execution and dynamically allocate memory to the application from an in-memory data grid. | 07-14-2011 |
20120137290 | MANAGING MEMORY OVERLOAD OF JAVA VIRTUAL MACHINES IN WEB APPLICATION SERVER SYSTEMS - The invention relates to memory overload management for Java virtual machines (JVMs) in Web application server systems. Disclosed is a method and system of memory overload management for a Web application server system, wherein the Web application server system comprises multiple JVMs, the method comprising: determining one or more replica shards for which replacement shall be performed; determining one or more target JVMs for storing a corresponding replica shard set including at least one replica shard from the one or more replica shards; and for each target JVM, performing the following: judging whether the free memory of the target JVM is adequate for storing the corresponding replica shard set; if the judging result is negative, performing the following: causing the target JVM to suspend the creation of sessions until the free memory of the target JVM becomes adequate for storing the corresponding replica shard set. | 05-31-2012 |
20120147779 | PATH-BASED ADAPTIVE PRIORITIZATION AND LATENCY MANAGEMENT - An improved solution for managing messages through a request response protocol network utilizing a path-based adaptive prioritization and latency management is provided. A weight for a message is determined at a message management computing device based upon a number of hops and a latency of networks passed through by the message. A hop latency target for a current hop segment is evaluated relative to an overall latency target and the determined weight for the message. A priority of the message is adjusted in response to determining that the overall latency target, relative to the weight for the message and the hop latency target for the current hop segment, exceeds a configured allowable hop latency deviation for the current hop segment. | 06-14-2012 |
20120297145 | SYSTEM AND METHOD TO IMPROVE I/O PERFORMANCE OF DATA ANALYTIC WORKLOADS - A method and structure for processing an application program on a computer. In a memory of the computer executing the application, an in-memory cache structure is provided for normally temporarily storing data produced in the processing. An in-memory storage outside the in-memory cache structure is provided in the memory to bypass the in-memory cache structure when temporarily storing data under a predetermined condition. A sensor detects the amount of usage of the in-memory cache structure used to store data during the processing. When it is detected that the amount of usage exceeds a predetermined threshold, the processing is controlled so that the data produced in the processing is stored in the in-memory storage rather than in the in-memory cache structure. | 11-22-2012 |
20130198740 | INTEGRATED VIRTUAL INFRASTRUCTURE SYSTEM - A technique is provided for creating virtual units in a computing environment. A virtual system definition is received by a processor that is utilized to create the virtual units for a virtual system. Relationship constraints between the virtual units in the virtual system are received by the processor. The relationship constraints between the virtual units include a communication link requirement between the virtual units and/or a location requirement between the virtual units. The virtual units in the virtual system are deployed by the processor according to the relationship constraints between virtual units. | 08-01-2013 |
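The differential-throughput scheme in 20080215749 amounts to serving requests from each throughput class in proportion to that class's assigned rate. A minimal sketch of one way to select the next class to serve, where the class names, rate values, and `pick_class` function are all illustrative assumptions rather than anything stated in the application:

```python
import random

# Hypothetical throughput classes and assigned rates (bytes/sec) —
# illustrative values only, not taken from the application.
CLASS_RATES = {"premium": 400_000, "standard": 100_000, "basic": 25_000}

def pick_class(pending):
    """Pick the next throughput class to serve, weighted by its assigned
    rate, so that over many selections each request's effective rate of
    throughput approximates the assigned rate of its class.

    `pending` maps class name -> number of waiting requests in that class.
    Returns a class name, or None if no requests are waiting.
    """
    candidates = [c for c, n in pending.items() if n > 0]
    if not candidates:
        return None
    weights = [CLASS_RATES[c] for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```

Weighted random selection is only one way to realize "a proportion of data transmission requests to be processed from each throughput class"; a deterministic weighted round-robin would satisfy the same property.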
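The age-based queue management described in 20090307393 and 20100008377 comes down to tracking each message's time on the queue and discarding messages whose age exceeds a limit. A minimal sketch, with hypothetical class and method names, passing timestamps explicitly so the behavior is easy to follow:

```python
import time
from collections import deque

class AgeLimitedQueue:
    """Sketch of an inbound message queue that discards messages whose
    age exceeds an expiration time (names are illustrative, not from
    the applications)."""

    def __init__(self, expiration_seconds):
        self.expiration = expiration_seconds
        self._items = deque()  # (enqueue_time, message), oldest first

    def put(self, message, now=None):
        now = time.monotonic() if now is None else now
        self._items.append((now, message))

    def get(self, now=None):
        """Return the oldest message that has not expired, discarding
        any expired messages encountered along the way."""
        now = time.monotonic() if now is None else now
        while self._items:
            enqueued, message = self._items.popleft()
            if now - enqueued <= self.expiration:
                return message
            # Message age exceeded the expiration time: discard it.
        return None
```

For example, with a 5-second expiration, a message enqueued at t=0 is discarded by a `get` at t=6, while one enqueued at t=3 is still delivered.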
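The path-based prioritization in 20080239960 and 20120147779 adjusts a message's priority when its latency on the current hop segment deviates too far from its latency target. The sketch below assumes an even split of the overall latency budget across hops and a single-step priority boost; both are simplifying assumptions, not details from the applications:

```python
def adjust_priority(priority, hops, overall_target_ms,
                    hop_latency_ms, allowed_deviation_ms):
    """Boost a message's priority when its current hop runs behind its
    share of the overall latency target by more than the configured
    allowable hop latency deviation. All parameter names are
    illustrative.
    """
    # Assumption: the overall latency target is split evenly across hops.
    hop_target_ms = overall_target_ms / max(hops, 1)
    if hop_latency_ms - hop_target_ms > allowed_deviation_ms:
        return priority + 1  # behind budget: raise priority one step
    return priority          # within budget: leave priority unchanged
```

With a 400 ms overall target over 4 hops (100 ms per hop) and a 50 ms allowed deviation, a hop latency of 180 ms triggers a boost while 120 ms does not.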