Patent application number | Description | Publication date (MM-DD-YYYY) |
--- | --- | --- |
20080244072 | DISTRIBUTED RESOURCE ALLOCATION IN STREAM PROCESSING SYSTEMS - A system and method for resource allocation includes, in a network having nodes and links, injecting units of flow for at least one commodity at a source corresponding to the at least one commodity. At each node, queue heights, associated with the at least one commodity, are balanced for queues associated with each of one or more outgoing paths associated with that node. An amount of commodity flow is pushed across a link toward a sink, where the amount of commodity flow is constrained by a capacity constraint. Flow that reaches the sink is absorbed by draining the queues. | 10-02-2008 |
20080304516 | Distributed Joint Admission Control And Dynamic Resource Allocation In Stream Processing Networks - Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes. At each node, resources are devoted to the workflow with the maximum product of downstream pressure and processing rate, where the downstream pressure is defined as the backlog difference between neighbor nodes. The primal-dual controller iteratively adjusts the admission rates and resource allocation using local congestion feedback. The iterative controlling procedure further uses an interior-point method to improve the speed of convergence towards optimal admission and allocation decisions. | 12-11-2008 |
20090037758 | USE OF T4 TIMESTAMPS TO CALCULATE CLOCK OFFSET AND SKEW - Disclosed are a method and system for calculating clock offset and skew between two clocks in a computer system. The method comprises the steps of sending data packets from a first processing unit in the computer system to a second processing unit in the computer system, and sending the data packets from the second processing unit to the first processing unit. First, second, third and fourth time stamps are provided to indicate, respectively, when the packets leave the first processing unit, arrive at the second processing unit, leave the second processing unit, and arrive at the first processing unit. The method comprises the further steps of defining a set of backward delay points using the fourth time stamps, and calculating a clock offset between clocks on the first and second processing units and clock skews of said clocks using said set of backward delay points. | 02-05-2009 |
20090070618 | SYSTEM AND METHOD FOR CALIBRATING A TOD CLOCK - A system, method and computer program product for calibrating a Time Of Day (TOD)-clock in a computing system node provided in a multi-node network. The network comprises an infrastructure of computing devices each having a physical clock providing a time base for executing operations that is stepped to a common oscillator. The system implements steps for obtaining samples of timing values of a computing device in the network, the values including a physical clock value maintained at that device and a TOD-offset value; computing an oscillator skew value from the samples; setting a fine steering rate value as equal to the opposite of the computed oscillator skew value; and, utilizing the fine steering rate value to adjust the physical clock value and correct for potential oscillator skew errors occurring in the oscillator crystal at the computing device. | 03-12-2009 |
20090106426 | METHOD AND APPARATUS FOR MODEL-BASED PAGEVIEW LATENCY MANAGEMENT - Within exemplary embodiments of the present invention, a methodology for response time management for a web page-view download operation is provided. The methodology provides a server-side approach for optimizing weighted, per-class, client-perceived web page-view download response time. Further, a model is provided for monitoring web page-view download latency in real time based upon the behavior typically seen from conventional web browsers. | 04-23-2009 |
20090106445 | METHOD AND APPARATUS FOR MODEL-BASED PAGEVIEW LATENCY MANAGEMENT - Within exemplary embodiments of the present invention, a methodology for response time management for a web page-view download operation is provided. The methodology provides a server-side approach for optimizing weighted, per-class, client-perceived web page-view download response time. Further, a model is provided for monitoring web page-view download latency in real time based upon the behavior typically seen from conventional web browsers. | 04-23-2009 |
20090138237 | Run-Time Characterization of On-Demand Analytical Model Accuracy - A method of determining accuracy of predicted system behavior can include creating a plurality of noise adjusted analytical models, wherein each noise adjusted analytical model is associated with a set of predefined analytical model parameters. A set of inferred analytical model parameters for each noise adjusted analytical model can be derived. Each set of inferred analytical model parameters can depend upon a current noise adjusted analytical model and each prior noise adjusted analytical model. For each set of inferred analytical model parameters, a measure of error between the set of inferred analytical model parameters and the set of predefined analytical model parameters associated with the noise adjusted analytical model from which the set of inferred analytical model parameters was derived can be determined. | 05-28-2009 |
20090268733 | Methods and Apparatus for Content Delivery via Application Level Multicast with Minimum Communication Delay - A method for constructing an overlay multicast tree to deliver data from a source to an identified group of nodes is provided in which a plurality of nodes are identified and mapped into multidimensional Euclidean space. A geometric region is constructed having a size that is the minimum size necessary to contain the source and all the nodes. Once constructed, a tree is created beginning at the source and including all of the nodes within the geometric region. | 10-29-2009 |
20090300183 | Distributed Joint Admission Control and Dynamic Resource Allocation in Stream Processing Networks - Methods and apparatus operating in a stream processing network perform load shedding and dynamic resource allocation so as to meet a pre-determined utility criterion. Load shedding is envisioned as an admission control problem encompassing source nodes admitting workflows into the stream processing network. A primal-dual approach is used to decompose the admission control and resource allocation problems. The admission control operates as a push-and-pull process with sources pushing workflows into the stream processing network and sinks pulling processed workflows from the network. A virtual queue is maintained at each node to account for both queue backlogs and credits from sinks. Nodes of the stream processing network maintain shadow prices for each of the workflows and share congestion information with neighbor nodes. At each node, resources are devoted to the workflow with the maximum product of downstream pressure and processing rate, where the downstream pressure is defined as the backlog difference between neighbor nodes. The primal-dual controller iteratively adjusts the admission rates and resource allocation using local congestion feedback. The iterative controlling procedure further uses an interior-point method to improve the speed of convergence towards optimal admission and allocation decisions. | 12-03-2009 |
20100034103 | Robust Jitter-Free Remote Clock Offset Measuring Method - A clock offset between a client and a server is measured by: (a) the client sending a request to the server; (b) upon receiving the request in step (a), the server optionally sending a server acknowledgement to the client; (c) upon the client receiving the server acknowledgement in step (b) or directly, if no acknowledgement was used, each of the client and the server proceeding to concurrently exchange their respective timestamps with each other a multiplicity (n) of times, thus forming a multiplicity (n) of timestamp exchanges; and (d) determining a plurality of apparent forwards and backwards delays based on the multiplicity (n) of timestamp exchanges. The preferred apparent forwards and backwards delays are then selected based on the minimum values (for each direction) determined in (d) above. The clock offset between client and server is then determined based on the preferred apparent forwards and backwards delays. | 02-11-2010 |
20100037081 | Method and Apparatus for Maintaining Time in a Computer System - A computer system is arranged with a circular buffer that includes a piecewise linear map from a high-resolution counter arranged to maintain International Atomic Time. The piecewise linear map includes a current leg that is currently being used and also a future leg that will be used in the future. The future leg is computed while the current leg is still being used. | 02-11-2010 |
20100076733 | METHOD AND APPARATUS FOR AUTOMATIC PERFORMANCE MODELING WITH LOAD DEPENDENT SERVICE TIMES AND OVERHEADS - A method for modeling performance of an information technology system having one or more servers for serving a number of types of transactions includes modeling a service time of each transaction type at each server and a processor overhead at each server as one of a polynomial, exponential, or logarithmic function of the average arrival rate of each transaction type at the corresponding server to generate service time and processor overhead functions and inferring optimal values of coefficients in the service time and processor overhead functions to generate a performance model of the information technology system. | 03-25-2010 |
20110106922 | OPTIMIZED EFFICIENT LPAR CAPACITY CONSOLIDATION - A method and system for optimizing a configuration of a set of LPARs and a set of servers that host the LPARs. Configuration data and optimization characteristics are received. By applying the configuration data and optimization characteristics, a best fit of the LPARs into the servers is determined, thereby determining an optimized configuration. The best fit is based on a variant of bin packing or multidimensional bin packing methodology. The optimized configuration is stored. In one embodiment, comparisons of shadow costs are utilized to determine an optimal placement of the LPARs in the servers. LPAR(s) in the set of LPARs are migrated to other server(s) in the set of servers, which results in the LPARs and servers being configured in the optimized configuration. | 05-05-2011 |
20110106968 | Techniques For Improved Clock Offset Measuring - In an exemplary aspect, methods, apparatus, and program products suitable for clock offset determination are disclosed. One method includes performing a number of exchanges of at least single bytes with another network node, where values of the single bytes are different for the exchanges. The method also includes capturing and storing timestamps for each of the number of exchanges performed on the network node. A second method includes capturing and saving arrival timestamps for each of a number of timing messages in a set of timing messages received from another network node. This second method also includes sending the timestamps to at least that other node in response to completion of the set of timing messages. | 05-05-2011 |
20110225277 | PLACEMENT OF VIRTUAL MACHINES BASED ON SERVER COST AND NETWORK COST - A method, information processing system, and computer program product manage server placement of virtual machines in an operating environment. A mapping of each virtual machine in a plurality of virtual machines to at least one server in a set of servers is determined. The mapping substantially satisfies a set of primary constraints associated with the set of servers. A plurality of virtual machine clusters is created. Each virtual machine cluster includes a set of virtual machines from the plurality of virtual machines. A server placement of one virtual machine in a cluster is interchangeable with a server placement of another virtual machine in the same cluster while satisfying the set of primary constraints. A server placement of the set of virtual machines within each virtual machine cluster on at least one mapped server is generated for each cluster. The server placement substantially satisfies a set of secondary constraints. | 09-15-2011 |
20110302578 | SYSTEM AND METHOD FOR VIRTUAL MACHINE MULTIPLEXING FOR RESOURCE PROVISIONING IN COMPUTE CLOUDS - A system and method for provisioning virtual machines in a virtualized environment includes determining a relationship between capacity need and performance for virtual machines (VMs) stored in memory storage media. Aggregate capacity needs for a plurality of VMs consolidated on a same physical server are estimated. VM combinations that yield capacity gains when provisioned jointly are identified such that when peaks and troughs are unaligned in capacity needs for a set of VMs, the set of VMs is provisioned together. | 12-08-2011 |
20120124318 | Method and Apparatus for Optimal Cache Sizing and Configuration for Large Memory Systems - A method for configuring a large hybrid memory subsystem having a large cache size in a computing system where one or more performance metrics of the computing system are expressed as an explicit function of configuration parameters of the memory subsystem and workload parameters of the memory subsystem. The computing system hosts applications that utilize the memory subsystem, and the performance metrics cover the use of the memory subsystem by the applications. A performance goal containing values for the performance metric is identified for the computing system. These values for the performance metrics are used in the explicit function of performance metrics, configuration parameters and workload parameters to calculate values for the configuration parameters that achieve the identified performance goal. The calculated values of the configuration parameters are implemented in the memory subsystem. | 05-17-2012 |
20130104140 | RESOURCE AWARE SCHEDULING IN A DISTRIBUTED COMPUTING ENVIRONMENT - Systems and methods for resource aware scheduling of processes in a distributed computing environment are described herein. One aspect provides for accessing at least one job and at least one resource on a distributed parallel computing system; generating a current reward value based on the at least one job and a current value associated with the at least one resource; generating a prospective reward value based on the at least one job and a prospective value associated with the at least one resource at a predetermined time; and scheduling the at least one job based on a comparison of the current reward value and the prospective reward value. Other embodiments and aspects are also described herein. | 04-25-2013 |
20130235992 | PREFERENTIAL EXECUTION OF METHOD CALLS IN HYBRID SYSTEMS - An affinity-based preferential call technique, in one aspect, may improve performance of distributed applications in a hybrid system having heterogeneous platforms. A segment of code in a program being executed on a processor may be intercepted or trapped at runtime. A platform is selected in the hybrid system for executing said segment of code, the platform determined to run the segment of code with the best efficiency among a plurality of platforms in the hybrid system. The segment of code is dynamically executed on the selected platform determined to run the segment of code with the best efficiency. | 09-12-2013 |
20130239128 | PREFERENTIAL EXECUTION OF METHOD CALLS IN HYBRID SYSTEMS - An affinity-based preferential call technique, in one aspect, may improve performance of distributed applications in a hybrid system having heterogeneous platforms. A segment of code in a program being executed on a processor may be intercepted or trapped at runtime. A platform is selected in the hybrid system for executing said segment of code, the platform determined to run the segment of code with the best efficiency among a plurality of platforms in the hybrid system. The segment of code is dynamically executed on the selected platform determined to run the segment of code with the best efficiency. | 09-12-2013 |
20130318305 | Method and Apparatus for Optimal Cache Sizing and Configuration for Large Memory Systems - A method for configuring a large hybrid memory subsystem having a large cache size in a computing system where one or more performance metrics of the computing system are expressed as an explicit function of configuration parameters of the memory subsystem and workload parameters of the memory subsystem. The computing system hosts applications that utilize the memory subsystem, and the performance metrics cover the use of the memory subsystem by the applications. A performance goal containing values for the performance metric is identified for the computing system. These values for the performance metrics are used in the explicit function of performance metrics, configuration parameters and workload parameters to calculate values for the configuration parameters that achieve the identified performance goal. The calculated values of the configuration parameters are implemented in the memory subsystem. | 11-28-2013 |
20130339965 | SEQUENTIAL COOPERATION BETWEEN MAP AND REDUCE PHASES TO IMPROVE DATA LOCALITY - Methods and arrangements for task scheduling. At least one job is assimilated from at least one node, each job comprising at least a map phase and a reduce phase, each of the map and reduce phases comprising at least one task. Progress of a map phase of at least one job is compared with progress of a reduce phase of at least one job. Launching of a task of a reduce phase of at least one job is scheduled in response to progress of the reduce phase of at least one job being less than progress of the map phase of at least one job. | 12-19-2013 |
20130339966 | SEQUENTIAL COOPERATION BETWEEN MAP AND REDUCE PHASES TO IMPROVE DATA LOCALITY - Methods and arrangements for task scheduling. At least one job is assimilated from at least one node, each job comprising at least a map phase and a reduce phase, each of the map and reduce phases comprising at least one task. Progress of a map phase of at least one job is compared with progress of a reduce phase of at least one job. Launching of a task of a reduce phase of at least one job is scheduled in response to progress of the reduce phase of at least one job being less than progress of the map phase of at least one job. | 12-19-2013 |
20140089932 | CONCURRENCY IDENTIFICATION FOR PROCESSING OF MULTISTAGE WORKFLOWS - A system and method may be utilized to identify concurrency levels of processing stages in a distributed system, identify common resources and bottlenecks in the distributed system using the identified concurrency levels, and allocate resources in the distributed system using the identified concurrency levels. | 03-27-2014 |
20140089934 | CONCURRENCY IDENTIFICATION FOR PROCESSING OF MULTISTAGE WORKFLOWS - A system and method may be utilized to identify concurrency levels of processing stages in a distributed system, identify common resources and bottlenecks in the distributed system using the identified concurrency levels, and allocate resources in the distributed system using the identified concurrency levels. | 03-27-2014 |
20140282583 | DYNAMIC MEMORY MANAGEMENT WITH THREAD LOCAL STORAGE USAGE - Methods and arrangements for dynamic memory management. Data are accepted for thread local storage, and memory usage is monitored in thread local storage. A memory block is allocated to thread local storage for storing accepted data, based on the monitored memory usage. Other variants and embodiments are broadly contemplated herein. | 09-18-2014 |
20140310236 | Out-of-Order Execution of Strictly-Ordered Transactional Workloads - A method of transaction processing includes receiving a plurality of transactions from an execution queue, acquiring a plurality of locks corresponding to data items needed for execution of the plurality of transactions, executing each transaction of the plurality of transactions upon acquiring all locks needed for execution of each transaction, and releasing the locks needed for execution of each transaction of the plurality of transactions upon committing each transaction. The plurality of transactions have a specified order within the execution queue, the plurality of locks are sequentially acquired based on the specified order of the plurality of transactions within the execution queue, and an order of execution of the plurality of transactions is different from the specified order of the plurality of transactions within the execution queue. | 10-16-2014 |
20140310239 | Executing Distributed Globally-Ordered Transactional Workloads in Replicated State Machines - A method of transaction replication includes transmitting at least one transaction received during an epoch from a local node to remote nodes of a domain of 2N+1 nodes at the end of an epoch (N is an integer greater than or equal to 1). The remote nodes log receipt of the at least one transaction, notify the local node of the receipt of the at least one transaction, transmit the at least one transaction to all of the 2N+1 nodes, and add the at least one transaction to an execution order upon receiving at least N+1 copies of the at least one transaction. | 10-16-2014 |
20140310240 | EXECUTING DISTRIBUTED GLOBALLY-ORDERED TRANSACTIONAL WORKLOADS IN REPLICATED STATE MACHINES - A method of transaction replication includes transmitting at least one transaction received during an epoch from a local node to remote nodes of a domain of 2N+1 nodes at the end of an epoch (N is an integer greater than or equal to 1). The remote nodes log receipt of the at least one transaction, notify the local node of the receipt of the at least one transaction, transmit the at least one transaction to all of the 2N+1 nodes, and add the at least one transaction to an execution order upon receiving at least N+1 copies of the at least one transaction. | 10-16-2014 |
20140310253 | OUT-OF-ORDER EXECUTION OF STRICTLY-ORDERED TRANSACTIONAL WORKLOADS - A method of transaction processing includes receiving a plurality of transactions from an execution queue, acquiring a plurality of locks corresponding to data items needed for execution of the plurality of transactions, executing each transaction of the plurality of transactions upon acquiring all locks needed for execution of each transaction, and releasing the locks needed for execution of each transaction of the plurality of transactions upon committing each transaction. The plurality of transactions have a specified order within the execution queue, the plurality of locks are sequentially acquired based on the specified order of the plurality of transactions within the execution queue, and an order of execution of the plurality of transactions is different from the specified order of the plurality of transactions within the execution queue. | 10-16-2014 |
20140310712 | SEQUENTIAL COOPERATION BETWEEN MAP AND REDUCE PHASES TO IMPROVE DATA LOCALITY - Methods and arrangements for task scheduling. A job is accepted, the job comprising a plurality of phases, each of the phases comprising at least one task. For each of a plurality of slots, a fetching cost associated with receipt of one or more of the tasks is determined. The slots are grouped into a plurality of sets. A pair of thresholds is determined for each of the sets, the thresholds being associated with the determined fetching costs and comprising upper and lower numerical bounds for guiding receipt of one or more of the tasks. Other variants and embodiments are broadly contemplated herein. | 10-16-2014 |
20140380320 | JOINT OPTIMIZATION OF MULTIPLE PHASES IN LARGE DATA PROCESSING - Methods and arrangements for task scheduling. A plurality of jobs is received, each job comprising at least a map phase, a copy/shuffle phase and a reduce phase. For each job, there are determined a map phase execution time and a copy/shuffle phase execution time. Each job is classified into at least one group based on at least one of: the determined map phase execution time and the determined copy/shuffle phase execution time. The plurality of jobs are executed via processor sharing, and the executing includes determining a similarity measure between jobs based on current job execution progress. Other variants and embodiments are broadly contemplated herein. | 12-25-2014 |
20150019198 | METHOD TO APPLY PERTURBATION FOR RESOURCE BOTTLENECK DETECTION AND CAPACITY PLANNING - Perturbation is induced by varying a supply amount of a resource type in a system and measuring performance of a software entity at multiple variation levels of the supply amount of the resource type in the system. A model may be built that characterizes a relationship between the measured performance and the variation levels. The model may be applied to detect a resource bottleneck. The model may also be applied for capacity planning. | 01-15-2015 |
20150020076 | METHOD TO APPLY PERTURBATION FOR RESOURCE BOTTLENECK DETECTION AND CAPACITY PLANNING - Perturbation is induced by varying a supply amount of a resource type in a system and measuring performance of a software entity at multiple variation levels of the supply amount of the resource type in the system. A model may be built that characterizes a relationship between the measured performance and the variation levels. The model may be applied to detect a resource bottleneck. The model may also be applied for capacity planning. | 01-15-2015 |
20150066598 | PREDICTING SERVICE DELIVERY COSTS UNDER BUSINESS CHANGES - A method for predicting service delivery costs for a changed business requirement includes detecting an infrastructure change corresponding to the changed business requirement affecting a computer server, deriving a service delivery workload change of the computer server from the infrastructure change, and determining a service delivery cost of the computer server based on the service delivery workload change. | 03-05-2015 |
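Several of the entries above describe concrete algorithms, and the short Python sketches that follow are editorial illustrations of those ideas, not the claimed implementations. The first concerns the backpressure-style allocation in applications 20080244072, 20080304516, and 20090300183: at each node, each outgoing link serves the commodity with the largest backlog difference toward its downstream neighbor, limited by the link capacity. The per-node queue and capacity data structures below are assumptions of the sketch.

```python
def backpressure_step(queues, links, capacity):
    """One synchronous backpressure iteration over a stream-processing graph.

    queues[node][commodity] is the backlog at `node` for `commodity`;
    links is a list of directed edges (u, v); capacity[(u, v)] limits how much
    flow one step may push across (u, v).  Each link serves the commodity with
    the largest positive backlog difference queues[u][c] - queues[v][c], i.e.
    the "downstream pressure" described in the abstracts.  A real system would
    also absorb (drain) whatever reaches a sink node.
    """
    transfers = []
    for u, v in links:
        if not queues[u]:
            continue  # nothing queued at the upstream node
        best = max(queues[u], key=lambda c: queues[u][c] - queues[v].get(c, 0.0))
        pressure = queues[u][best] - queues[v].get(best, 0.0)
        if pressure <= 0:
            continue  # no commodity has positive downstream pressure on this link
        amount = min(capacity[(u, v)], queues[u][best])
        queues[u][best] -= amount
        queues[v][best] = queues[v].get(best, 0.0) + amount
        transfers.append((u, v, best, amount))
    return transfers

queues = {"src": {"c1": 8.0, "c2": 2.0}, "mid": {"c1": 3.0}, "sink": {}}
links = [("src", "mid"), ("mid", "sink")]
capacity = {("src", "mid"): 2.0, ("mid", "sink"): 2.0}
print(backpressure_step(queues, links, capacity))
# [('src', 'mid', 'c1', 2.0), ('mid', 'sink', 'c1', 2.0)]
```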
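The clock-offset entries (20090037758, 20100034103, 20110106968) all rest on four-timestamp bookkeeping for a request/response exchange. A minimal sketch, assuming each exchange yields timestamps (t1, t2, t3, t4) and that taking the minimum apparent delays over many exchanges suppresses queueing jitter:

```python
def estimate_offset(exchanges):
    """Estimate a remote clock's offset relative to the local clock.

    `exchanges` is a list of (t1, t2, t3, t4) tuples from repeated exchanges:
      t1 = local send time, t2 = remote receive time,
      t3 = remote send time, t4 = local receive time.
    Each apparent forward delay (t2 - t1) equals offset + true forward delay,
    and each apparent backward delay (t4 - t3) equals -offset + true backward
    delay, so taking the minimum of each over many exchanges suppresses
    queueing jitter before the offset is formed.
    """
    apparent_fwd = min(t2 - t1 for t1, t2, t3, t4 in exchanges)
    apparent_bwd = min(t4 - t3 for t1, t2, t3, t4 in exchanges)
    offset = (apparent_fwd - apparent_bwd) / 2.0
    min_round_trip = apparent_fwd + apparent_bwd
    return offset, min_round_trip

# Remote clock runs ~5 units ahead; the network adds variable one-way delays.
samples = [(0.0, 5.8, 5.9, 1.5), (10.0, 15.4, 15.5, 11.2), (20.0, 25.3, 25.4, 21.1)]
print(estimate_offset(samples))  # offset estimate close to +5.0
```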
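Application 20110106922 determines a best fit of LPARs into servers using a variant of bin packing. The standard first-fit-decreasing heuristic below is only a stand-in for the patented variant, which additionally weighs shadow costs and can be multidimensional; the single CPU-demand dimension is an assumption of the sketch.

```python
def first_fit_decreasing(lpar_demands, server_capacity):
    """Pack LPAR CPU demands onto few servers using first-fit decreasing.

    lpar_demands maps LPAR name -> demand; server_capacity is the per-server
    budget.  LPARs are placed largest-first into the first server with room,
    opening a new server only when none fits.
    """
    servers = []  # each entry is (remaining_capacity, [placed LPAR names])
    for name, demand in sorted(lpar_demands.items(), key=lambda kv: -kv[1]):
        for i, (remaining, placed) in enumerate(servers):
            if demand <= remaining:
                servers[i] = (remaining - demand, placed + [name])
                break
        else:
            servers.append((server_capacity - demand, [name]))
    return [placed for _, placed in servers]

print(first_fit_decreasing({"lpar1": 6, "lpar2": 5, "lpar3": 4, "lpar4": 3}, 10))
# [['lpar1', 'lpar3'], ['lpar2', 'lpar4']] -> two servers instead of four
```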
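Application 20110302578 provisions VMs jointly when their capacity peaks and troughs are unaligned. The sketch below quantifies that gain by comparing per-VM percentile sizing against percentile sizing of the aggregate trace; percentile-based sizing and the synthetic anti-correlated traces are assumptions of the illustration, not the patented estimator.

```python
import numpy as np

def multiplexing_gain(vm_traces, percentile=95):
    """Capacity saved by provisioning a set of VMs jointly rather than separately.

    vm_traces maps VM name -> array of sampled capacity demands over time.
    Separate provisioning reserves each VM's own percentile demand; joint
    provisioning reserves the percentile of the summed trace.  When peaks and
    troughs are unaligned, the joint figure is smaller.
    """
    separate = sum(np.percentile(t, percentile) for t in vm_traces.values())
    joint = np.percentile(np.sum(list(vm_traces.values()), axis=0), percentile)
    return separate - joint

rng = np.random.default_rng(0)
t = np.arange(288)  # 5-minute samples over one day
vm_a = 50 + 40 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, 288)
vm_b = 50 - 40 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, 288)  # anti-correlated
print(multiplexing_gain({"a": vm_a, "b": vm_b}))  # large positive gain from joint sizing
```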
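The map/reduce coupling entries (20130339965, 20130339966) launch a reduce task only while reduce-phase progress lags map-phase progress. A hedged sketch of that gating rule, assuming a job object that exposes finished and total task counts per phase (the `Job` snapshot type here is hypothetical):

```python
from collections import namedtuple

# Hypothetical job snapshot; a real scheduler would read live progress counters.
Job = namedtuple("Job", "maps_finished maps_total reduces_finished reduces_total")

def should_launch_reduce(job):
    """Launch another reduce task only while reduce progress lags map progress.

    This keeps reducers from being started early and sitting idle on slots
    while waiting for map output, which is the coupling the abstracts describe.
    """
    map_progress = job.maps_finished / max(job.maps_total, 1)
    reduce_progress = job.reduces_finished / max(job.reduces_total, 1)
    return reduce_progress < map_progress

print(should_launch_reduce(Job(40, 100, 10, 50)))  # True: reduce at 20% lags map at 40%
print(should_launch_reduce(Job(40, 100, 25, 50)))  # False: reduce at 50% has caught up
```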
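The replicated-state-machine entries (20140310239, 20140310240) append a transaction to the execution order once at least N+1 of the 2N+1 nodes have supplied a copy. The toy class below models only that counting rule; epoch batching, logging, and message transport are omitted, and the class and method names are hypothetical.

```python
from collections import defaultdict

class ReplicaOrderingSketch:
    """Toy model of the quorum-based ordering rule, not the patented protocol.

    In a domain of 2N+1 nodes, a node appends a transaction to its execution
    order only after seeing at least N+1 copies of it (its own receipt plus
    rebroadcasts from peers), so a majority of nodes has logged it.
    """

    def __init__(self, n):
        self.quorum = n + 1
        self.copies = defaultdict(int)
        self.execution_order = []

    def receive_copy(self, txn_id):
        self.copies[txn_id] += 1
        if self.copies[txn_id] == self.quorum:
            self.execution_order.append(txn_id)

replica = ReplicaOrderingSketch(n=1)  # domain of 2N+1 = 3 nodes
replica.receive_copy("txn-42")        # copy from the originating node
replica.receive_copy("txn-42")        # rebroadcast from a peer
print(replica.execution_order)        # ['txn-42'] once N+1 = 2 copies have arrived
```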
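The perturbation entries (20150019198, 20150020076) vary the supply of a resource and fit a model relating measured performance to the variation levels. As one hedged possibility, a linear fit of throughput against the varied supply gives an elasticity-style score that separates bottlenecked from non-bottlenecked resources; the linear model form, the score, and the example numbers are assumptions of this sketch.

```python
import numpy as np

def bottleneck_score(supply_levels, measured_throughput):
    """Elasticity-style score from a linear fit of throughput vs. resource supply.

    The resource supply (for example a CPU cap) is varied to several levels and
    performance is measured at each level; a strongly positive normalized slope
    suggests the resource is a bottleneck, a near-zero slope suggests it is not.
    """
    slope = np.polyfit(supply_levels, measured_throughput, deg=1)[0]
    return slope * np.mean(supply_levels) / np.mean(measured_throughput)

print(bottleneck_score([1, 2, 3, 4], [100, 195, 290, 410]))  # ~1.0: scales with supply
print(bottleneck_score([1, 2, 3, 4], [400, 402, 399, 405]))  # ~0.0: supply not limiting
```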