Patent application number | Description | Published |
20080235684 | Heuristic Based Affinity Dispatching for Shared Processor Partition Dispatching - A mechanism is provided for determining whether to use cache affinity as a criterion for software thread dispatching in a shared processor logical partitioning data processing system. The server firmware may store data about when and/or how often logical processors are dispatched. Given these data, the operating system may collect metrics. Using the logical processor metrics, the operating system may determine whether cache affinity is likely to provide a significant performance benefit relative to the cost of dispatching a particular logical processor to the operating system. | 09-25-2008 |
20090033490 | System and Method of Dynamically Weighted Analysis for Intrusion Decision-Making - An intrusion detection mechanism is provided for flexible, automatic, thorough, and consistent security checking and vulnerability resolution in a heterogeneous environment. The mechanism may provide a predefined number of default intrusion analysis approaches, such as signature-based, anomaly-based, scan-based, and danger theory. The intrusion detection mechanism also allows a limitless number of intrusion analysis approaches to be added on the fly. Using an intrusion detection skin, the mechanism allows various weights to be assigned to specific intrusion analysis approaches. The mechanism may adjust these weights dynamically. The score ratio used to determine whether an intrusion occurred can be tailored and adjusted dynamically. Also, multiple security policies for any type of computing element may be enforced. | 02-05-2009 |
20090119474 | PARTITION REDISPATCHING USING PAGE TRACKING - Illustrative embodiments provide a computer-implemented method and data processing system for redispatching a partition by tracking a set of memory pages belonging to the dispatched partition. In one illustrative embodiment, responsive to determining a page address miss in a page addressing buffer, the method finds an effective page address to real page address mapping for the miss, creating a found real page address and page size combination, and saves that combination as an entry in a set of entries in an array. Responsive to determining that the dispatched partition has become an undispatched partition, the method creates a preserved array from the array. Responsive to determining that the undispatched partition is redispatched, the method analyzes each entry of the preserved array for a compressed page and, responsive to finding a compressed page, invokes a partition management firmware function to decompress the compressed page before the partition is redispatched. | 05-07-2009 |
20090125589 | RECONNECTION TO AND MIGRATION OF ELECTRONIC COLLABORATION SESSIONS - The invention generally relates to electronic collaboration sessions and, more particularly, to systems and methods for providing reconnection to and migration of electronic collaboration sessions. A method for managing a collaboration session includes providing a computer infrastructure structured and arranged to store data regarding a plurality of clients of a collaboration session, and monitor a connection of each of the plurality of clients to a host system. The computer infrastructure is further operable to (i) migrate the plurality of clients to a new host system after determining that a number of the plurality of clients experiencing connection problems with the host system exceeds a threshold value and/or (ii) present customized summary data to at least one of the plurality of clients after the at least one of the plurality of clients reconnects to the collaboration session after having been disconnected from the collaboration session. | 05-14-2009 |
20090182893 | CACHE COHERENCE IN A VIRTUAL MACHINE MANAGED SYSTEM - A method, a system, and computer readable program code for managing cache coherence in a virtual machine managed system are provided. In response to a processor issuing a message to be broadcast, a determination is made as to whether the processor is part of a virtual domain. In response to a determination that the processor is part of the virtual domain, the message and a first bit mask are sent from a source node to a destination node. In response to receiving the message and the first bit mask, one of a primary link or a secondary link is selected to send the message and the first bit mask over, forming a selected link. The message and the first bit mask are sent to the destination node over the selected link. | 07-16-2009 |
20090204959 | METHOD AND APPARATUS FOR VIRTUAL PROCESSOR DISPATCHING TO A PARTITION BASED ON SHARED MEMORY PAGES - The present invention provides a computer implemented method, data processing system, and computer program product for mapping and dispatching virtual processors in a data processing system having at least a first partition and a second partition. The data processing system runs a first partition on a virtual processor during a first timeslice. The data processing system identifies at least one physical page used by the first partition and the second partition. The data processing system maps the at least one physical page to the first partition and the second partition. The data processing system determines a fitness value based on the mapping. The data processing system dispatches the virtual processor to the second partition on a second timeslice based on the fitness value, wherein the second timeslice immediately succeeds the first timeslice, whereby the at least one physical page remains in cache during at least the first timeslice and the second timeslice. | 08-13-2009 |
20090217283 | SYSTEM UTILIZATION THROUGH DEDICATED UNCAPPED PARTITIONS - A mechanism for improving system resource utilization in a data processing system is provided. A determination is made as to whether there is at least one ceded virtual processor in a plurality of virtual processors in a shared resource pool. Responsive to existence of the at least one ceded virtual processor, a determination is made as to whether there is at least one dedicated logical partition configured for a hybrid mode. Responsive to identifying at least one hybrid configured dedicated logical partition, a determination is made as to whether the at least one hybrid configured dedicated logical partition requires additional virtual processor cycles. If the at least one hybrid configured dedicated logical partition requires additional virtual processor cycles, the at least one ceded virtual processor is deallocated from the plurality of virtual processors and allocated to a surrogate resource pool for use by the at least one hybrid configured dedicated logical partition. | 08-27-2009 |
20100115522 | MECHANISM TO CONTROL HARDWARE MULTI-THREADED PRIORITY BY SYSTEM CALL - A method, a system and a computer program product for controlling the hardware priority of hardware threads in a data processing system. A Thread Priority Control (TPC) utility assigns a primary level and one or more secondary levels of hardware priority to a hardware thread. When a hardware thread initiates execution in the absence of a system call, the TPC utility enables execution based on the primary level. When the hardware thread initiates execution within a system call, the TPC utility dynamically adjusts execution from the primary level to the secondary level associated with the system call. The TPC utility adjusts hardware priority levels in order to: (a) raise the hardware priority of one hardware thread relative to another; (b) reduce energy consumed by the hardware thread; and (c) fulfill requirements of time critical hardware sections. | 05-06-2010 |
20100223622 | Non-Uniform Memory Access (NUMA) Enhancements for Shared Logical Partitions - In a NUMA-topology computer system that includes multiple nodes and multiple logical partitions, some of which may be dedicated and others of which are shared, NUMA optimizations are enabled in shared logical partitions. This is done by specifying a home node parameter in each virtual processor assigned to a logical partition. When a task is created by an operating system in a shared logical partition, a home node is assigned to the task, and the operating system attempts to assign the task to a virtual processor that has a home node that matches the home node for the task. The partition manager then attempts to assign virtual processors to their corresponding home nodes. If this can be done, NUMA optimizations may be performed without the risk of reducing the performance of the shared logical partition. | 09-02-2010 |
20100274938 | PERIPHERAL ADAPTER INTERRUPT FREQUENCY CONTROL BY ESTIMATING PROCESSOR LOAD AT THE PERIPHERAL ADAPTER - Interrupt frequency control by estimating processor load in the peripheral adapter provides adaptive interrupt latency to improve performance in a processing system. A mathematical function of the depth of one or more queues of the adapter is compared to its historical value in order to provide an estimate of processor load. The estimated processor load is then used to set a parameter that controls the frequency of an interrupt generator, which may be controlled by setting an interrupt queue depth threshold, packet frequency threshold or interrupt hold-off time value. The mathematical function may be the ratio of the transmit queue depth to the receive queue depth and the historical value may be predetermined, user-settable, obtained during a calibration interval or obtained by taking a long-term average of the mathematical function of the queue depths. | 10-28-2010 |
20110010709 | Optimizing System Performance Using Spare Cores in a Virtualized Environment - A mechanism for optimizing system performance using spare processing cores in a virtualized environment. When it is detected that a workload partition needs to run on a virtual processor in the virtualized system, the state of the virtual processor is changed to a wait state. A first node comprising memory that is local to the workload partition is determined. A determination is also made as to whether a non-spare processor core in the first node is available to run the workload partition. If no non-spare processor core is available, a free non-spare processor core in a second node is located, and the state of the free non-spare processor core in the second node is changed to an inactive state. The state of a spare processor core in the first node is changed to an active state, and the workload partition is dispatched to the spare processor core in the first node for execution. | 01-13-2011 |
20110107031 | Extended Cache Capacity - A method, programmed medium and system are provided for enabling a core's cache capacity to be increased by using the caches of the disabled or non-enabled cores on the same chip. Caches of disabled or non-enabled cores on a chip are made accessible to store cachelines for those chip cores that have been enabled, thereby extending cache capacity of enabled cores. | 05-05-2011 |
20110145505 | Assigning Cache Priorities to Virtual/Logical Processors and Partitioning a Cache According to Such Priorities - Mechanisms are provided, for implementation in a data processing system having at least one physical processor and at least one associated cache memory, for allocating cache resources of the at least one cache memory to virtual processors of the data processing system. The mechanisms identify a plurality of high priority virtual processors in the data processing system. The mechanisms further determine a percentage of cache lines of the at least one cache memory to be assigned to high priority virtual processors. Moreover, the mechanisms mark a portion of the cache lines in the at least one cache memory as being evictable by only high priority virtual processors based on the determined percentage of cache lines to be assigned to high priority virtual processors. The marked portion of the cache lines cannot be evicted by lower priority virtual processors having a priority lower than the high priority virtual processors. | 06-16-2011 |
20110161539 | OPPORTUNISTIC USE OF LOCK MECHANISM TO REDUCE WAITING TIME OF THREADS TO ACCESS A SHARED RESOURCE - Embodiments of the invention provide a method, apparatus and computer program product for enabling a thread to acquire a lock associated with a shared resource, when a locking mechanism is used therewith, wherein each embodiment reduces waiting time and enhances efficiency in using the shared resource. One embodiment is associated with a plurality of processors, which includes two or more processors that each provides a specified thread to access a shared resource. The shared resource can only be accessed by one thread at a given time, a locking mechanism enables a first one of the specified threads to access the shared resource while each of the other specified threads is retained in a waiting queue, and a second one of the specified threads occupies a position of highest priority in the queue. The method includes the step of identifying a time period between a time when the first specified thread releases access to the shared resource, and a later time when the second specified thread becomes enabled to access the shared resource. Responsive to an additional thread that is not one of the specified threads being provided by a processor to access the shared resource during the identified time period, it is determined whether a first prespecified criterion pertaining to the specified threads retained in the queue has been met. Responsive to the first criterion being met, the method determines whether a second prespecified criterion has been met, wherein the second criterion is that the number of specified threads in the queue has not decreased since a specified prior time. Responsive to the second criterion being met, the method then decides whether to enable the additional thread to access the shared resource before the second specified thread accesses the resource. | 06-30-2011 |
20110258468 | OPTIMIZING POWER MANAGEMENT IN PARTITIONED MULTICORE VIRTUAL MACHINE PLATFORMS BY UNIFORM DISTRIBUTION OF A REQUESTED POWER REDUCTION BETWEEN ALL OF THE PROCESSOR CORES - Requests for power reduction are handled by first enabling any partition to request an amount of power change, e.g., a reduction. In response to the request for power reduction, an equal proportion of the whole amount of the power reduction is distributed among each of a set of cores providing the entitlements to the partitions, and the entitlement of the requesting partition is reduced by an amount corresponding to the whole amount of the power change. | 10-20-2011 |
20110276742 | Characterizing Multiple Resource Utilization Using a Relationship Model to Optimize Memory Utilization in a Virtual Machine Environment - An approach is provided that uses a hypervisor to allocate a shared memory pool amongst a set of partitions (e.g., guest operating systems) being managed by the hypervisor. The hypervisor retrieves memory related metrics from shared data structures stored in a memory, with each of the shared data structures corresponding to a different one of the partitions. The memory related metrics correspond to a usage of the shared memory pool allocated to the corresponding partition. The hypervisor identifies a memory stress associated with each of the partitions with this identification based in part on the memory related metrics retrieved from the shared data structures. The hypervisor then reallocates the shared memory pool amongst the plurality of partitions based on the identified memory stress of the plurality of partitions. | 11-10-2011 |
20120005401 | PAGE BUFFERING IN A VIRTUALIZED, MEMORY SHARING CONFIGURATION - An apparatus includes a processor and a volatile memory that is configured to be accessible in an active memory sharing configuration. The apparatus includes a machine-readable medium encoded with instructions executable by the processor. The instructions include first virtual machine instructions configured to access the volatile memory with a first virtual machine. The instructions include second virtual machine instructions configured to access the volatile memory with a second virtual machine. The instructions include virtual machine monitor instructions configured to page data out from a shared memory to a reserved memory section in the volatile memory responsive to the first virtual machine or the second virtual machine paging the data out from the shared memory or paging the data in to the shared memory. The shared memory is shared across the first virtual machine and the second virtual machine. The volatile memory includes the shared memory. | 01-05-2012 |
20120036292 | POLLING IN A VIRTUALIZED INFORMATION HANDLING SYSTEM - A software thread is dispatched for causing the system to poll a device for determining whether a condition has occurred. Subsequently, the software thread is undispatched and, in response thereto, an interrupt is enabled on the device, so that the device is enabled to generate the interrupt in response to an occurrence of the condition, and so that the system ceases polling the device for determining whether the condition has occurred. Eventually, the software thread is redispatched and, in response thereto, the interrupt is disabled on the device, so that the system resumes polling the device for determining whether the condition has occurred. | 02-09-2012 |
20120124254 | ESTIMATING PROCESSOR LOAD USING PERIPHERAL ADAPTER QUEUE BEHAVIOR - Techniques for estimating processor load by using queue depth information of a peripheral adapter provides processor loading information that can be used to adapt interrupt latency to improve performance in a processing system. A mathematical function of the depth of one or more queues of the adapter is compared to its historical value in order to provide an estimate of processor load. The estimated processor load can then be used to set a parameter that controls the frequency of an interrupt generator. The mathematical function may be the ratio of the transmit queue depth to the receive queue depth and the historical value may be predetermined, user-settable, obtained during a calibration interval or obtained by taking a long-term average of the mathematical function of the queue depths. | 05-17-2012 |
20130080712 | Non-Uniform Memory Access (NUMA) Enhancements for Shared Logical Partitions - In a NUMA-topology computer system that includes multiple nodes and multiple logical partitions, some of which may be dedicated and others of which are shared, NUMA optimizations are enabled in shared logical partitions. This is done by specifying a home node parameter in each virtual processor assigned to a logical partition. When a task is created by an operating system in a shared logical partition, a home node is assigned to the task, and the operating system attempts to assign the task to a virtual processor that has a home node that matches the home node for the task. The partition manager then attempts to assign virtual processors to their corresponding home nodes. If this can be done, NUMA optimizations may be performed without the risk of reducing the performance of the shared logical partition. | 03-28-2013 |
20130159614 | PAGE BUFFERING IN A VIRTUALIZED, MEMORY SHARING CONFIGURATION - An apparatus includes a processor and a volatile memory that is configured to be accessible in an active memory sharing configuration. The apparatus includes a machine-readable medium encoded with instructions executable by the processor. The instructions include first virtual machine instructions configured to access the volatile memory with a first virtual machine. The instructions include second virtual machine instructions configured to access the volatile memory with a second virtual machine. The instructions include virtual machine monitor instructions configured to page data out from a shared memory to a reserved memory section in the volatile memory responsive to the first virtual machine or the second virtual machine paging the data out from the shared memory or paging the data in to the shared memory. The shared memory is shared across the first virtual machine and the second virtual machine. The volatile memory includes the shared memory. | 06-20-2013 |
20130346967 | Determining Placement Fitness For Partitions Under A Hypervisor - A technique for determining placement fitness for partitions under a hypervisor in a host computing system having non-uniform memory access (NUMA) nodes. In an embodiment, a partition resource specification is received from a partition score requester. The partition resource specification identifies a set of computing resources needed for a virtual machine partition to be created by a hypervisor in the host computing system. Resource availability within the NUMA nodes of the host computing system is assessed to determine possible partition placement options. A partition fitness score of a most suitable one of the partition placement options is calculated. The partition fitness score is reported to the partition score requester. | 12-26-2013 |
20130346972 | Determining Placement Fitness For Partitions Under A Hypervisor - A technique for determining placement fitness for partitions under a hypervisor in a host computing system having non-uniform memory access (NUMA) nodes. In an embodiment, a partition resource specification is received from a partition score requester. The partition resource specification identifies a set of computing resources needed for a virtual machine partition to be created by a hypervisor in the host computing system. Resource availability within the NUMA nodes of the host computing system is assessed to determine possible partition placement options. A partition fitness score of a most suitable one of the partition placement options is calculated. The partition fitness score is reported to the partition score requester. | 12-26-2013 |
20140173595 | HYBRID VIRTUAL MACHINE CONFIGURATION MANAGEMENT - A system and technique for hybrid virtual machine configuration management includes a processor and executable logic to: assign to a first set of virtual resources associated with a virtual machine a first priority, the first set associated with entitled resources for the virtual machine; assign to a second set of virtual resources associated with the virtual machine a second priority lower than the first priority, wherein the first and second sets when combined exceed the entitled resources for the virtual machine; map the first set to a first physical resource of a pool of shared physical resources, the pool of shared physical resources allocatable to the first and second sets, wherein the first physical resource comprises a desired affinity level to a second physical resource allocated to the virtual machine; and preferentially allocate the first physical resource to the first set of virtual resources. | 06-19-2014 |
20140173597 | HYBRID VIRTUAL MACHINE CONFIGURATION MANAGEMENT - According to one aspect of the present disclosure, a method and technique for hybrid virtual machine configuration management is disclosed. The method includes: assigning to a first set of virtual resources associated with entitled resources of a virtual machine a first priority; assigning to a second set of virtual resources associated with the virtual machine a second priority lower than the first priority, wherein the first and second sets when combined exceed the entitled resources for the virtual machine; mapping the first set of virtual resources to a first physical resource of a pool of shared physical resources allocatable to the first and second sets of virtual resources, wherein the first physical resource comprises a desired affinity level to a second physical resource allocated to the virtual machine; and preferentially allocating the first physical resource to the first set of virtual resources. | 06-19-2014 |
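
The affinity-dispatch heuristic of application 20080235684 reduces to a simple decision: reuse the physical processor a logical processor last ran on only when the dispatch history suggests its cache footprint is still warm. Below is a minimal Python sketch of that decision, assuming the firmware exposes a last-dispatch timestamp and last physical CPU per logical processor; the `CACHE_RESIDENCY_WINDOW_S` threshold and the data layout are illustrative, not taken from the patent.

```python
import time

# Illustrative threshold: if the logical processor ran on this physical CPU
# recently enough, its working set is assumed to still be (partly) in cache.
CACHE_RESIDENCY_WINDOW_S = 0.005

def choose_physical_cpu(lp, idle_cpus, now=None):
    """Prefer the logical processor's previous physical CPU only when the
    dispatch metrics suggest cache affinity is still worth paying for."""
    now = time.monotonic() if now is None else now
    recently_ran = (now - lp["last_dispatch_time"]) < CACHE_RESIDENCY_WINDOW_S
    if recently_ran and lp["last_cpu"] in idle_cpus:
        return lp["last_cpu"]          # affinity likely still pays off
    return next(iter(idle_cpus))       # otherwise take any idle CPU

lp = {"last_cpu": 3, "last_dispatch_time": time.monotonic()}
print(choose_physical_cpu(lp, idle_cpus={1, 3, 5}))
```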
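
Application 20090033490 combines several intrusion analysis approaches through per-approach weights and a tunable decision threshold. A minimal sketch of that scoring loop follows; the approach names come from the abstract, while the weight values, the 1.1/0.9 adjustment factors, and the `THRESHOLD` constant are assumptions made for illustration.

```python
def intrusion_score(approach_scores, weights):
    """Combine per-approach scores (0.0-1.0) using the current weights."""
    total_weight = sum(weights[name] for name in approach_scores)
    return sum(approach_scores[name] * weights[name]
               for name in approach_scores) / total_weight

def adjust_weights(weights, feedback):
    """Nudge weights toward approaches that agreed with confirmed outcomes."""
    for name, was_correct in feedback.items():
        weights[name] *= 1.1 if was_correct else 0.9
    return weights

# Default analysis approaches named in the abstract; weights and the
# decision threshold are illustrative.
weights = {"signature": 1.0, "anomaly": 1.0, "scan": 0.5, "danger_theory": 0.5}
scores = {"signature": 0.9, "anomaly": 0.4, "scan": 0.2, "danger_theory": 0.7}

THRESHOLD = 0.6  # tunable score ratio
if intrusion_score(scores, weights) >= THRESHOLD:
    print("intrusion suspected")

weights = adjust_weights(weights, {"signature": True, "anomaly": False})
```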
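
Applications 20100223622 and 20130080712 hinge on tagging every task and every virtual processor with a home node and matching the two at dispatch time. The sketch below is a deliberately simplified Python model of that matching step; the hash-based home-node assignment and the dataclass layout are illustrative choices, not details from the patents.

```python
import random
from dataclasses import dataclass

@dataclass
class VirtualProcessor:
    vp_id: int
    home_node: int   # home node parameter set by the partition manager

@dataclass
class Task:
    name: str
    home_node: int

def assign_home_node(task_name, nodes):
    """OS-side: give a new task a home node (hash-based here for simplicity)."""
    return Task(task_name, nodes[hash(task_name) % len(nodes)])

def pick_virtual_processor(task, vps):
    """Prefer a virtual processor whose home node matches the task's home node;
    fall back to any virtual processor if no match exists."""
    matching = [vp for vp in vps if vp.home_node == task.home_node]
    return random.choice(matching) if matching else random.choice(vps)

vps = [VirtualProcessor(0, 0), VirtualProcessor(1, 0), VirtualProcessor(2, 1)]
task = assign_home_node("db_worker", nodes=[0, 1])
print(pick_virtual_processor(task, vps))
```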
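
Applications 20100274938 and 20120124254 estimate host processor load from a mathematical function of adapter queue depths, here taken as the ratio of transmit queue depth to receive queue depth compared against its long-term average, and then map the estimate to an interrupt coalescing parameter. The following sketch assumes an interrupt hold-off time is the controlled parameter; the window length and hold-off bounds are illustrative values.

```python
from collections import deque

class AdapterLoadEstimator:
    """Estimate host processor load from adapter queue depths and use it
    to pick an interrupt hold-off time (illustrative values only)."""

    def __init__(self, history_len=64, min_holdoff_us=0, max_holdoff_us=200):
        self.history = deque(maxlen=history_len)  # long-term average window
        self.min_holdoff_us = min_holdoff_us
        self.max_holdoff_us = max_holdoff_us

    def _ratio(self, tx_depth, rx_depth):
        # Ratio of transmit queue depth to receive queue depth; a growing
        # ratio suggests the host is falling behind in servicing the adapter.
        return tx_depth / max(rx_depth, 1)

    def update(self, tx_depth, rx_depth):
        """Return an interrupt hold-off time in microseconds."""
        current = self._ratio(tx_depth, rx_depth)
        baseline = (sum(self.history) / len(self.history)) if self.history else current
        self.history.append(current)

        # Load estimate: how far the current ratio sits above its historical value.
        load = min(max((current - baseline) / max(baseline, 1e-6), 0.0), 1.0)

        # Heavier estimated load -> longer hold-off (fewer, batched interrupts).
        return self.min_holdoff_us + load * (self.max_holdoff_us - self.min_holdoff_us)

est = AdapterLoadEstimator()
for tx, rx in [(4, 16), (8, 12), (24, 6), (40, 4)]:
    print(est.update(tx, rx))
```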
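
Application 20110145505 marks a percentage of cache lines as evictable only by high-priority virtual processors. The toy model below captures just that marking and the eviction check; the line count, percentage, and class layout are illustrative assumptions.

```python
class PriorityPartitionedCache:
    """Toy set of cache lines where a fixed percentage is reserved so that
    only high-priority virtual processors may evict from it."""

    def __init__(self, num_lines, high_priority_pct):
        self.lines = [None] * num_lines
        cutoff = int(num_lines * high_priority_pct / 100)
        # Lines below the cutoff are marked as evictable only by
        # high-priority virtual processors.
        self.high_priority_only = set(range(cutoff))

    def can_evict(self, line_index, requester_is_high_priority):
        if line_index in self.high_priority_only:
            return requester_is_high_priority
        return True

cache = PriorityPartitionedCache(num_lines=8, high_priority_pct=50)
print(cache.can_evict(1, requester_is_high_priority=False))  # False: reserved line
print(cache.can_evict(6, requester_is_high_priority=False))  # True: unreserved line
```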
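
Applications 20130346967 and 20130346972 score how well a requested partition would fit within the host's NUMA nodes. A minimal sketch of such a fitness calculation is shown below, assuming single-node placement options and taking the lower of the CPU and memory fit as a node's score; the scoring formula is an illustrative stand-in, not the patented one.

```python
from dataclasses import dataclass

@dataclass
class NumaNode:
    node_id: int
    free_cpus: int
    free_memory_gb: int

def placement_fitness(spec_cpus, spec_memory_gb, nodes):
    """Score each single-node placement option and return the best
    (node_id, score); a score of 1.0 means the partition fits entirely
    within that node."""
    options = []
    for node in nodes:
        cpu_fit = min(node.free_cpus / spec_cpus, 1.0)
        mem_fit = min(node.free_memory_gb / spec_memory_gb, 1.0)
        options.append((node.node_id, min(cpu_fit, mem_fit)))
    return max(options, key=lambda opt: opt[1])

nodes = [NumaNode(0, free_cpus=2, free_memory_gb=8),
         NumaNode(1, free_cpus=8, free_memory_gb=64)]
print(placement_fitness(spec_cpus=4, spec_memory_gb=32, nodes=nodes))  # (1, 1.0)
```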