Patent application number | Description | Published (MM-DD-YYYY) |
20080271029 | Thread Scheduling with Weak Preemption Policy - Thread scheduling with a weak preemption policy is provided. The scheduler receives requests from newly ready work and adds a “preempt value” to the current work's priority so that the current work is somewhat favored for preemption purposes. The preempt value can be adjusted to make it more, or less, difficult for newly ready work to preempt the current work. This “less strict” preemption policy allows the current work to complete rather than interrupting it and resuming it at a later time, thus saving system overhead. Newly ready work that has a better priority than the current work is queued in a favorable position: it is executed after the current work completes but before other work queued at the same priority as the current work (a hypothetical sketch of the preemption check follows this table). | 10-30-2008 |
20080294881 | METHOD AND APPARATUS FOR INSTRUCTION COMPLETION STALL IDENTIFICATION IN AN INFORMATION HANDLING SYSTEM - An information handling system includes a processor that executes multiple instructions or instruction threads within a software application program. The information handling system includes operating system software that manages processor system hardware and software in a multi-tasking environment. In one embodiment, the operating system manages instruction completion stall analysis software to determine the cause or causes of instruction stalls. In another embodiment, the stall analysis software cooperates with the operating system software to store instruction completion stall event data on a per instruction basis while the application program executes. The operating system software may cooperate with the stall analysis software to store instruction completion stall data in memory for later manipulation by system users or other software. | 11-27-2008 |
20090024800 | METHOD AND SYSTEM FOR USING UPPER CACHE HISTORY INFORMATION TO IMPROVE LOWER CACHE DATA REPLACEMENT - A system for managing data in a plurality of storage locations. When a least recently used (LRU) algorithm selects data to be moved from a cache to a storage location, an aging table is searched for an entry associated with the data. If the associated entry is found in the aging table, an indicator is enabled on the data. When the indicator is determined to be enabled, the data is kept in the cache even though the LRU algorithm selected it for movement to the storage location (a hypothetical sketch of this eviction check follows this table). | 01-22-2009 |
20090049278 | EFFICIENT MEMORY UPDATE PROCESS FOR ON-THE-FLY INSTRUCTION TRANSLATION FOR WELL BEHAVED APPLICATIONS EXECUTING ON A WEAKLY-ORDERED PROCESSOR - A multiprocessor data processing system (MDPS) with a weakly-ordered architecture provides processing logic that substantially eliminates the need to issue a sync instruction after every store instruction of a well-behaved application. Instructions of a well-behaved application are translated and executed by a weakly-ordered processor. The processing logic includes a lock address tracking utility (LATU), which provides an algorithm and a table of lock addresses; each lock address is stored in the table when the lock is acquired by the weakly-ordered processor. When a store instruction is encountered in the instruction stream, the LATU compares the target address of the store instruction against the table of lock addresses. If the target address matches one of the lock addresses, indicating that the store instruction is the corresponding unlock (lock release) instruction, a sync instruction is issued ahead of the store operation. The sync causes all values updated by the intermediate store operations to be flushed out to the point of coherency and made visible to all processors (a hypothetical sketch of this address check follows this table). | 02-19-2009 |
20090106762 | Scheduling Threads In A Multiprocessor Computer - Methods, systems, and computer program products are provided for scheduling threads in a multiprocessor computer. Embodiments include selecting a thread in a ready queue to be dispatched to a processor and determining whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, embodiments typically include selecting a processor, setting a current processor priority register of the selected processor to least favored, and dispatching the thread from the ready queue to the selected processor. In some embodiments, setting the current processor priority register of the selected processor to least favored is carried out by storing, in the current processor priority register, a value associated with the highest interrupt priority (a hypothetical sketch of this dispatch path follows this table). | 04-23-2009 |
20090119474 | PARTITION REDISPATCHING USING PAGE TRACKING - Illustrative embodiments provide a computer implemented method and data processing system for redispatching a partition by tracking a set of memory pages belonging to the dispatched partition. In one illustrative embodiment, when a page address miss occurs in a page addressing buffer, the method finds the effective-to-real page address mapping for the miss to produce a real page address and page size combination, and saves that combination as an entry in a set of entries in an array. When the dispatched partition becomes undispatched, the method creates a preserved array from the array. When the undispatched partition is later redispatched, the method analyzes each entry of the preserved array for a compressed page and, on finding one, invokes a partition management firmware function to decompress the compressed page before the partition is redispatched. | 05-07-2009 |
20090138911 | VIDEO BROADCASTING SYSTEM - A method, medium and implementing processing system are provided in which premium programming content is included in a standard program broadcasting system. The added content is stored at a user site for subsequent viewing at the user's convenience. The premium programming is received and stored without interfering with the receipt of standard broadcast signals; in one example, it is transmitted, incrementally received, and stored on a user's system even while standard programming is received and viewed by the user. When all of the broadcast increments of a premium program have been received and stored in the user's system, a signal is provided to indicate to the user that the premium program is available for selective viewing. | 05-28-2009 |
20090182893 | CACHE COHERENCE IN A VIRTUAL MACHINE MANAGED SYSTEM - A method, a system, and computer readable program code for managing cache coherence in a virtual machine managed system are provided. In response to a processor issuing a message to be broadcast, a determination is made as to whether the processor is part of a virtual domain. In response to a determination that the processor is part of the virtual domain, the message and a first bit mask are sent from a source node to a destination node. In response to receiving the message and the first bit mask, either a primary link or a secondary link is selected for sending the message and the first bit mask, forming a selected link. The message and the first bit mask are then sent to the destination node over the selected link. | 07-16-2009 |
20100017551 | BUS ACCESS MODERATION SYSTEM - A method, programmed medium and system are provided in which system bus traffic is moderated using real-time data. The operating system (OS) is enabled to obtain information from the firmware (FW) to determine whether a resource threshold has been reached. This is accomplished by generating an interrupt to flag the OS when the bus request retry rate has reached a predetermined number. The system firmware plays an integral role in this mechanism; “firmware” is used here as a general term that can also include a hypervisor. The firmware reports the bus request retry rate to the operating system by way of, for example, a firmware-generated interrupt. The OS may run a kernel daemon or service to intercept the interrupt. In the simplest case, the daemon or service determines whether the threshold has been met based on the feedback from the firmware; if so, it issues a system call that moderates traffic through an operating system tunable. In one example, the number of simultaneous multithreading (SMT) threads per core is reduced using a system call, which effectively throttles back the number of logical threads per core and alleviates the bus request saturation (a hypothetical sketch of this daemon logic follows this table). | 01-21-2010 |
20100180089 | MANAGING THERMAL CONDITION OF A MEMORY - A method, system, and computer usable program product for managing the thermal condition of a memory are provided in the illustrative embodiments. A condition is identified in which a threshold value of a thermal condition of the memory has been exceeded or is likely to be exceeded. A portion of a first workload is identified as a cause of exceeding the threshold. A second portion of a second workload is identified, the second portion not causing the threshold to be exceeded when executed. A set of operations corresponding to the first portion is interleaved with a second set of operations corresponding to the second portion. The interleaved first and second portions of the first and second workloads are executed, causing the thermal condition of the memory to remain below the threshold. The second portion may use a second memory, a second area of the memory, or a combination thereof when executing. | 07-15-2010 |
20110010709 | Optimizing System Performance Using Spare Cores in a Virtualized Environment - A mechanism for optimizing system performance using spare processing cores in a virtualized environment. When it is detected that a workload partition needs to run on a virtual processor in the virtualized system, the state of the virtual processor is changed to a wait state. A first node comprising memory that is local to the workload partition is determined. A determination is also made as to whether a non-spare processor core in the first node is available to run the workload partition. If no non-spare processor core is available, a free non-spare processor core in a second node is located, and the state of that core is changed to an inactive state. The state of a spare processor core in the first node is changed to an active state, and the workload partition is dispatched to the spare processor core in the first node for execution. | 01-13-2011 |
20110022803 | Two Partition Accelerator and Application of Tiered Flash to Cache Hierarchy in Partition Acceleration - An approach is provided to identify a disabled processing core and an active processing core from a set of processing cores included in a processing node. Each of the processing cores is assigned a cache memory. The approach extends a memory map of the cache memory assigned to the active processing core to include the cache memory assigned to the disabled processing core. A first amount of data that is used by a first process is stored by the active processing core to the cache memory assigned to the active processing core. A second amount of data is stored by the active processing core to the cache memory assigned to the disabled processing core using the extended memory map. | 01-27-2011 |
20110107031 | Extended Cache Capacity - A method, programmed medium and system are provided for enabling a core's cache capacity to be increased by using the caches of disabled or non-enabled cores on the same chip. The caches of disabled or non-enabled cores on a chip are made accessible for storing cache lines on behalf of the cores that have been enabled, thereby extending the cache capacity of the enabled cores. | 05-05-2011 |
20120042131 | Flexible use of extended cache using a partition cache footprint - An approach is provided for identifying cache extension sizes that correspond to different partitions running on a computer system. The approach extends a first hardware cache, associated with a first processing core included in the processor's silicon substrate, with a first memory allocation from a system memory area that is external to the silicon substrate; the first memory allocation corresponds to the cache extension size of one of the partitions running on the computer system. The approach further extends a second hardware cache, associated with a second processing core also included in the processor's silicon substrate, with a second memory allocation from the system memory area; the second memory allocation corresponds to the cache extension size of a different partition that is being executed by the second processing core. | 02-16-2012 |
20120215982 | Partial Line Cache Write Injector for Direct Memory Access Write - A cache within a computer system receives a partial write request and identifies a cache hit of a cache line. The cache line corresponds to the partial write request and includes existing data. In turn, the cache receives partial write data and merges it with the existing data in the cache line. In one embodiment, the existing data is “modified” or “dirty.” In another embodiment, the existing data is “shared”; in this embodiment, the cache changes the state of the cache line to indicate the storing of the partial write data into the cache line (a hypothetical sketch of this merge follows this table). | 08-23-2012 |
20120260257 | SCHEDULING THREADS IN MULTIPROCESSOR COMPUTER - A computer program product for scheduling threads in a multiprocessor computer comprises computer program instructions configured to select a thread in a ready queue to be dispatched to a processor and determine whether an interrupt mask flag is set in a thread control block associated with the thread. If the interrupt mask flag is set in the thread control block associated with the thread, the computer program instructions are configured to select a processor, set a current processor priority register of the selected processor to least favored, and dispatch the thread from the ready queue to the selected processor. | 10-11-2012 |
20140156979 | Performance in Predicting Branches - A method for processing instructions. The instructions are processed by a processor unit while using a first table in a plurality of tables to predict a set of instructions needed by the processor unit after processing of a conditional instruction. An identification is formed that the rate of success in correctly predicting the set of instructions when using the first table is less than a threshold number. A sequence of the instructions being processed by the processor unit is searched for an instruction that matches a marker in a set of markers that identify when to use the plurality of tables. An identification is formed that the instruction matches the marker. A second table from the plurality of tables, referenced by the marker, is identified and used in place of the first table (a hypothetical sketch of the switch condition follows this table). | 06-05-2014 |
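The preemption check described in application 20080271029 above can be pictured with a short sketch. This is a hypothetical illustration, not code from the filing; the names (`work_t`, `preempt_value`, `should_preempt`) and the convention that a larger number means a more favored priority are assumptions made for clarity.

```c
/* Hypothetical sketch of the "weak preemption" check in application
 * 20080271029. Larger priority number = more favored (an assumption). */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int priority;
    const char *name;
} work_t;

/* Tunable "preempt value": raising it makes preemption harder. */
static int preempt_value = 2;

/* Newly ready work preempts only if it beats the current work's priority
 * after the preempt value has been added in the current work's favor. */
static bool should_preempt(const work_t *current, const work_t *newly_ready)
{
    return newly_ready->priority > current->priority + preempt_value;
}

int main(void)
{
    work_t current     = { .priority = 5, .name = "current work" };
    work_t newly_ready = { .priority = 6, .name = "newly ready work" };

    if (should_preempt(&current, &newly_ready))
        printf("%s preempts %s\n", newly_ready.name, current.name);
    else
        printf("%s keeps running; %s is queued ahead of other work of the "
               "same priority as the current work\n",
               current.name, newly_ready.name);
    return 0;
}
```

With these numbers the newly ready work has a better priority than the current work but not enough to clear the preempt value, so it waits in a favorable queue position, mirroring the behavior described in the abstract.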
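For application 20090024800 above, the eviction check might look like the following sketch. It is hypothetical; the aging-table layout, the `keep` flag, and the function names are invented for illustration.

```c
/* Hypothetical sketch of the LRU-eviction check in application 20090024800:
 * before a victim line is moved to the lower storage location, an aging table
 * of upper-cache history is consulted; a hit enables a "keep" indicator and
 * the line stays in the cache. */
#include <stdbool.h>
#include <stddef.h>

#define AGING_TABLE_SIZE 64

typedef struct {
    unsigned long tag;
    bool keep;       /* indicator enabled when the aging table has an entry */
} cache_line_t;

static unsigned long aging_table[AGING_TABLE_SIZE];
static size_t aging_entries;

static bool aging_table_contains(unsigned long tag)
{
    for (size_t i = 0; i < aging_entries; i++)
        if (aging_table[i] == tag)
            return true;
    return false;
}

/* Called when the LRU algorithm selects `victim` for eviction; returns true
 * only if the line may actually be moved to the storage location. */
static bool may_evict(cache_line_t *victim)
{
    if (aging_table_contains(victim->tag))
        victim->keep = true;     /* enable the indicator on the data */
    return !victim->keep;        /* keep-marked data stays in the cache */
}

int main(void)
{
    aging_table[aging_entries++] = 0x1234;          /* history from upper cache */
    cache_line_t hot  = { .tag = 0x1234, .keep = false };
    cache_line_t cold = { .tag = 0x9999, .keep = false };
    return may_evict(&hot) || !may_evict(&cold);    /* expect 0: hot kept, cold evicted */
}
```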
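Application 20090049278 above keys its sync decision off a table of lock addresses. A minimal sketch, assuming invented names (`latu_record_lock`, `guarded_store`) and using GCC's `__sync_synchronize()` as a stand-in for the processor's sync instruction, might look like this:

```c
/* Hypothetical sketch of the lock address tracking utility (LATU) in
 * application 20090049278: lock addresses are recorded when locks are
 * acquired, and a store whose target matches a recorded lock address is
 * treated as the unlock store and preceded by a sync. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_LOCKS 32

static uintptr_t lock_table[MAX_LOCKS];
static size_t lock_count;

/* Record the lock's address at lock-acquire time. */
static void latu_record_lock(uintptr_t addr)
{
    if (lock_count < MAX_LOCKS)
        lock_table[lock_count++] = addr;
}

static bool latu_is_lock_addr(uintptr_t addr)
{
    for (size_t i = 0; i < lock_count; i++)
        if (lock_table[i] == addr)
            return true;
    return false;
}

/* Used in place of a bare store: only the unlock store pays for a sync,
 * instead of issuing a sync after every store of the application. */
static void guarded_store(volatile uint32_t *target, uint32_t value)
{
    if (latu_is_lock_addr((uintptr_t)target))
        __sync_synchronize();   /* stand-in for the weakly-ordered sync */
    *target = value;
}

int main(void)
{
    static volatile uint32_t lock_word, data_word;

    latu_record_lock((uintptr_t)&lock_word);   /* lock acquired */
    guarded_store(&data_word, 42);             /* ordinary store: no sync */
    guarded_store(&lock_word, 0);              /* unlock store: sync issued first */
    return 0;
}
```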
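The dispatch step of application 20090106762 above (setting the selected processor's priority register to least favored when the thread's interrupt mask flag is set) could be sketched as below. The structure names and the `LEAST_FAVORED` value are assumptions, not taken from the filing.

```c
/* Hypothetical sketch of the dispatch step in application 20090106762: when
 * the thread control block has the interrupt mask flag set, the selected
 * processor's current processor priority register (CPPR) is written with a
 * value associated with the highest interrupt priority ("least favored")
 * before the thread is dispatched. */
#include <stdbool.h>
#include <stdio.h>

#define LEAST_FAVORED 0xFFu   /* assumed encoding of the highest interrupt priority */

typedef struct {
    bool interrupt_mask_flag;   /* set when the thread must not be interrupted */
    int  thread_id;
} thread_control_block_t;

typedef struct {
    unsigned cppr;              /* modeled current processor priority register */
    int      id;
} processor_t;

static void dispatch_thread(thread_control_block_t *tcb, processor_t *cpu)
{
    if (tcb->interrupt_mask_flag)
        cpu->cppr = LEAST_FAVORED;   /* external interrupts now bypass this CPU */

    printf("thread %d dispatched to processor %d (CPPR=0x%02X)\n",
           tcb->thread_id, cpu->id, cpu->cppr);
}

int main(void)
{
    thread_control_block_t tcb = { .interrupt_mask_flag = true, .thread_id = 7 };
    processor_t cpu = { .cppr = 0x05, .id = 2 };

    dispatch_thread(&tcb, &cpu);    /* CPPR raised to LEAST_FAVORED first */
    return 0;
}
```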
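The daemon-side behavior in application 20100017551 above can be pictured with a short sketch. The threshold value, the tunable's interface, and all names here are assumptions made for illustration; in the abstract the throttling happens through an operating system tunable reached by a system call.

```c
/* Hypothetical sketch of the bus moderation logic in application 20100017551:
 * a firmware-generated interrupt reports the bus request retry rate; when the
 * rate crosses a threshold, a tunable is used to reduce the number of SMT
 * threads per core, throttling logical threads and easing bus saturation. */
#include <stdio.h>

#define RETRY_RATE_THRESHOLD 1000L   /* assumed: retries per sampling interval */

static int smt_threads_per_core = 8;

/* Stand-in for the OS tunable / system call that throttles SMT. */
static void set_smt_threads_per_core(int n)
{
    smt_threads_per_core = n;
    printf("SMT threads per core now %d\n", n);
}

/* Invoked by the kernel daemon/service when the firmware interrupt arrives. */
static void on_firmware_retry_interrupt(long retry_rate)
{
    if (retry_rate >= RETRY_RATE_THRESHOLD && smt_threads_per_core > 1)
        set_smt_threads_per_core(smt_threads_per_core / 2);
}

int main(void)
{
    on_firmware_retry_interrupt(250);    /* below threshold: no change */
    on_firmware_retry_interrupt(1500);   /* threshold met: throttle back */
    return 0;
}
```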
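For application 20120215982 above, the merge of partial write data into a hit cache line, including the state change for a shared line, might be sketched as follows. The 128-byte line size, the MESI-like state names, and the function name are assumptions.

```c
/* Hypothetical sketch of the partial-write injection in application
 * 20120215982: on a cache hit for a DMA partial write, the incoming bytes
 * are merged into the line's existing data; if the line was shared, its
 * state is changed to record that the partial write data was stored. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define LINE_SIZE 128

typedef enum { LINE_INVALID, LINE_SHARED, LINE_MODIFIED } line_state_t;

typedef struct {
    uint8_t      data[LINE_SIZE];
    line_state_t state;
} cache_line_t;

/* Merge `len` bytes of partial write data at byte `offset` of a hit line. */
static void inject_partial_write(cache_line_t *line, size_t offset,
                                 const uint8_t *src, size_t len)
{
    if (offset + len > LINE_SIZE)
        return;                              /* out of range: ignored in sketch */

    memcpy(&line->data[offset], src, len);   /* merge with the existing data */

    if (line->state == LINE_SHARED)
        line->state = LINE_MODIFIED;         /* record that the line changed */
}

int main(void)
{
    cache_line_t line = { .state = LINE_SHARED };
    const uint8_t payload[4] = { 1, 2, 3, 4 };

    inject_partial_write(&line, 16, payload, sizeof payload);
    return line.state == LINE_MODIFIED ? 0 : 1;   /* expect 0 */
}
```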
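The table-switch condition in application 20140156979 above could be sketched as below. The threshold, the marker encodings, the number of prediction tables, and every name are assumptions for illustration only.

```c
/* Hypothetical sketch of the predictor-table switch in application
 * 20140156979: when the running success rate of the first table drops below
 * a threshold, the instruction stream is scanned for a marker, and the table
 * that the matching marker references is used in place of the first table. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define THRESHOLD_PCT 90u

/* Markers map specific instruction encodings to the table they reference. */
typedef struct {
    uint32_t instruction;
    size_t   table_index;
} marker_t;

static const marker_t markers[] = {
    { 0x60000000u, 1 },   /* assumed marker encodings */
    { 0x60000001u, 2 },
};

static size_t active_table;   /* index of the prediction table currently in use */

static void maybe_switch_table(unsigned hits, unsigned total,
                               const uint32_t *stream, size_t n)
{
    if (total == 0 || hits * 100u >= total * THRESHOLD_PCT)
        return;                               /* first table is predicting well */

    for (size_t i = 0; i < n; i++)            /* search the instruction stream */
        for (size_t m = 0; m < sizeof markers / sizeof markers[0]; m++)
            if (stream[i] == markers[m].instruction) {
                active_table = markers[m].table_index;   /* switch tables */
                return;
            }
}

int main(void)
{
    const uint32_t stream[] = { 0x7C0802A6u, 0x60000001u, 0x4E800020u };

    maybe_switch_table(70, 100, stream, 3);        /* 70% < 90%: look for a marker */
    printf("active table: %zu\n", active_table);   /* prints 2 */
    return 0;
}
```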