Patent application number | Description | Published |
20090063443 | System and Method for Dynamically Supporting Indirect Routing Within a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for dynamically routing data through the data processing system. Data that is to be transmitted to a destination processor is received at a first processor. The data that is received includes address information. A lookup is performed in routing table data structures based on the address information to identify candidate paths through which the data is routed to the destination processor. A determination is made as to whether any of the candidate paths are not able to be used to route the data to the destination processor based on a setting of at least one identifier. A path is selected from the identified candidate paths for routing of the data based on a setting of the at least one identifier. Then, the data is transmitted from the first processor along the selected path toward the destination processor. | 03-05-2009 |
20090063444 | System and Method for Providing Multiple Redundant Direct Routes Between Supernodes of a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for selecting, from a plurality of routes through the data processing system, a direct route for transmitting data. Data that includes address information and that is to be transmitted to a destination processor is received at a first processor. Using routing table data structures, direct route entries are identified that correspond to direct routes for transmitting data. A priority table data structure is accessed that comprises a priority entry for each entry in the routing table data structures; each priority entry specifies the priority of its corresponding routing table entry. Based on the specified priorities, a direct route entry corresponding to a direct route is selected from the routing table data structures. Then the data is transmitted from the first processor to the destination processor using a path corresponding to the selected direct route entry. | 03-05-2009 |
20090063445 | System and Method for Handling Indirect Routing of Information Between Supernodes of a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for selecting, from a plurality of routes through the data processing system, an indirect route for transmitting data. Data that includes address information and that is to be transmitted to a destination processor is received at a first processor. Using routing table data structures, indirect route entries are identified that correspond to indirect routes for transmitting data. A priority table data structure is accessed that comprises a priority entry for each entry in the routing table data structures; each priority entry specifies the priority of its corresponding routing table entry. Based on the specified priorities, an indirect route entry corresponding to an indirect route is selected from the routing table data structures. Then the data is transmitted from the first processor to the destination processor using a path corresponding to the selected indirect route entry. | 03-05-2009 |
20090063728 | System and Method for Direct/Indirect Transmission of Information Using a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for transmitting data in a data network. A first processor of the data network receives data to be transmitted to a second processor within the data network. A determination is made as to whether the data has previously been routed through an indirect communication link from a source processor, the indirect communication link being a communication link that does not directly couple the source processor to a final destination processor which is to receive the data. A communication link is selected over which to transmit the data from the first processor to the second processor based on the results of that determination. Finally, the data is transmitted from the first processor to the second processor using the selected communication link. | 03-05-2009 |
20090063811 | System for Data Processing Using a Multi-Tiered Full-Graph Interconnect Architecture - A system is provided for implementing a multi-tiered full-graph interconnect architecture. In order to implement a multi-tiered full-graph interconnect architecture, a plurality of processors are coupled to one another to create a plurality of processor books. The plurality of processor books are coupled together to create a plurality of supernodes. Then, the plurality of supernodes are coupled together to create the multi-tiered full-graph interconnect architecture. Data is then transmitted from one processor to another within the multi-tiered full-graph interconnect architecture based on an addressing scheme that specifies at least a supernode and a processor book associated with a target processor to which the data is to be transmitted. | 03-05-2009 |
20090063814 | System and Method for Routing Information Through a Data Processing System Implementing a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for routing information through the data processing system. Data that is to be transmitted to a destination processor is received at a source processor within a set of processors, where the data includes address information. A first determination is performed as to whether the destination processor is within a same processor book as the source processor based on the address information. A second determination is performed as to whether the destination processor is within a same supernode as the source processor based on the address information if the destination processor is not within the same processor book. A routing path is identified for the data based on results of the first determination, the second determination, and one or more routing table data structures. The data is then transmitted from the source processor along the identified routing path toward the destination processor. | 03-05-2009 |
20090063815 | System and Method for Providing Full Hardware Support of Collective Operations in a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for performing collective operations. In hardware of a parent processor in a first processor book, a determination is made of the number of other processors, in a same or different processor book of the data processing system, that are needed to execute the collective operation, thereby establishing a plurality of processors comprising the parent processor and the other processors. In hardware of the parent processor, the plurality of processors are logically arranged as a plurality of nodes in a hierarchical structure. The collective operation is transmitted to the plurality of processors based on the hierarchical structure. In hardware of the parent processor, results are received from the execution of the collective operation from the other processors, a final result of the collective operation is generated based on the received results, and the final result is output. | 03-05-2009 |
20090063816 | System and Method for Performing Collective Operations Using Software Setup and Partial Software Execution at Leaf Nodes in a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for performing collective operations. In software executing on a parent processor in a first processor book, a determination is made of the number of other processors, in a same or different processor book of the data processing system, that are needed to execute the collective operation, thereby establishing a plurality of processors comprising the parent processor and the other processors. In software executing on the parent processor, the plurality of processors are logically arranged as a plurality of nodes in a hierarchical structure. The collective operation is transmitted to the plurality of processors based on the hierarchical structure. In hardware of the parent processor, results are received from the execution of the collective operation from the other processors, a final result of the collective operation is generated based on the received results, and the final result is output. | 03-05-2009 |
20090063817 | System and Method for Packet Coalescing in Virtual Channels of a Data Processing System in a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for packet coalescing in virtual channels of a data processing system. A first processor bundles original data to be transmitted to a destination processor, the original data provided by a first source processor. The first processor transmits the bundle of data to a second processor along a path to the destination processor. The second processor determines if the second processor has additional data destined for the same destination processor, the additional data being provided by a second source processor that is different from the first source processor. Responsive to the second processor having additional data, the second processor unbundles the original data, adds the additional data to the original data, and rebundles the data along with the additional data. Then the second processor transmits the rebundled data to at least one other processor along the path to the destination processor. | 03-05-2009 |
20090063826 | Quad-Aware Locking Primitive - A method and computer system for efficiently handling high-contention locking in a multiprocessor computer system. At least some of the processors in the system are organized into a hierarchy, and process an interruptible lock in response to the hierarchy. The method utilizes two alternative methods of acquiring the lock, including a conditional lock acquisition primitive and an unconditional lock acquisition primitive, and an unconditional lock release primitive for releasing the lock from a particular processor. To prevent races between processors requesting a lock acquisition and a processor releasing the lock, a release flag is utilized. Furthermore, in order to ensure that a processor utilizing the unconditional lock acquisition primitive is granted the lock, a handoff flag is utilized. | 03-05-2009 |
20090063880 | System and Method for Providing a High-Speed Message Passing Interface for Barrier Operations in a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for performing a Message Passing Interface (MPI) job. A first processor chip receives a set of arrival signals from a set of processor chips executing tasks of the MPI job in the data processing system. The arrival signals identify when a processor chip executes a synchronization operation for synchronizing the tasks for the MPI job. Responsive to receiving the set of arrival signals from the set of processor chips, the first processor chip identifies a fastest processor chip of the set of processor chips whose arrival signal arrived first. An operation of the fastest processor chip is modified based on the identification of the fastest processor chip. The set of processor chips comprises processor chips that are in one of a same processor book or a different processor book of the data processing system. | 03-05-2009 |
20090063885 | System and Computer Program Product for Modifying an Operation of One or More Processors Executing Message Passing Interface Tasks - A system and computer program product for modifying an operation of one or more processors executing message passing interface (MPI) tasks are provided. Mechanisms for adjusting the balance of processing workloads of the processors are provided so as to minimize the time spent waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors. | 03-05-2009 |
20090063891 | System and Method for Providing Reliability of Communication Between Supernodes of a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for providing reliability of communication. A first processor determines a current state of links coupled to ports of a first processor of the data processing system. Each port of the first processor comprises a plurality of links to a corresponding port on a second processor of the data processing system. The current state of the links indicates a level of error associated with each link. The first processor determines, for each link, if a level of error associated with the link exceeds a threshold. For each link whose level of error exceeds the threshold, the first processor tags the link with an error identifier in a switch associated with the ports of the first processor. The first processor reduces a level of usage for transmitting data on ports associated with links tagged with the error identifier. | 03-05-2009 |
20090064139 | Method for Data Processing Using a Multi-Tiered Full-Graph Interconnect Architecture - A method is provided for implementing a multi-tiered full-graph interconnect architecture. In order to implement a multi-tiered full-graph interconnect architecture, a plurality of processors are coupled to one another to create a plurality of processor books. The plurality of processor books are coupled together to create a plurality of supernodes. Then, the plurality of supernodes are coupled together to create the multi-tiered full-graph interconnect architecture. Data is then transmitted from one processor to another within the multi-tiered full-graph interconnect architecture based on an addressing scheme that specifies at least a supernode and a processor book associated with a target processor to which the data is to be transmitted. | 03-05-2009 |
20090064140 | System and Method for Providing a Fully Non-Blocking Switch in a Supernode of a Multi-Tiered Full-Graph Interconnect Architecture - A method, computer program product, and system are provided for transmitting data from a first processor of a data processing system to a second processor of the data processing system. In one or more switches, a set of virtual channels is created, the one or more switches comprising, for each processor, a corresponding switch in the one or more switches. The data is transmitted from the first processor to the second processor through a path comprising a subset of processors of a set of processors in the data processing system. In each processor of the subset of processors, the data is stored in a virtual channel of a corresponding switch before transmitting the data to a next processor. The virtual channel of the corresponding switch in which the data is stored corresponds to a position of the processor in the path through which the data is transmitted. | 03-05-2009 |
20090064165 | Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks - A method for providing hardware based dynamic load balancing of message passing interface (MPI) tasks is provided. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize the time spent waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors. | 03-05-2009 |
20090064166 | System and Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks - A system and method for providing hardware based dynamic load balancing of message passing interface (MPI) tasks are provided. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize the time spent waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors. | 03-05-2009 |
20090064167 | System and Method for Performing Setup Operations for Receiving Different Amounts of Data While Processors are Performing Message Passing Interface Tasks - A system and method are provided for performing setup operations for receiving a different amount of data while processors are performing message passing interface (MPI) tasks. Mechanisms for adjusting the balance of processing workloads of the processors are provided so as to minimize the time spent waiting for all of the processors to call a synchronization operation. An MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, setup operations may be performed while processors are performing MPI tasks to prepare for receiving different sized portions of data in a subsequent computation cycle based on the history. | 03-05-2009 |
20090064168 | System and Method for Hardware Based Dynamic Load Balancing of Message Passing Interface Tasks By Modifying Tasks - A system and method are provided for providing hardware based dynamic load balancing of message passing interface (MPI) tasks by modifying tasks. Mechanisms for adjusting the balance of processing workloads of the processors executing tasks of an MPI job are provided so as to minimize the time spent waiting for all of the processors to call a synchronization operation. Each processor has an associated hardware implemented MPI load balancing controller. The MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. Thus, operations may be performed to shift workloads from the slowest processor to one or more of the faster processors. | 03-05-2009 |
20090113164 | Method, System and Program Product for Address Translation Through an Intermediate Address Space - In a data processing system capable of concurrently executing multiple hardware threads of execution, an intermediate address translation unit in a processing unit translates an effective address for a memory access into an intermediate address. A cache memory is accessed utilizing the intermediate address. In response to a miss in cache memory, the intermediate address is translated into a real address by a real address translation unit that performs address translation for multiple hardware threads of execution. The system memory is accessed with the real address. | 04-30-2009 |
20090153897 | Method, System and Program Product for Reserving a Global Address Space - A method of operating a data processing system includes each of multiple tasks within a parallel job executing on multiple nodes of the data processing system issuing a respective system call to request reservation, without allocation of backing storage in physical memory, of a global address space defined by a range of effective addresses as global shared memory accessible to all of the multiple tasks within the parallel job. At least two of the tasks within the parallel job allocate global address spaces including a same effective address. | 06-18-2009 |
20090157996 | Method, System and Program Product for Allocating a Global Shared Memory - A method of operating a data processing system includes each of multiple tasks within a parallel job executing on multiple nodes of the data processing system issuing a system call to request allocation of backing storage in physical memory for global shared memory accessible to all of the multiple tasks within the parallel job, where the global shared memory is in a global address space defined by a range of effective addresses. Each task among the multiple tasks receives an indication that the allocation requested by the system call was successful only if the global address space for that task was previously reserved and backing storage for the global shared memory has not already been allocated. | 06-18-2009 |
20090198762 | Mechanism to Provide Reliability Through Packet Drop Detection - A method and a data processing system for completing checkpoint processing of a distributed job with local tasks communicating with other remote tasks via a host fabric interface (HFI) and assigned HFI window. Each HFI window has a send count and a receive count, which track GSM messages that are sent from and received at the HFI window. When a checkpoint is initiated by a master task, each local task forwards the send count and the receive count to the master task. The master task sums the respective counts and then compares the totals to each other. When the send count total is equal to the receive count total, the tasks are permitted to continue processing. However, when the send count total is not equal to the receive count total, the master task notifies each task of the job to roll back to a previous checkpoint or kill the job execution. | 08-06-2009 |
20090198891 | Issuing Global Shared Memory Operations Via Direct Cache Injection to a Host Fabric Interface - A data processing system enables global shared memory (GSM) operations across multiple nodes with a distributed EA-to-RA mapping of physical memory. Each node has a host fabric interface (HFI), which includes HFI windows that are assigned to at most one locally-executing task of a parallel job. The tasks perform parallel job execution, but map only a portion of the effective addresses (EAs) of the global address space to the local, real memory of the task's respective node. The HFI window tags all outgoing GSM operations (of the local task) with the job ID, and embeds the target node and HFI window IDs of the node at which the EA is memory mapped. The HFI window also enables processing of received GSM operations with valid EAs that are homed to the local real memory of the receiving node, while preventing processing of other received operations without a valid EA-to-RA local mapping. | 08-06-2009 |
20090198918 | Host Fabric Interface (HFI) to Perform Global Shared Memory (GSM) Operations - A data processing system enables global shared memory (GSM) operations across multiple nodes with a distributed EA-to-RA mapping of physical memory. Each node has a host fabric interface (HFI), which includes HFI windows that are assigned to at most one locally-executing task of a parallel job. The tasks perform parallel job execution, but map only a portion of the effective addresses (EAs) of the global address space to the local, real memory of the task's respective node. The HFI window tags all outgoing GSM operations (of the local task) with the job ID, and embeds the target node and HFI window IDs of the node at which the EA is memory mapped. The HFI window also enables processing of received GSM operations with valid EAs that are homed to the local real memory of the receiving node, while preventing processing of other received operations without a valid EA-to-RA local mapping. | 08-06-2009 |
20090198956 | System and Method for Data Processing Using a Low-Cost Two-Tier Full-Graph Interconnect Architecture - A system and method are provided for implementing a two-tier full-graph interconnect architecture. In order to implement a two-tier full-graph interconnect architecture, a plurality of processors are coupled to one another to create a plurality of supernodes. Then, the plurality of supernodes are coupled together to create the two-tier full-graph interconnect architecture. Data is then transmitted from one processor to another within the two-tier full-graph interconnect architecture based on an addressing scheme that specifies at least a supernode and a processor chip identifier associated with a target processor to which the data is to be transmitted. | 08-06-2009 |
20090199046 | Mechanism to Perform Debugging of Global Shared Memory (GSM) Operations - A host fabric interface (HFI) enables debugging of global shared memory (GSM) operations received at a local node from a network fabric. The local node has a memory management unit (MMU), which provides an effective address to real address (EA-to-RA) translation table that is utilized by the HFI to evaluate when EAs of GSM operations/data from a received GSM packet are memory-mapped to RAs of the local memory. The HFI retrieves the EA associated with a GSM operation/data within a received GSM packet. The HFI forwards the EA to the MMU, which determines when the EA is mapped to RAs within the local memory for the local task. The HFI processing logic enables processing of the GSM packet only when the EA of the GSM operation/data within the GSM packet is an EA that has a local RA translation. Non-matching EAs result in an error condition that requires debugging. | 08-06-2009 |
20090199182 | Notification by Task of Completion of GSM Operations at Target Node - A method for providing global notification of completion of a global shared memory (GSM) operation during processing by a target task executing at a target node of a distributed system. The distributed system has at least one other node on which an initiating task that generated the GSM operation is homed. The target task receives the GSM operation from the initiating task, via a host fabric interface (HFI) window assigned to the target task. The task initiates execution of the GSM operation on the target node. The task detects completion of the execution of the GSM operation on the target node, and issues a global notification to at least the initiating task. The global notification indicates the completion of the execution of the GSM operation to one or more tasks of a single job distributed across multiple processing nodes. | 08-06-2009 |
20090199191 | Notification to Task of Completion of GSM Operations by Initiator Node - In a global shared memory (GSM) environment, a method provides local notification of completion of a global shared memory (GSM) operation processed by a first task executing at a local node of the distributed system. The system includes multiple nodes on which different tasks of a single job execute and perform GSM operations that are received from a second task via a host fabric interface (HFI) and an associated HFI window assigned to the first task. The local task initiates execution of a GSM operation on the local node. The task then monitors for and detects a completion of the execution of the GSM operation on the local node. When the task detects completion of the execution of the GSM operation, the task issues an internal notification to inform the locally-executing tasks of the completion of the GSM operation. | 08-06-2009 |
20090199194 | Mechanism to Prevent Illegal Access to Task Address Space by Unauthorized Tasks - A method and data processing system for tracking global shared memory (GSM) operations to and from a local node configured with a host fabric interface (HFI) coupled to a network fabric. During task/job initialization, the system OS assigns HFI window(s) to handle the GSM packet generation and GSM packet receipt and processing for each local task. HFI processing logic automatically tags each GSM packet generated by the HFI window with a global job identifier (ID) of the job to which the local task is affiliated. The job ID is embedded within each GSM packet placed on the network fabric. On receipt of a GSM packet from the network fabric, the HFI logic retrieves the embedded job ID and compares the embedded job ID with the ID within the HFI window(s). GSM packets are forwarded to an HFI window only when the embedded job ID matches the HFI window's job ID. | 08-06-2009 |
20090199195 | Generating and Issuing Global Shared Memory Operations Via a Send FIFO - A method for issuing global shared memory (GSM) operations from an originating task on a first node coupled to a network fabric of a distributed network via a host fabric interface (HFI). The originating task generates a GSM command within an effective address (EA) space. The task then places the GSM command within a send FIFO. The send FIFO is a portion of real memory having real addresses (RA) that are memory mapped to EAs of a globally executing job. The originating task maintains a local EA-to-RA mapping of only a portion of the real address space of the globally executing job. The task enables the HFI to retrieve the GSM command from the send FIFO into an HFI window allocated to the originating task. The HFI window generates a corresponding GSM packet containing GSM operations and/or data, and the HFI window issues the GSM packet to the network fabric. | 08-06-2009 |
20090199200 | Mechanisms to Order Global Shared Memory Operations - A method and data processing system for performing fence operations within a global shared memory (GSM) environment having a local task executing on a processor and providing GSM commands for processing by a host fabric interface (HFI) window that is allocated to the task. The HFI window has one or more registers for use during local fence operations. A first register tracks a first count of task-issued GSM commands, and a second register tracks a second count of GSM operations being processed by the HFI. The processing logic detects a locally-issued fence operation, and responds by performing a series of operations, including: automatically stopping the task from issuing additional GSM commands; monitoring for completion of all the task-issued GSM commands at the HFI; and triggering a resumption of issuance of GSM commands by the task when the completion of all previous task-issued GSM commands is registered by the HFI. | 08-06-2009 |
20090199209 | Mechanism for Guaranteeing Delivery of Multi-Packet GSM Message - A target task ensures complete delivery of a global shared memory (GSM) message from an originating task to the target task. The target task's HFI receives a first of multiple GSM packets generated from a single GSM message sent from the originating task. The HFI logic assigns a sequence number and corresponding tuple to track receipt of the complete GSM message. The sequence number is unique relative to other sequence numbers assigned to GSM messages that have not been completely received from the initiating task. The HFI updates a count value within the tuple, which comprises the sequence number and the count value, for the first GSM packet and for each subsequent GSM packet received for the GSM message. The HFI determines when receipt of the GSM message is complete by comparing the count value with a count total retrieved from the packet header. | 08-06-2009 |
20100115204 | NON-UNIFORM CACHE ARCHITECTURE (NUCA) - In one embodiment, a cache memory includes a cache array including a plurality of entries for caching cache lines of data, where the plurality of entries are distributed between a first region implemented in a first memory technology and a second region implemented in a second memory technology. The cache memory further includes a cache directory of the contents of the cache array and a cache controller that controls operation of the cache memory. | 05-06-2010 |
20100268788 | Remote Asynchronous Data Mover - A distributed data processing system executes multiple tasks within a parallel job, including a first local task on a local node and at least one task executing on a remote node, with a remote memory having real address (RA) locations mapped to one or more of the source effective addresses (EA) and destination EA of a data move operation initiated by a task executing on the local node. On initiation of the data move operation, remote asynchronous data move (RADM) logic identifies that the operation moves data to/from a first EA that is memory mapped to an RA of the remote memory. The local processor/RADM logic initiates a RADM operation that moves a copy of the data directly from/to the first remote memory by completing the RADM operation using the network interface cards (NICs) of the source and destination processing nodes, determined by accessing a data center for the node IDs of remote memory. | 10-21-2010 |
20100268886 | SPECIFYING AN ACCESS HINT FOR PREFETCHING PARTIAL CACHE BLOCK DATA IN A CACHE HIERARCHY - A system and method for specifying an access hint for prefetching only a subsection of cache block data, for more efficient system interconnect usage by the processor core. A processing unit receives a data cache block touch (DCBT) instruction containing an access hint and identifying a specific size portion of data to be prefetched. Both the access hint and a value corresponding to an amount of data to be prefetched are contained in separate subfields of the DCBT instruction. In response to detecting that the code point is set to a specific value, only the specific size of data identified in a sub-field of the DCBT and addressed in the DCBT instruction is prefetched into an entry in the lower level cache. | 10-21-2010 |
20110004875 | Method and System for Performance Isolation in Virtualized Environments - A method, a system, an apparatus, and a computer program product for allocating resources of one or more shared devices to one or more partitions of a virtualization environment within a data processing system. At least one user-defined resource assignment is received for one or more devices associated with the data processing system. One or more registers associated with the one or more partitions are dynamically set to execute the at least one resource assignment, whereby the at least one resource assignment enables a user-defined quantitative measure (number and/or percentage) of devices to operate when the one or more transactions are executed via the partition. The system enables the one or more devices to execute one or more transactions at a bandwidth/capacity that is less than or equal to the user-defined resource assignment and minimizes performance interference among partitions. | 01-06-2011 |
20110022773 | Fine Grained Cache Allocation - A mechanism is provided in a virtual machine monitor for fine grained cache allocation in a shared cache. The mechanism partitions a cache tag into a most significant bit (MSB) portion and a least significant bit (LSB) portion. The MSB portion of the tags is shared among the cache lines in a set. The LSB portion of the tags is private, one per cache line. The mechanism allows software to set the MSB portion of tags in a cache to allocate sets of cache lines. The cache controller determines whether a cache line is locked based on the MSB portion of the tag. | 01-27-2011 |
20110072214 | Read and Write Aware Cache - A mechanism is provided in a cache for providing a read and write aware cache. The mechanism partitions a large cache into a read-often region and a write-often region. The mechanism considers read/write frequency in a non-uniform cache architecture replacement policy. A frequently written cache line is placed in one of the farther banks. A frequently read cache line is placed in one of the closer banks. The size ratio between read-often and write-often regions may be static or dynamic. The boundary between the read-often region and the write-often region may be distinct or fuzzy. | 03-24-2011 |
20110238946 | Data Reorganization through Hardware-Supported Intermediate Addresses - A virtual address scheme for improving performance and efficiency of memory accesses of sparsely-stored data items in a cached memory system is disclosed. In a preferred embodiment of the present invention, a special address translation unit is used to translate sets of non-contiguous addresses in real memory into contiguous blocks of addresses in an “intermediate address space.” This intermediate address space is a fictitious or “virtual” address space, but is distinguishable from the virtual address space visible to application programs, and in user-level memory operations, effective addresses seen/manipulated by application programs are translated into intermediate addresses by an additional address translation unit for memory caching purposes. This scheme allows non-contiguous data items in memory to be assembled into contiguous cache lines for more efficient caching/access (due to the perceived spatial proximity of the data from the perspective of the processor). | 09-29-2011 |
20120195591 | DUAL NETWORK TYPES SOLUTION FOR COMPUTER INTERCONNECTS - A computing system includes: a plurality of tightly coupled processing nodes; a plurality of circuit switched networks using a circuit switching mode, interconnecting the processing nodes, and handling data transfers that meet one or more criteria; and a plurality of electronic packet switched networks, also interconnecting the processing nodes, handling data transfers that do not meet the one or more criteria. The circuit switched networks and the electronic packet switched networks operate simultaneously. | 08-02-2012 |
20120198459 | ASSIST THREAD FOR INJECTING CACHE MEMORY IN A MICROPROCESSOR - A data processing system includes a microprocessor having access to multiple levels of cache memories. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread includes memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory reference instructions. A scheduler, configured to schedule the assist thread in conjunction with the corresponding main thread, executes the assist thread ahead of the main thread by a determinable threshold, such as a number of main processor cycles or a number of code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower level cache memory elements. | 08-02-2012 |
20120266180 | Performing Setup Operations for Receiving Different Amounts of Data While Processors are Performing Message Passing Interface Tasks - A system and method are provided for performing setup operations for receiving a different amount of data while processors are performing message passing interface (MPI) tasks. Mechanisms for adjusting the balance of processing workloads of the processors are provided so as to minimize the time spent waiting for all of the processors to call a synchronization operation. An MPI load balancing controller maintains a history that provides a profile of the tasks with regard to their calls to synchronization operations. From this information, it can be determined which processors should have their processing loads lightened and which processors are able to handle additional processing loads without significantly negatively affecting the overall operation of the parallel execution system. As a result, setup operations may be performed while processors are performing MPI tasks to prepare for receiving different sized portions of data in a subsequent computation cycle based on the history. | 10-18-2012 |
20120311265 | Read and Write Aware Cache - A mechanism is provided in a cache for providing a read and write aware cache. The mechanism partitions a large cache into a read-often region and a write-often region. The mechanism considers read/write frequency in a non-uniform cache architecture replacement policy. A frequently written cache line is placed in one of the farther banks. A frequently read cache line is placed in one of the closer banks. The size ratio between read-often and write-often regions may be static or dynamic. The boundary between the read-often region and the write-often region may be distinct or fuzzy. | 12-06-2012 |
20130041882 | Technology for web site crawling, including action sequences for selecting non-hypertext-link parameters - A web site page has a reference for providing an address for a next page. The web site is crawled by a crawler program, which parses the reference from one of the web pages and sends the reference to an applet running in a browser. The address for the next page is determined by the browser responsive to the reference and is sent to the crawler. | 02-14-2013 |
20140149382 | Technology for Web Site Crawling - A web site page has a reference for providing an address for a next page. The web site is crawled by a crawler program, which parses the reference from one of the web pages and sends the reference to an applet running in a browser. The address for the next page is determined by the browser responsive to the reference and is sent to the crawler. The crawler selects non-hypertext-link parameters from the web page of the web site server by performing a programmed action sequence, including selecting items from lists of the web page in a particular sequence. The crawler sends the selected parameters and a context arising from the particular sequence to the applet running in the browser, for the query to the web server for the next page referenced by the one web page. | 05-29-2014 |
20150066510 | VARIABLE-DEPTH AUDIO PRESENTATION OF TEXTUAL INFORMATION - A respective sequence of tracks of Internet content of common subject matter is queued to each of a plurality of stations, where each of the tracks of Internet content resides on a respective Internet resource in textual form. In response to receiving a sample input, snippets of each of multiple tracks queued to a selected station among the plurality of stations are transmitted for audible presentation as synthesized human speech, where each of the snippets includes only a subset of a corresponding track. Thereafter, one or more complete tracks among the multiple tracks for which snippets were previously transmitted are transmitted for audio presentation as synthesized human speech. | 03-05-2015 |
20150067487 | INTELLIGENT AUTO COMPLETE - A method includes incorporating a local service within a user's device, accessing at least one external service via the user's device, starting a service request for the at least one external service from an input area of the user's device, causing the local service to determine at least one possible completion for the service request based on the input to the at least one external service, and providing the at least one possible completion to the input area of the user's device, wherein the local service consults at least one user context before providing the at least one possible completion to the input area of the user's device. | 03-05-2015 |
20150067491 | INTELLIGENT AUTO COMPLETE - A method includes incorporating a local service within a user's device, accessing at least one external service via the user's device, starting a service request for the at least one external service from an input area of the user's device, causing the local service to determine at least one possible completion for the service request based on the input to the at least one external service, and providing the at least one possible completion to the input area of the user's device, wherein the local service consults at least one user context before providing the at least one possible completion to the input area of the user's device. | 03-05-2015 |
20150067507 | PERSONALIZED AUDIO PRESENTATION OF TEXTUAL INFORMATION - Each of a plurality of stations has a respective sequence of tracks of Internet content of common subject matter and a respective play pointer indicating a location in the sequence of tracks. In response to a first input, the presentation mode of the station is configured in a continuous play mode in which the play pointer is progressed through the sequence of tracks queued to the station regardless of whether or not the station is presently selected for presentation. In response to a second input, the presentation mode is configured in a pause play mode in which the play pointer is progressed through the sequence of tracks queued to the station only while the station is selected for presentation to a user and otherwise pauses progression of the play pointer. The processor transmits tracks of the station and progresses the play pointer in accordance with the configured presentation mode. | 03-05-2015 |
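
The abstracts above compress a number of mechanisms into dense prose, so a few of them are illustrated below with short, self-contained sketches. Each sketch is built only from what the corresponding abstract states; every function name, field width, and numeric choice is an illustrative assumption rather than the patented implementation. Applications 20090063811, 20090063814, and 20090064139 describe an addressing scheme that names at least the supernode and processor book of a target processor, and a routing method that first asks whether the destination is in the same processor book, then whether it is in the same supernode. A minimal sketch of that three-way scope test, assuming a (supernode, book, chip) triple:

```python
# Hedged sketch of the supernode/book/chip addressing described in
# applications 20090063811 and 20090063814. The triple layout and the
# scope names are illustrative assumptions, not the patented format.
from typing import NamedTuple

class MTFGAddress(NamedTuple):
    supernode: int   # top-tier identifier
    book: int        # processor book within the supernode
    chip: int        # processor chip within the book

def routing_scope(source: MTFGAddress, dest: MTFGAddress) -> str:
    """Classify a destination the way the abstracts describe: same
    processor book first, then same supernode, otherwise top tier."""
    if source.supernode == dest.supernode:
        if source.book == dest.book:
            return "intra-book"      # on-book link
        return "intra-supernode"     # book-to-book link inside the supernode
    return "inter-supernode"         # top-tier (possibly indirect) route

if __name__ == "__main__":
    src = MTFGAddress(supernode=3, book=1, chip=4)
    print(routing_scope(src, MTFGAddress(3, 1, 7)))   # intra-book
    print(routing_scope(src, MTFGAddress(3, 5, 0)))   # intra-supernode
    print(routing_scope(src, MTFGAddress(9, 0, 2)))   # inter-supernode
```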
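Applications 20090063444 and 20090063445 both pair each routing-table entry with a priority entry and select the direct or indirect route according to the specified priorities. A minimal sketch of that selection, assuming higher numbers mean higher priority and that some routes can be flagged unusable (loosely corresponding to the identifier settings in 20090063443 and the error tags in 20090063891):

```python
# Hedged sketch of priority-driven route selection per applications
# 20090063444/20090063445. Names and the skip-unusable rule are
# illustrative assumptions.
def select_route(route_entries, priorities, unusable=frozenset()):
    """route_entries: opaque route descriptors. priorities: parallel list,
    one priority per routing-table entry (larger means preferred).
    Routes flagged unusable are skipped."""
    candidates = [(prio, route)
                  for route, prio in zip(route_entries, priorities)
                  if route not in unusable]
    if not candidates:
        raise RuntimeError("no usable route to destination")
    return max(candidates)[1]   # highest-priority usable route

routes = ["direct-A", "direct-B", "indirect-C"]
prios = [2, 3, 1]
assert select_route(routes, prios) == "direct-B"
assert select_route(routes, prios, unusable={"direct-B"}) == "direct-A"
```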
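Several of the MPI entries (20090063885, 20090064165 through 20090064168, and 20120266180) rest on the same idea: a load balancing controller keeps a history of when each processor calls a synchronization operation, and workload is shifted from the slowest processor toward faster ones. A minimal sketch of such a history, assuming a simple running-average policy that the abstracts do not themselves specify:

```python
# Hedged sketch of the synchronization-call history kept by the MPI load
# balancing controller of application 20090064165. The averaging policy
# and single donor/recipient choice are illustrative assumptions.
from collections import defaultdict

class LoadBalanceHistory:
    def __init__(self):
        self.arrival_times = defaultdict(list)  # rank -> barrier arrival times

    def record_barrier(self, arrivals: dict) -> None:
        for rank, t in arrivals.items():
            self.arrival_times[rank].append(t)

    def rebalance_hint(self):
        """Return (slowest_rank, fastest_rank): shift work away from the
        rank that habitually arrives last at the synchronization call."""
        avg = {r: sum(ts) / len(ts) for r, ts in self.arrival_times.items()}
        return max(avg, key=avg.get), min(avg, key=avg.get)

hist = LoadBalanceHistory()
hist.record_barrier({0: 1.00, 1: 1.20, 2: 0.90})
hist.record_barrier({0: 1.05, 1: 1.25, 2: 0.95})
print(hist.rebalance_hint())   # (1, 2): lighten rank 1, load rank 2
```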
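Applications 20090113164 and 20110238946 both interpose an intermediate address space: effective addresses are translated to intermediate addresses for cache indexing, and translation to a real address happens only on a cache miss, which lets sparsely stored data appear contiguous to the cache. A minimal sketch of the two-step lookup, with toy dictionaries standing in for the two address translation units:

```python
# Hedged sketch of the EA -> IA -> RA flow in applications 20090113164 and
# 20110238946. The toy translation tables and page granularity are
# illustrative assumptions; only the miss-time RA translation is from the
# abstracts.
ea_to_ia = {0x1000: 0x8000, 0x2000: 0x8040, 0x9000: 0x8080}  # sparse EAs ->
ia_to_ra = {0x8000: 0x74000, 0x8040: 0x21000, 0x8080: 0x5C000}  # contiguous IAs

def load(ea: int, cache: dict) -> str:
    ia = ea_to_ia[ea]              # first translation: EA to intermediate
    if ia in cache:
        return f"hit  at IA {ia:#x}"          # hit: no RA translation needed
    ra = ia_to_ra[ia]              # miss: second translation, IA to real
    cache[ia] = ra                 # fill the line from system memory
    return f"miss at IA {ia:#x}, filled from RA {ra:#x}"

cache = {}
print(load(0x1000, cache))   # miss at IA 0x8000, filled from RA 0x74000
print(load(0x1000, cache))   # hit  at IA 0x8000
```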
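Application 20090198762 detects dropped packets at a checkpoint by having every task report its HFI-window send and receive counts and letting the master task compare the two totals. A minimal sketch of that comparison, with the decision values as assumed names:

```python
# Hedged sketch of the packet-drop check in application 20090198762.
def checkpoint_decision(counts) -> str:
    """counts: iterable of (send_count, receive_count), one pair per task.
    If every sent GSM message was received somewhere, the totals match and
    the tasks may continue; otherwise they must roll back (or the job is
    killed)."""
    total_sent = sum(s for s, _ in counts)
    total_received = sum(r for _, r in counts)
    return "continue" if total_sent == total_received else "rollback"

assert checkpoint_decision([(10, 9), (5, 6)]) == "continue"   # 15 == 15
assert checkpoint_decision([(10, 9), (5, 5)]) == "rollback"   # message lost
```

Note that the counts only need to balance in aggregate: a message sent by one task and received by a different task still contributes one to each total.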
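Application 20090199209 tracks a multi-packet GSM message with a tuple of a sequence number and a count value, declaring the message complete when the count matches a count total carried in the packet header. A minimal sketch, assuming the total is available on every packet of the message:

```python
# Hedged sketch of the (sequence number, count value) tuple from
# application 20090199209. The header layout is an illustrative assumption.
class MessageAssembler:
    def __init__(self):
        self.in_flight = {}   # sequence number -> packets received so far

    def on_packet(self, seq: int, count_total: int) -> bool:
        """Record one arriving packet of message `seq`; return True once
        all count_total packets of the message have been delivered."""
        self.in_flight[seq] = self.in_flight.get(seq, 0) + 1
        if self.in_flight[seq] == count_total:
            del self.in_flight[seq]   # sequence number may now be reused
            return True
        return False

asm = MessageAssembler()
assert asm.on_packet(seq=7, count_total=3) is False
assert asm.on_packet(seq=7, count_total=3) is False
assert asm.on_packet(seq=7, count_total=3) is True   # message fully delivered
```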
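Application 20110022773 partitions each cache tag into a most significant bit (MSB) portion shared by the cache lines of a set and a private least significant bit (LSB) portion, with software setting MSB values to allocate sets of lines and the controller deciding lock status from the MSB portion. A minimal sketch of the split and the lock check, assuming 24-bit tags with an 8-bit MSB portion:

```python
# Hedged sketch of the MSB/LSB tag partition in application 20110022773.
# The 24-bit tag and 8-bit MSB widths are illustrative assumptions.
MSB_BITS = 8

def split_tag(tag: int, tag_bits: int = 24):
    """Partition a tag into its shared MSB portion and private LSB portion."""
    lsb_bits = tag_bits - MSB_BITS
    return tag >> lsb_bits, tag & ((1 << lsb_bits) - 1)

def line_locked(tag: int, locked_msbs: set, tag_bits: int = 24) -> bool:
    """Mirror the abstract's rule: the controller treats a line as locked
    when its tag's MSB portion matches a software-set MSB value."""
    msb, _ = split_tag(tag, tag_bits)
    return msb in locked_msbs

locked = {0xAB}                            # software-allocated MSB value
assert line_locked(0xAB1234, locked)       # MSB portion 0xAB -> locked
assert not line_locked(0xCD1234, locked)   # different MSB -> replaceable
```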