Patent application number | Description | Published |
20080222343 | MULTIPLE ADDRESS SEQUENCE CACHE PRE-FETCHING - A method is provided for pre-fetching data into a cache memory. A first cache-line address of each of a number of data requests from at least one processor is stored. A second cache-line address of a next data request from the processor is compared to the first cache-line addresses. If the second cache-line address is adjacent to one of the first cache-line addresses, data associated with a third cache-line address adjacent to the second cache-line address is pre-fetched into the cache memory, if not already present in the cache memory. | 09-11-2008 |
20080229009 | SYSTEMS AND METHODS FOR PUSHING DATA - A system for pushing data includes a source node that stores a coherent copy of a block of data. The system also includes a push engine configured to determine a next consumer of the block of data. The determination is made in the absence of the push engine detecting a request for the block of data from the next consumer. The push engine causes the source node to push the block of data to a memory associated with the next consumer to reduce the latency of the next consumer accessing the block of data. | 09-18-2008 |
20090031087 | MASK USABLE FOR SNOOP REQUESTS - A system comprises a plurality of cache agents, a computing entity coupled to the cache agents, and a programmable mask accessible to the computing entity. The programmable mask is indicative of, for at least one memory address, those cache agents that can receive a snoop request associated with a memory address. Based on the mask, the computing entity transmits snoop requests, associated with the memory address, to only those cache agents identified by the mask as cache agents that can receive a snoop request associated with the memory address. | 01-29-2009 |
20090037162 | DATACENTER WORKLOAD MIGRATION - A method is provided for evaluating workload migration from a target computer in a datacenter. The method includes tracking the number of power cycles occurring for a plurality of computers located within the datacenter and generating power cycling information as a result of the tracking. The method further includes determining whether to power cycle the target computer based on the power cycling information. | 02-05-2009 |
20090037164 | DATACENTER WORKLOAD EVALUATION - A method is provided for evaluating workload consolidation on a computer located in a datacenter. The method comprises inflating a balloon workload on a first computer that simulates a consolidation workload of a workload originating on the first computer and a workload originating on a second computer. The method further comprises evaluating the quality of service of the first computer's workload during the inflating and transferring the workload originating on either the first or the second computer to the other of the first or second computer if the evaluated quality of service remains above a threshold. | 02-05-2009 |
20090210628 | Computer Cache System With Stratified Replacement - Methods for selecting a line to evict from a data storage system are provided. A computer system implementing a method for selecting a line to evict from a data storage system is also provided. The methods include selecting an uncached class line for eviction prior to selecting a cached class line for eviction. | 08-20-2009 |
20100192158 | Modeling Computer System Throughput - A method of determining an estimated data throughput capacity for a computer system includes the steps of creating a first model of data throughput of a central processing subsystem in the computer system as a function of latency of a memory subsystem of the computer system; creating a second model of the latency in the memory subsystem as a function of bandwidth demand of the memory subsystem; and finding a point of intersection of the first and second models. The point of intersection corresponds to a possible operating point for said computer system. | 07-29-2010 |
20100235562 | SWITCH MODULE BASED NON-VOLATILE MEMORY IN A SERVER - A switch module having shared memory that is allocated to other blade servers. A memory controller partitions and enables access to partitions of the shared memory by requesting blade servers. | 09-16-2010 |
20100250877 | Method and system for moving active virtual partitions between computers - Embodiments of the present invention are directed to enhancing VPAR monitors to allow an active VPAR to be moved from one machine to another, as well as to enhancing virtual-machine monitors to move active VPARs from one machine to another. Because traditional VPAR monitors lack access to many computational resources and to executing-operating-system state, VPAR movement is carried out primarily by specialized routines executing within active VPARs, unlike the movement of guest operating systems between machines carried out by virtual-machine-monitor routines. | 09-30-2010 |
20120221794 | Computer Cache System With Stratified Replacement - Methods for selecting a line to evict from a data storage system are provided. A computer system implementing a method for selecting a line to evict from a data storage system is also provided. The methods include selecting an uncached class line for eviction prior to selecting a cached class line for eviction. | 08-30-2012 |
20120221798 | Computer Cache System With Stratified Replacement - Methods for selecting a line to evict from a data storage system are provided. A computer system implementing a method for selecting a line to evict from a data storage system is also provided. The methods include selecting an uncached class line for eviction prior to selecting a cached class line for eviction. | 08-30-2012 |
20120311267 | EXTERNAL CACHE OPERATION BASED ON CLEAN CASTOUT MESSAGES - A processor transmits clean castout messages indicating that a cache line is not dirty and is no longer being stored by a lowest level cache of the processor. An external cache receives the clean castout messages and manages cache lines based in part on the clean castout messages. | 12-06-2012 |
20130205169 | MULTIPLE PROCESSING ELEMENTS - A first processing element can run within a first operating range. A second processing element can run within a second operating range. A third processing element can be activated if the second processing element fails or can be refrained from being run unless the first or second processing element fails. | 08-08-2013 |
20130232124 | DEDUPLICATING A FILE SYSTEM - A storage node receives a file. The storage node determines whether the file is stored on the storage node by comparing a hash value computed for content of the received file to hash values for content stored on the storage node. The storage node transfers a name and address of the file to a directory node. | 09-05-2013 |
20140089726 | DETERMINING WHETHER A RIGHT TO USE MEMORY MODULES IN A RELIABILITY MODE HAS BEEN ACQUIRED - Examples disclosed herein relate to determining whether a right to use memory modules in a reliability mode has been acquired. Examples include determining whether the right to use a plurality of memory modules in a reliability mode has been acquired, if a performance mode is selected for operation of the plurality of memory modules. | 03-27-2014 |
20140143503 | CACHE AND METHOD FOR CACHE BYPASS FUNCTIONALITY - A cache is provided for operatively coupling a processor with a main memory. The cache includes a cache memory and a cache controller operatively coupled with the cache memory. The cache controller is configured to receive memory requests to be satisfied by the cache memory or the main memory. In addition, the cache controller is configured to process cache activity information to cause at least one of the memory requests to bypass the cache memory. | 05-22-2014 |
20150052293 | HIDDEN CORE TO FETCH DATA - A computing device includes a home node controller to couple a home processor socket to the computing device. The home processor socket includes a home core hidden from the computing device, and the home core fetches data to a home cache of the home processor socket. The computing device includes a source processor socket including a source core to request data, and the home node controller forwards requested data from the home cache to the source core if the requested data is present in the home cache. | 02-19-2015 |
20150074316 | REFLECTIVE MEMORY BRIDGE FOR EXTERNAL COMPUTING NODES - In at least some examples, a computing node includes a processor and a local memory coupled to the processor. The computing node also includes a reflective memory bridge coupled to the processor. The reflective memory bridge maps to an incoming region of the local memory assigned to at least one external computing node and maps to an outgoing region of the local memory assigned to at least one external computing node. | 03-12-2015 |