Patent application number | Description | Published |
20130117513 | MEMORY QUEUE HANDLING TECHNIQUES FOR REDUCING IMPACT OF HIGH LATENCY MEMORY OPERATIONS - Techniques for handling queuing of memory accesses prevent passing excessive requests that implicate a region of memory subject to a high-latency memory operation, such as a memory refresh operation, memory scrubbing or an internal bus calibration event, to a re-order queue of a memory controller. The memory controller includes a queue for storing pending memory access requests, a re-order queue for receiving the requests, and control logic implementing a queue controller that determines if there is a collision between a received request and an ongoing high-latency memory operation. If there is a collision, then transfer of the request to the re-order queue may be rejected outright, or a count of existing queued operations that collide with the high-latency operation may be used to determine if queuing the new request will exceed a threshold number of such operations. | 05-09-2013 |
20130138878 | Method for Scheduling Memory Refresh Operations Including Power States - A method for performing refresh operations on a rank of memory devices is disclosed. After the completion of a memory operation, a determination is made whether or not a refresh backlog count value is less than a predetermined value and the rank of memory devices is being powered down. If the refresh backlog count value is less than the predetermined value and the rank of memory devices is being powered down, an Idle Count threshold value is set to a maximum value such that a refresh operation will be performed after a maximum delay time. If the refresh backlog count value is not less than the predetermined value or the rank of memory devices is not in a powered down state, the Idle Count threshold value is set based on the slope of an Idle Delay Function such that a refresh operation will be performed accordingly. | 05-30-2013 |
20130151777 | Dynamic Inclusive Policy in a Hybrid Cache Hierarchy Using Hit Rate - A mechanism is provided for dynamic cache allocation using a cache hit rate. A first cache hit rate is monitored in a first subset utilizing a first allocation policy of N sets of a lower level cache. A second cache hit rate is also monitored in a second subset utilizing a second allocation policy different from the first allocation policy of the N sets of the lower level cache. A periodic comparison of the first cache hit rate to the second cache hit rate is made to identify a third allocation policy for a third subset of the N-sets of the lower level cache. The third allocation policy for the third subset is then periodically adjusted to at least one of the first allocation policy or the second allocation policy based on the comparison of the first cache hit rate to the second cache hit rate. | 06-13-2013 |
20130151778 | Dynamic Inclusive Policy in a Hybrid Cache Hierarchy Using Bandwidth - A mechanism is provided for dynamic cache allocation using bandwidth. A bandwidth between a higher level cache and a lower level cache is monitored. Responsive to bandwidth usage between the higher level cache and the lower level cache being below a predetermined low bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a first allocation policy. Responsive to bandwidth usage between the higher level cache and the lower level cache being above a predetermined high bandwidth threshold, the higher level cache and the lower level cache are set to operate in accordance with a second allocation policy. | 06-13-2013 |
20130151779 | Weighted History Allocation Predictor Algorithm in a Hybrid Cache - A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted. | 06-13-2013 |
20130151780 | Weighted History Allocation Predictor Algorithm in a Hybrid Cache - A mechanism is provided for weighted history allocation prediction. For each member in a plurality of members in a lower level cache, an associated reference counter is initialized to an initial value based on an operation type that caused data to be allocated to a member location of the member. For each access to the member in the lower level cache, the associated reference counter is incremented. Responsive to a new allocation of data to the lower level cache and responsive to the new allocation of data requiring the victimization of another member in the lower level cache, a member of the lower level cache is identified that has a lowest reference count value in its associated reference counter. The member with the lowest reference count value in its associated reference counter is then evicted. | 06-13-2013 |
20130173858 | Method for Scheduling Memory Refresh Operations Including Power States - A method for performing refresh operations on a rank of memory devices is disclosed. After the completion of a memory operation, a determination is made whether or not a refresh backlog count value is less than a predetermined value and the rank of memory devices is being powered down. If the refresh backlog count value is less than the predetermined value and the rank of memory devices is being powered down, an Idle Count threshold value is set to a maximum value such that a refresh operation will be performed after a maximum delay time. If the refresh backlog count value is not less than the predetermined value or the rank of memory devices is not in a powered down state, the Idle Count threshold value is set based on the slope of an Idle Delay Function such that a refresh operation will be performed accordingly. | 07-04-2013 |
20130212330 | MEMORY REORDER QUEUE BIASING PRECEDING HIGH LATENCY OPERATIONS - A memory system and data processing system for controlling memory refresh operations in dynamic random access memories. The memory controller comprises logic that: tracks a time remaining before a scheduled time for performing a high priority, high latency operation at a first memory rank of the memory system; responsive to the time remaining reaching a pre-established early notification time before the scheduled time for performing the high priority, high latency operation, biases the re-order queue containing memory access operations targeting the plurality of ranks to prioritize scheduling of any first memory access operations that target the first memory rank. The logic further: schedules the first memory access operations to the first memory rank for early completion relative to other memory access operations in the re-order queue that target other memory ranks; and performs the high priority, high latency operation at the first memory rank at the scheduled time. | 08-15-2013 |
20140052936 | MEMORY QUEUE HANDLING TECHNIQUES FOR REDUCING IMPACT OF HIGH-LATENCY MEMORY OPERATIONS - Techniques for handling queuing of memory accesses prevent passing excessive requests that implicate a region of memory subject to a high-latency memory operation, such as a memory refresh operation, memory scrubbing or an internal bus calibration event, to a re-order queue of a memory controller. The memory controller includes a queue for storing pending memory access requests, a re-order queue for receiving the requests, and control logic implementing a queue controller that determines if there is a collision between a received request and an ongoing high-latency memory operation. If there is a collision, then transfer of the request to the re-order queue may be rejected outright, or a count of existing queued operations that collide with the high-latency operation may be used to determine if queuing the new request will exceed a threshold number of such operations. | 02-20-2014 |
20140281325 | SYNCHRONIZATION AND ORDER DETECTION IN A MEMORY SYSTEM - Embodiments relate to out-of-synchronization detection and out-of-order detection in a memory system. One aspect is a system that includes a plurality of channels, each providing communication with a memory buffer chip and a plurality of memory devices. A memory control unit is coupled to the plurality of channels. The memory control unit is configured to perform a method that includes receiving frames on two or more of the channels. The memory control unit identifies alignment logic input in each of the received frames and generates a summarized input to alignment logic for each of the channels of the received frames based on the alignment logic input. The memory control unit adjusts a timing alignment based on a skew value per channel. Each of the timing-adjusted summarized inputs is compared. Based on a mismatch between at least two of the timing-adjusted summarized inputs, a miscompare signal is asserted. | 09-18-2014 |
20140304537 | METHOD AND APPARATUS FOR MITIGATING EFFECTS OF MEMORY SCRUB OPERATIONS ON IDLE TIME POWER SAVINGS MODES - An approach is provided for saving power in a memory subsystem that uses a memory access idle timer to enable a low power mode and memory scrub operations within a computing system. The computing system determines that a memory subsystem is switched out of low power operation mode due to a memory scrub operation. In addition, the computing system bypasses the low power operation mode of an idle timer of the memory subsystem such that the memory subsystem is returned to the low power operation mode upon completion of the memory scrub operation. The computing system further sets a scrub flag of the memory subsystem to a high state, and clears the scrub flag to a low state, to track whether the idle timer should be bypassed. | 10-09-2014 |
20140304566 | METHOD AND APPARATUS FOR MITIGATING EFFECTS OF MEMORY SCRUB OPERATIONS ON IDLE TIME POWER SAVINGS MODE - An approach is provided for saving power in a memory subsystem that uses a memory access idle timer to enable a low power mode and memory scrub operations within a computing system. The computing system determines that a memory subsystem is switched out of low power operation mode due to a memory scrub operation. In addition, the computing system bypasses the low power operation mode of an idle timer of the memory subsystem such that the memory subsystem is returned to the low power operation mode upon completion of the memory scrub operation. The computing system further sets a scrub flag of the memory subsystem to a high state, and clears the scrub flag to a low state, to track whether the idle timer should be bypassed. | 10-09-2014 |
20140310477 | MODIFICATION OF PREFETCH DEPTH BASED ON HIGH LATENCY EVENT - A prefetch stream is established in a prefetch unit of a memory controller for a system memory at a lowest level of a volatile memory hierarchy of a data processing system based on a memory access request received from a processor core. The memory controller receives an indication of an upcoming high latency event affecting access to the system memory. In response to the indication, the memory controller temporarily increases a prefetch depth of the prefetch stream with respect to the system memory and issues, to the system memory, a plurality of prefetch requests in accordance with the temporarily increased prefetch depth in advance of the upcoming high latency event. | 10-16-2014 |
20140310478 | MODIFICATION OF PREFETCH DEPTH BASED ON HIGH LATENCY EVENT - A prefetch stream is established in a prefetch unit of a memory controller for a system memory at a lowest level of a volatile memory hierarchy of a data processing system based on a memory access request received from a processor core. The memory controller receives an indication of an upcoming high latency event affecting access to the system memory. In response to the indication, the memory controller temporarily increases a prefetch depth of the prefetch stream with respect to the system memory and issues, to the system memory, a plurality of prefetch requests in accordance with the temporarily increased prefetch depth in advance of the upcoming high latency event. | 10-16-2014 |
20140372704 | LEAST-RECENTLY-USED (LRU) TO FIRST-DIRTY-MEMBER DISTANCE-MAINTAINING CACHE CLEANING SCHEDULER - A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line. | 12-18-2014 |
20140372705 | LEAST-RECENTLY-USED (LRU) TO FIRST-DIRTY-MEMBER DISTANCE-MAINTAINING CACHE CLEANING SCHEDULER - A technique for scheduling cache cleaning operations maintains a clean distance between a set of least-recently-used (LRU) clean lines and the LRU dirty (modified) line for each congruence class in the cache. The technique is generally employed at a victim cache at the highest-order level of the cache memory hierarchy, so that write-backs to system memory are scheduled to avoid having to generate a write-back in response to a cache miss in the next lower-order level of the cache memory hierarchy. The clean distance can be determined by counting all of the LRU clean lines in each congruence class that have a reference count that is less than or equal to the reference count of the LRU dirty line. | 12-18-2014 |
20140380095 | MEMORY UNCORRECTABLE ERROR HANDLING TECHNIQUE FOR REDUCING THE IMPACT OF NOISE - Techniques for handling uncorrectable errors occurring during memory accesses reduce the likelihood of mis-correction of errors due to the presence of noise. When an uncorrectable memory error is detected in response to an access to a memory device, a memory controller managing the interface to the memory halts issuing of access requests to the memory device until a predetermined time period has elapsed. In-flight memory requests are marked for retry, and responses to pending requests are flushed. A calibration command may be issued after the predetermined time period has elapsed. After the predetermined time period has elapsed and any calibration performed, the requests marked for retry are issued to the memory device. | 12-25-2014 |
20140380096 | MEMORY UNCORRECTABLE ERROR HANDLING TECHNIQUE FOR REDUCING THE IMPACT OF NOISE - Techniques for handling uncorrectable errors occurring during memory accesses reduce the likelihood of mis-correction of errors due to the presence of noise. When an uncorrectable memory error is detected in response to an access to a memory device, a memory controller managing the interface to the memory halts issuing of access requests to the memory device until a predetermined time period has elapsed. In-flight memory requests are marked for retry, and responses to pending requests are flushed. A calibration command may be issued after the predetermined time period has elapsed. After the predetermined time period has elapsed and any calibration performed, the requests marked for retry are issued to the memory device. | 12-25-2014 |
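
The sketches below are simplified C interpretations of several of the mechanisms summarized in the table; each is a reading of the corresponding abstract, not the patented implementation, and every identifier, constant and data structure is invented for illustration. The first sketch follows the queue handling of 20130117513 and 20140052936: a request that collides with the memory region of an ongoing high-latency operation is passed to the re-order queue only while the number of already-queued colliding requests stays below a threshold.

```c
#include <stdbool.h>
#include <stdint.h>

struct region {
    uint64_t base;
    uint64_t size;
};

struct queue_state {
    struct region busy;        /* region under refresh, scrub or calibration */
    bool high_latency_active;  /* such an operation is currently in flight */
    unsigned colliding_queued; /* colliding requests already in the re-order queue */
    unsigned collision_limit;  /* threshold on queued colliding requests */
};

static bool collides(const struct region *r, uint64_t addr)
{
    return addr >= r->base && addr < r->base + r->size;
}

/* Returns true if the request may be transferred to the re-order queue. */
bool admit_to_reorder_queue(struct queue_state *q, uint64_t req_addr)
{
    if (!q->high_latency_active || !collides(&q->busy, req_addr))
        return true;                   /* no collision: always admit */
    if (q->colliding_queued >= q->collision_limit)
        return false;                  /* would exceed the threshold: hold back */
    q->colliding_queued++;             /* account for the newly queued collision */
    return true;
}
```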
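The refresh scheduling of 20130138878 and 20130173858 selects an Idle Count threshold after each memory operation completes. The abstract does not spell out the Idle Delay Function, so the linear form below is purely an assumption; only the branch structure (maximum delay when the backlog is small and the rank is powering down, otherwise a slope-based value) comes from the abstract.

```c
#include <stdbool.h>

#define IDLE_COUNT_MAX 1024u   /* assumed maximum delay before forcing a refresh */

/* Hypothetical idle-delay function: the threshold shrinks as the backlog grows. */
static unsigned idle_delay_threshold(unsigned backlog, unsigned slope)
{
    unsigned drop = backlog * slope;
    return drop >= IDLE_COUNT_MAX ? 0 : IDLE_COUNT_MAX - drop;
}

/* Called after each memory operation completes. */
unsigned select_idle_count_threshold(unsigned refresh_backlog,
                                     unsigned backlog_limit,
                                     bool rank_powered_down,
                                     unsigned slope)
{
    if (refresh_backlog < backlog_limit && rank_powered_down)
        return IDLE_COUNT_MAX;     /* defer the refresh as long as possible */
    return idle_delay_threshold(refresh_backlog, slope);
}
```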
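For the hit-rate-driven policy of 20130151777, two sampled subsets of the lower level cache run different allocation policies and a third, follower subset periodically adopts whichever policy is winning. The policy names and the stats layout below are placeholders; only the periodic comparison is taken from the abstract.

```c
enum alloc_policy { POLICY_FIRST, POLICY_SECOND };

struct subset_stats {
    unsigned long hits;
    unsigned long accesses;
};

static double hit_rate(const struct subset_stats *s)
{
    return s->accesses ? (double)s->hits / (double)s->accesses : 0.0;
}

/* Called once per evaluation interval to re-point the follower (third) subset
 * at whichever sampled allocation policy currently has the better hit rate. */
enum alloc_policy choose_follower_policy(const struct subset_stats *first,
                                         const struct subset_stats *second)
{
    return hit_rate(first) >= hit_rate(second) ? POLICY_FIRST : POLICY_SECOND;
}
```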
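The bandwidth-driven variant, 20130151778, switches the allocation policy when usage crosses a low or high threshold and otherwise leaves it alone, which amounts to a hysteresis band. The threshold values below are invented for the example.

```c
#include <stdint.h>

enum alloc_policy { POLICY_FIRST, POLICY_SECOND };

#define LOW_BW_THRESHOLD  (2ull * 1024 * 1024 * 1024)   /* 2 GB/s, assumed */
#define HIGH_BW_THRESHOLD (8ull * 1024 * 1024 * 1024)   /* 8 GB/s, assumed */

/* Re-evaluated periodically from measured traffic between the cache levels. */
enum alloc_policy update_policy(enum alloc_policy current,
                                uint64_t bytes_per_second)
{
    if (bytes_per_second < LOW_BW_THRESHOLD)
        return POLICY_FIRST;
    if (bytes_per_second > HIGH_BW_THRESHOLD)
        return POLICY_SECOND;
    return current;    /* between the thresholds: keep the existing policy */
}
```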
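The weighted history allocation predictor of 20130151779 and 20130151780 seeds a per-member reference counter according to the operation type that allocated the member, increments it on every access, and evicts the member with the lowest count. The seed weights below are made-up examples.

```c
#include <stddef.h>

enum op_type { OP_DEMAND_LOAD, OP_STORE, OP_PREFETCH };

struct member {
    unsigned ref_count;
    int valid;
};

/* Seed the counter according to the allocating operation type (assumed weights). */
void allocate_member(struct member *m, enum op_type type)
{
    static const unsigned initial_weight[] = { 3, 2, 1 };
    m->ref_count = initial_weight[type];
    m->valid = 1;
}

/* Every access to the member strengthens its history. */
void touch_member(struct member *m)
{
    m->ref_count++;
}

/* Pick the victim: the member with the lowest reference count. Victimization
 * is only needed when the set is full, so all members are assumed valid. */
size_t select_victim(const struct member *set, size_t ways)
{
    size_t victim = 0;
    for (size_t i = 1; i < ways; i++)
        if (set[i].valid && set[i].ref_count < set[victim].ref_count)
            victim = i;
    return victim;
}
```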
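For the re-order queue biasing of 20130212330, once the time remaining before a scheduled high-latency operation on a rank falls inside an early-notification window, requests targeting that rank are prioritized so they drain before the operation starts. The window size, priority encoding and queue layout are assumptions.

```c
#include <stdint.h>

struct mem_request {
    unsigned rank;
    unsigned priority;   /* higher value = scheduled sooner */
};

#define EARLY_NOTIFY_CYCLES 512u   /* assumed early-notification window */
#define BOOST_PRIORITY      255u

void bias_reorder_queue(struct mem_request *queue, unsigned depth,
                        unsigned target_rank, uint32_t cycles_until_operation)
{
    if (cycles_until_operation > EARLY_NOTIFY_CYCLES)
        return;                                  /* not yet inside the window */
    for (unsigned i = 0; i < depth; i++)
        if (queue[i].rank == target_rank)
            queue[i].priority = BOOST_PRIORITY;  /* drain this rank first */
}
```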
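The scrub handling of 20140304537 and 20140304566 uses a scrub flag to record that the subsystem left low-power mode only because of a scrub, so the idle timer is bypassed and low-power mode resumes as soon as the scrub completes. The state layout below is assumed.

```c
#include <stdbool.h>

struct mem_subsystem {
    bool low_power;
    bool scrub_flag;     /* high while a scrub has forced us out of low power */
};

void on_scrub_start(struct mem_subsystem *m)
{
    if (m->low_power) {
        m->low_power = false;   /* the scrub pulls the subsystem out of low power */
        m->scrub_flag = true;   /* remember why, so the idle timer is bypassed */
    }
}

void on_scrub_complete(struct mem_subsystem *m)
{
    if (m->scrub_flag) {
        m->low_power = true;    /* return immediately, skipping the idle timer */
        m->scrub_flag = false;
    }
}
```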
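The prefetch depth modification of 20140310477 and 20140310478 temporarily deepens a prefetch stream when the controller is warned of an upcoming high-latency event, then restores it afterwards. The issue_prefetch() hook and the stream fields are hypothetical.

```c
struct prefetch_stream {
    unsigned depth;          /* lines fetched ahead of the demand stream */
    unsigned normal_depth;
    unsigned boosted_depth;
    unsigned long next_addr;
    unsigned line_bytes;
};

/* Hypothetical hook; a real controller would enqueue a read to system memory. */
static void issue_prefetch(unsigned long addr)
{
    (void)addr;
}

/* Invoked when the controller is notified of an upcoming high latency event. */
void on_high_latency_notice(struct prefetch_stream *s)
{
    s->depth = s->boosted_depth;              /* temporarily deepen the stream */
    for (unsigned i = 0; i < s->depth; i++)
        issue_prefetch(s->next_addr + (unsigned long)i * s->line_bytes);
}

/* Invoked once the high latency event has completed. */
void on_high_latency_complete(struct prefetch_stream *s)
{
    s->depth = s->normal_depth;               /* restore the original depth */
}
```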
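The clean-distance computation of 20140372704 and 20140372705 counts, per congruence class, the clean lines whose reference count is less than or equal to that of the least-recently-used dirty line; a cleaning scheduler would trigger write-backs whenever this distance falls below its target. The line structure is assumed.

```c
#include <stddef.h>

struct cache_line {
    int valid;
    int dirty;
    unsigned ref_count;   /* lower value = closer to LRU */
};

/* Returns the clean distance, or the number of ways if no dirty line exists. */
size_t clean_distance(const struct cache_line *set, size_t ways)
{
    unsigned lru_dirty_ref = 0;
    int have_dirty = 0;
    size_t distance = 0;

    /* Find the reference count of the LRU dirty line. */
    for (size_t i = 0; i < ways; i++)
        if (set[i].valid && set[i].dirty &&
            (!have_dirty || set[i].ref_count < lru_dirty_ref)) {
            lru_dirty_ref = set[i].ref_count;
            have_dirty = 1;
        }
    if (!have_dirty)
        return ways;        /* everything is clean */

    /* Count clean lines at least as close to LRU as that dirty line. */
    for (size_t i = 0; i < ways; i++)
        if (set[i].valid && !set[i].dirty && set[i].ref_count <= lru_dirty_ref)
            distance++;
    return distance;
}
```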
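Finally, the uncorrectable-error handling of 20140380095 and 20140380096 halts issue to the device, marks in-flight requests for retry, flushes pending responses, waits out a back-off period, optionally recalibrates, and then reissues the marked requests. All helper functions below are stand-in stubs for controller actions.

```c
#include <stdbool.h>
#include <stddef.h>

#define QUEUE_DEPTH 32

struct request { bool in_flight; bool retry; };

struct mem_channel {
    struct request reqs[QUEUE_DEPTH];
    bool halted;
};

/* Hypothetical stand-ins for controller actions. */
static void flush_pending_responses(struct mem_channel *c) { (void)c; }
static void wait_cycles(unsigned cycles) { (void)cycles; }
static void issue_calibration(struct mem_channel *c) { (void)c; }
static void reissue(struct mem_channel *c, struct request *r)
{
    (void)c;
    r->retry = false;
    r->in_flight = true;
}

void handle_uncorrectable_error(struct mem_channel *c, unsigned backoff_cycles,
                                bool recalibrate)
{
    c->halted = true;                       /* stop issuing new accesses */
    for (size_t i = 0; i < QUEUE_DEPTH; i++)
        if (c->reqs[i].in_flight) {
            c->reqs[i].in_flight = false;
            c->reqs[i].retry = true;        /* mark in-flight requests for retry */
        }
    flush_pending_responses(c);             /* discard possibly noise-corrupted responses */
    wait_cycles(backoff_cycles);            /* let the noise event pass */
    if (recalibrate)
        issue_calibration(c);               /* optional bus recalibration */
    for (size_t i = 0; i < QUEUE_DEPTH; i++)
        if (c->reqs[i].retry)
            reissue(c, &c->reqs[i]);        /* replay the marked requests */
    c->halted = false;
}
```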