Class / Patent application number | Description | Number of patent applications / Date published |
711169000 | Memory access pipelining | 27 |
20080209152 | Control of metastability in the pipelined data processing apparatus - A method and integrated circuit are disclosed for accessing data in a pipelined data processing apparatus whose operating conditions are such that metastable signals may occur at least on the boundaries of the pipeline stages. The method comprises the steps of: receiving an indication that an instruction is to be processed by the pipelined data processing apparatus; generating a memory access prediction signal having a value indicative of whether or not the instruction is likely to cause a read access from a memory; generating a predicted memory access control value from the memory access prediction signal, the predicted memory access control value being generated to achieve and maintain a valid logic level for at least a sampling period, thereby preventing any metastability in the predicted memory access control value; and, in the event that the predicted memory access control value indicates that a read access is likely to occur, causing a read access to be initiated from the memory. Through this approach, the predicted memory access control value is generated in a way that prevents any metastability being present in it. Hence, the signals used in a read access are prevented from being metastable, which removes the possibility that metastable signals are used directly in the arbitration of data accesses. The metastable signals may also be prevented from being propagated from stage to stage. | 08-28-2008 |
20080229044 | Pipelined buffer interconnect fabric - A method and system are described for transferring data from one or more data sources to one or more data sinks using a pipelined buffer interconnect fabric. The method comprises: receiving a request for a data transfer from the data source to the data sink; assigning a first buffer and a first bus to the data source; locking the first buffer and the first bus so that only the data source can transfer data to the first buffer via the first bus; receiving a signal from the data source indicating completion of the data transfer to the first buffer; unlocking the first buffer and the first bus; assigning the first buffer and the first bus to the data sink; assigning a second buffer and a second bus to the data source; locking the second buffer and the second bus so that only the data source can transfer data to the second buffer via the second bus; and enabling the data sink to read data from the first buffer via the first bus while the data source writes to the second buffer via the second bus, thereby pipelining the data transfer from the data source to the data sink. The transfer of data from data source to data sink is controlled by programming the pipelined buffer interconnect via one or more of software, control registers, and control signals. | 09-18-2008 |
20080282050 | Methods and arrangements for controlling memory operations - In one embodiment, a method for operating a memory management system concurrently with a processing pipeline is disclosed. The memory management system can fetch data and load registers ahead of need, reducing pipeline stalls by providing improved data retrieval compared to traditional systems. The method can include storing a memory request limit parameter and receiving a memory retrieval request from a multi-processor system to retrieve the contents of a memory location and place those contents in a predetermined location. The method can also include determining the number of pending memory retrieval requests, and then processing a new retrieval request if the number of pending memory retrieval requests is at or below the memory request limit parameter. | 11-13-2008 |
20080282051 | Methods and arrangements for controlling results of memory retrieval requests - In one embodiment, a method for operating a memory management system concurrently with a processing pipeline is disclosed. The method can include requesting data from a memory retrieval system utilizing a retrieval request, where the memory retrieval system can provide memory contents to a processing pipeline. In addition, an instruction, possibly a conditional instruction, can be processed by the processing pipeline that makes the retrieval request obsolete. The instruction can be associated with an identifier such that data retrieved from the memory can be associated with an instruction. The method can determine whether the retrieval request is obsolete based on the results of processing the instruction, and the loading of the retrieved data into the pipeline can be forgone in response to determining that the retrieval request is obsolete. | 11-13-2008 |
20090063803 | CIRCUIT FOR INITIALIZING A PIPE LATCH UNIT IN A SEMICONDUCTOR MEMORY DEVICE - A semiconductor memory device includes a pipe latch unit having a plurality of pipe latches for latching data. An input controller controls input timing of data transmitted from data line to the pipe latch unit. An output controller controls output timing of data latched in the pipe latch unit. An initialization controller controls the input controller and the output controller to thereby initialize the pipe latch unit in response to a read/write flag signal which is activated during a write operation. | 03-05-2009 |
20100049936 | MEMORY ACCESS CONTROL DEVICE AND CONTROL METHOD THEREOF - A memory access control apparatus includes a plurality of memory access request generating modules and an arbitrator. When one of the memory access request generating modules receives a second memory access event while a memory device is performing a first memory access operation according to a first memory access request in response to a first memory access event, the memory access request generating module outputs a second memory access request corresponding to the second memory access event to the memory device after a delay time. The arbitrator arbitrates the memory access requests respectively outputted from the memory access request generating modules. | 02-25-2010 |
20100115221 | SYSTEM AND METHOD FOR PROCESSOR WITH PREDICTIVE MEMORY RETRIEVAL ASSIST - A system and method are described for a memory management processor which, using a table of reference addresses embedded in the object code, can open the appropriate memory pages to expedite the retrieval of information from memory referenced by instructions in the execution pipeline. A suitable compiler parses the source code and collects references to branch addresses, calls to other routines, or data references, and creates reference tables listing the addresses for these references at the beginning of each routine. These tables are received by the memory management processor as the instructions of the routine are beginning to be loaded into the execution pipeline, so that the memory management processor can begin opening memory pages where the referenced information is stored. Opening the memory pages where the referenced information is located before the instructions reach the instruction processor helps lessen memory latency delays which can greatly impede processing performance. | 05-06-2010 |
20110055510 | EFFICIENTLY IMPLEMENTING A PLURALITY OF FINITE STATE MACHINES - A method and program product for processing data by a pipeline of a single hardware-implemented virtual multiple instance finite state machine (VMI FSM). An input token of multiple input tokens is selected to enter a pipeline of the VMI FSM. The input token includes a reference to an FSM instance. In one embodiment, the reference is an InfiniBand QP number. After being received at the pipeline, the current state and context of the FSM instance are fetched from an array based on the reference and inserted into a field of the input token. A new state of the FSM instance is determined and an output token is generated. The new state and the output token are based on the current state, the context, an input value, and the availability of a resource. The new state of the FSM instance is written to the array. | 03-03-2011 |
20110185146 | MULTIPLE ACCESS TYPE MEMORY AND METHOD OF OPERATION - A method for accessing a memory includes receiving a first address wherein the first address corresponds to a demand fetch, receiving a second address wherein the second address corresponds to a speculative prefetch, providing first data from the memory in response to the demand fetch in which the first data is accessed asynchronous to a system clock, and providing second data from the memory in response to the speculative prefetch in which the second data is accessed synchronous to the system clock. The memory may include a plurality of pipeline stages in which providing the first data in response to the demand fetch is performed such that each pipeline stage is self-timed independent of the system clock and providing the second data in response to the speculative prefetch is performed such that each pipeline stage is timed based on the system clock to be synchronous with the system clock. | 07-28-2011 |
20110238941 | SCHEDULING MEMORY ACCESS REQUESTS USING PREDICTED MEMORY TIMING AND STATE INFORMATION - A data processing system employs an improved arbitration process in selecting pending memory access requests received from one or more processor cores for servicing by the memory. The arbitration process uses memory timing and state information pertaining both to memory access requests already submitted to the memory for servicing and to pending memory access requests that have not yet been selected for servicing. The memory timing and state information may be predicted; that is, the component of the data processing system that implements the improved scheduling algorithm may not be able to determine the exact point in time at which a memory controller initiates a memory access for a corresponding request, and thus the component maintains information that estimates or otherwise predicts the particular state of the memory at any given time. | 09-29-2011 |
20120089801 | SYSTEM FOR CONTROLLING MEMORY ACCESSES TO MEMORY MODULES HAVING A MEMORY HUB ARCHITECTURE - A computer system includes a memory hub controller coupled to a plurality of memory modules. The memory hub controller includes a memory request queue that couples memory requests and corresponding request identifiers to the memory modules. Each of the memory modules accesses memory devices based on the memory requests and generates response status signals from the request identifier when the corresponding memory request is serviced. These response status signals are coupled from the memory modules to the memory hub controller along with, or separate from, any read data. The memory hub controller uses the response status signals to control the coupling of memory requests to the memory modules and thereby control the number of outstanding memory requests in each of the memory modules. | 04-12-2012 |
20120246435 | STORAGE SYSTEM EXPORTING INTERNAL STORAGE RULES - A data storage method includes, in a memory controller that accepts memory access commands from a host for execution in one or more memory units, holding a definition of a policy to be applied by the memory controller in the execution of the memory access commands in the memory units. The policy is reported from the memory controller to the host so as to cause the host to format memory access commands based on the reported policy. | 09-27-2012 |
20120278583 | ADAPTIVELY TIME-MULTIPLEXING MEMORY REFERENCES FROM MULTIPLE PROCESSOR CORES - The disclosed embodiments relate to a system for processing memory references received from multiple processor cores. During operation, the system monitors the memory references to determine whether memory references from different processor cores are interfering with each other as the memory references are processed by a memory system. If memory references from different processor cores are interfering with each other, the system time-multiplexes the processing of memory references between processor cores, so that a block of consecutive memory references from a given processor core is processed by the memory system before memory references from other processor cores are processed. | 11-01-2012 |
20130124814 | INCREASING MEMORY CAPACITY IN POWER-CONSTRAINED SYSTEMS - A system and computer program product for increasing the capacity of a memory are provided in the illustrative embodiments. Using an application executing on a processor, the memory, which includes a set of ranks, is configured to form a cold tier and a hot tier, the cold tier including a first subset of ranks from the set of ranks in the memory, and the hot tier including a second subset of ranks. A determination is made whether a page to which a memory access request is directed is located in the cold tier of the memory. In response to the page being located in the cold tier, the memory access request is throttled by processing it with a delay. | 05-16-2013 |
20130283002 | Process Variability Tolerant Programmable Memory Controller for a Pipelined Memory System - In an embodiment of the invention, an integrated circuit includes a pipelined memory array and a memory control circuit. The pipelined memory array contains a plurality of memory banks. Based partially on the read access time information of a memory bank, the memory control circuit is configured to select the number of clock cycles used during read latency. | 10-24-2013 |
20140006744 | STORAGE CONTROL DEVICE, COMPUTER-READABLE RECORDING MEDIUM, AND METHOD THEREOF | 01-02-2014 |
20140089623 | COLUMN ADDRESS DECODING - Methods, memories and systems to access a memory may include generating an address during a first time period, decoding the address during the first time period, and selecting one or more cells of a buffer coupled to a memory array based, at least in part, on the decoded address, during a second time period. | 03-27-2014 |
20140095824 | SEMICONDUCTOR DEVICE AND OPERATING METHOD THEREOF - A semiconductor device comprises: a read queue configured to store one or more read requests to a semiconductor memory device; a write queue configured to store one or more write requests to the semiconductor memory device; and a dispatch block configured to determine a scheduling order of the one or more read requests and the one or more write requests and switch to the read queue or to the write queue if a request exists in a Row Hit state in the read queue or in the write queue. | 04-03-2014 |
20140095825 | SEMICONDUCTOR DEVICE AND OPERATING METHOD THEREOF - An operating method of a semiconductor device may comprise: determining whether a read request is pending; setting a delay interval in accordance with a density of requests if no read request is pending; and processing a write request after the delay interval. | 04-03-2014 |
20140258667 | Apparatus and Method for Memory Operation Bonding - A processor is configured to evaluate memory operation bonding criteria to selectively identify memory operation bonding opportunities within a memory access plan. Memory operations are combined in response to the memory operation bonding opportunities to form a revised memory access plan with accelerated memory access. | 09-11-2014 |
20150019832 | SEMICONDUCTOR DEVICE AND METHOD OF OPERATING THE SAME - A semiconductor device includes a pipeline latch unit that includes a plurality of write pipelines and is suitable for latching data, and a control unit suitable for controlling at least one of the write pipelines based on an idle signal. | 01-15-2015 |
20150100749 | SHORT LOOP ATOMIC ACCESS - Methods and systems may provide for receiving a request to perform an atomic operation and adding the atomic operation to an execution pipeline of an arithmetic logic unit (ALU) for one or more pending atomic operations if the one or more pending atomic operations are associated with a memory location identified in the request. Additionally, at least a portion of the execution pipeline may bypass the memory location. In one example, adding the atomic operation to the execution pipeline includes populating a linked list with a modification associated with the atomic operation, wherein the linked list is dedicated to the memory location. | 04-09-2015 |
20150331617 | PIPELINE PLANNING FOR LOW LATENCY STORAGE SYSTEM - At least one embodiment involves a method of operating a storage front-end manager system to perform pipeline planning for a low latency storage system. The method can include: receiving a write request including payload data; storing the payload data of the write request in a staging area of the storage front-end manager system; determining a transformation pipeline based at least partly on an attribute of the write request; queuing the transformation pipeline for execution on the payload data to generate data fragments for storage; and transmitting the data fragments to a plurality of multiple-data-storage-devices enclosures after the transformation pipeline is executed. | 11-19-2015 |
20160041775 | TRACKING PIPELINED ACTIVITY DURING OFF-CORE MEMORY ACCESSES TO EVALUATE THE IMPACT OF PROCESSOR CORE FREQUENCY CHANGES - A processor system tracks, in at least one counter, a number of cycles in which at least one execution unit of at least one processor core is idle and at least one thread of the at least one processor core is waiting on at least one off-core memory access during run-time of the at least one processor core during an interval comprising multiple cycles. The processor system evaluates an expected performance impact of a frequency change within the at least one processor core based on the current run-time conditions for executing at least one operation tracked in the at least one counter during the interval. | 02-11-2016 |
20160070662 | Reordering a Sequence of Memory Accesses to Improve Pipelined Performance - Techniques are disclosed relating to reordering sequences of memory accesses. In one embodiment, a method includes storing a specified sequence of memory accesses that corresponds to a function to be performed. In this embodiment, the specified sequence of memory accesses has first memory access constraints. In this embodiment, the method further includes reordering the specified sequence of memory accesses to create a reordered sequence of memory accesses that has second, different memory access constraints. In this embodiment, the reordered sequence of memory accesses is usable to access a memory to perform the function. In some embodiments, performance estimates are determined for a plurality of reordered sequences of memory accesses, and one of the reordered sequences is selected based on the performance estimates. In some embodiments, the reordered sequence is used to compile a program usable to perform the function. | 03-10-2016 |
20160140046 | SYSTEM AND METHOD FOR PERFORMING HARDWARE PREFETCH TABLEWALKS HAVING LOWEST TABLEWALK PRIORITY - A hardware prefetch tablewalk system for a microprocessor includes a tablewalk engine configured to perform hardware prefetch tablewalk operations without blocking software-based tablewalk operations. Tablewalk requests include a priority value; the tablewalk engine compares the priorities of requests, and a higher priority request may terminate a current tablewalk operation. Hardware prefetch tablewalk requests have the lowest possible priority, so that they do not bump higher priority tablewalk operations and are themselves bumped by higher priority tablewalk requests. The priority values may be in the form of age values indicative of the relative ages of the operations being performed. The microprocessor may include a hardware prefetch engine that performs boundless hardware prefetch pattern detection, not limited by page boundaries, to provide the hardware prefetch tablewalk requests. | 05-19-2016 |
20160162200 | MEMORY CONTROLLER - A memory controller includes a scheduler that decides a processing order of a plurality of requests provided from an external device with reference to a timing parameter value for each of the requests; and a timing control circuit that adjusts the timing parameter value according to a corresponding address to access a memory device, the corresponding address being used to process a corresponding request of the plurality of requests. | 06-09-2016 |
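The double-buffered transfer described in 20080229044 above (source fills one buffer while the sink drains the other, with explicit lock/unlock handoffs) can be sketched as a classic "ping-pong" scheme. This is a minimal illustrative model under assumed semantics, not the patented implementation; all names (`run_pipeline`, the queue-based handoff) are invented for illustration, with Python queues standing in for the bus-lock mechanism.

```python
import threading
import queue

def run_pipeline(chunks, num_buffers=2):
    """Move chunks from a source thread to a sink thread via alternating buffers."""
    free = queue.Queue()      # buffers currently available to the source
    filled = queue.Queue()    # buffers handed off ("unlocked") to the sink
    for _ in range(num_buffers):
        free.put([])          # each buffer is modeled as a plain Python list
    result = []

    def source():
        for chunk in chunks:
            buf = free.get()  # acquire exclusive ownership of a buffer
            buf.append(chunk) # source writes while the sink drains the other buffer
            filled.put(buf)   # signal completion; hand the buffer to the sink
        filled.put(None)      # sentinel: no more data

    def sink():
        while True:
            buf = filled.get()
            if buf is None:
                break
            result.extend(buf)  # sink reads the just-filled buffer
            buf.clear()
            free.put(buf)       # return the buffer to the source's pool

    t1 = threading.Thread(target=source)
    t2 = threading.Thread(target=sink)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return result
```

With two buffers, the source is never blocked longer than one sink drain, which is the pipelining effect the abstract describes.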
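The pending-request limit of 20080282050 (issue a new retrieval request only while the count of pending requests is at or below a stored limit parameter) can be modeled in a few lines. This is a hedged sketch, not the patented design; the class name `RequestLimiter` and its methods are assumptions for illustration.

```python
from collections import deque

class RequestLimiter:
    """Throttle memory retrieval requests against a stored request limit parameter."""

    def __init__(self, limit):
        self.limit = limit       # stored memory request limit parameter
        self.pending = deque()   # requests issued but not yet completed
        self.waiting = deque()   # requests deferred because the limit was exceeded

    def submit(self, request):
        """Issue the request now if the pending count is at or below the limit."""
        if len(self.pending) <= self.limit:
            self.pending.append(request)
            return True
        self.waiting.append(request)  # too many in flight: defer
        return False

    def complete_one(self):
        """Retire the oldest pending request; promote a deferred one if allowed."""
        done = self.pending.popleft()
        if self.waiting and len(self.pending) <= self.limit:
            self.pending.append(self.waiting.popleft())
        return done
```

A controller using this pattern keeps the memory system from being swamped while still draining deferred requests as completions arrive.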
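The adaptive time-multiplexing idea of 20120278583 (when references from different cores interfere, serve a block of consecutive references from one core before moving on; otherwise interleave) reduces row-buffer thrashing at the cost of per-core latency. The sketch below is an assumed simplification: the interference decision and block size are treated as given inputs rather than measured, and the function name is invented.

```python
from collections import deque

def schedule(per_core_refs, interfering, block_size=4):
    """Return the issue order of memory references from several cores.

    per_core_refs: one list of references per core.
    interfering:   if True, serve block_size consecutive references per core
                   before rotating; if False, interleave one at a time.
    """
    queues = [deque(refs) for refs in per_core_refs]
    order = []
    core = 0
    while any(queues):
        q = queues[core]
        burst = block_size if interfering else 1
        for _ in range(burst):
            if not q:
                break
            order.append(q.popleft())
        core = (core + 1) % len(queues)  # round-robin between cores
    return order
```

With `interfering=True` each core's references reach the memory system as a contiguous run, preserving their row locality; with `interfering=False` the scheduler falls back to fair interleaving.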