Class / Patent application number | Description | Number of patent applications / Date published |
711213000 | Generating prefetch, look-ahead, jump, or predictive address | 19 |
20080301399 | PREFETCHING APPARATUS, PREFETCHING METHOD AND PREFETCHING PROGRAM PRODUCT - Data is prefetched efficiently before it is read by a program. A prefetching apparatus, for prefetching data from a file to a buffer before the data is read by a program, includes: a history recorder, for recording a history of a plurality of data readings issued by the program while it reads data; a prefetching generator, for generating a plurality of prefetchings that correspond to the plurality of data readings recorded in the history; a prefetching process determination unit, for determining, based on the history, the performance order for the plurality of prefetchings; and a prefetching unit, for performing the plurality of prefetchings in that order when the program is executed after the order has been determined. | 12-04-2008 |
20090150646 | MEMORY ARRAY SEARCH ENGINE - Systems and/or methods that facilitate a search of a memory component(s) to locate a desired logical block address (LBA) associated with a memory location in a memory component are presented. Searches to locate a desired LBA(s) in a memory component(s) associated with a processor component are offloaded to and controlled by the memory component(s). A search component searches pages in the memory array to facilitate locating a page of data associated with an LBA stored in the memory component. The search component can retrieve a portion of a page of data in a block in the memory component to facilitate determining whether the page contains an LBA associated with a command based in part on command information. The search component can search pages in the memory component until a desired page is located or a predetermined number of searches is performed without locating the desired page. (A code sketch of such a page search appears after this listing.) | 06-11-2009 |
20090158005 | CLOCK ENCODED PRE-FETCH TO ACCESS MEMORY DATA IN CLUSTERING NETWORK ENVIRONMENT - Systems and/or methods that facilitate reading data from a memory component associated with a network are presented. A pre-fetch generation component generates a pre-fetch request based in part on a received read command. To facilitate a reduction in latency associated with transmitting the read command via an interconnect network component to which the memory component is connected, the pre-fetch request is transmitted directly to the memory component bypassing a portion of the interconnect network component. The memory component specified in the pre-fetch request receives the pre-fetch request and reads the data stored therein, and can store the read data in a buffer and/or transmit the read data to the requester via the interconnect network component, even though the read command has not yet reached the memory component. The read data is verified by comparison with the read command at a convergence point. | 06-18-2009 |
20090198955 | ASYNCHRONOUS MEMORY MOVE ACROSS PHYSICAL NODES (DUAL-SIDED COMMUNICATION FOR MEMORY MOVE) - A distributed data processing system includes: (1) a first node with a processor, a first memory, and asynchronous memory mover logic; and a connection mechanism that connects it to (2) a second node having a second memory. The processor includes processing logic for completing a cross-node asynchronous memory move (AMM) operation, wherein the processor performs a move of data in virtual address space from a first effective address to a second effective address, and the asynchronous memory mover logic completes a physical move of the data from a first memory location in the first memory having a first real address to a second memory location in the second memory having a second real address. The data is transmitted via the connection mechanism connecting the two nodes, independent of the processor. | 08-06-2009 |
20090254733 | Dynamically Controlling a Prefetching Range of a Software Controlled Cache - Dynamically controlling a prefetching range of a software controlled cache is provided. A compiler analyzes source code to identify at least one of a plurality of loops that contain irregular memory references. For each irregular memory reference in the source code, the compiler determines whether the irregular memory reference is a candidate for optimization. Responsive to identifying an irregular memory reference that may be optimized, the compiler determines whether the irregular memory reference is valid for prefetching. If the irregular memory reference is valid for prefetching, a store statement for an address of the irregular memory reference is inserted into the at least one loop. A runtime library call is inserted into a prefetch runtime library to dynamically prefetch the irregular memory references. Data associated with the irregular memory references are dynamically prefetched into the software controlled cache when the runtime library call is invoked. | 10-08-2009 |
20090287903 | Event address register history buffers for supporting profile-guided and dynamic optimizations - A computer processor and a method of using the computer processor take advantage of information in the event address register of the computer processor by saving information from the event address register to an event address register history buffer. Thus, the event address register history buffer includes a cluster of events associated with execution of a computer program. The cluster of events is analyzed and the computer program modified, either statically or dynamically, to eliminate or at least ameliorate the effects of such events in further execution of the computer program. | 11-19-2009 |
20090300320 | PROCESSING SYSTEM WITH LINKED-LIST BASED PREFETCH BUFFER AND METHODS FOR USE THEREWITH - A processing device includes a memory and a processor that generates a plurality of read commands for reading read data from the memory and a plurality of write commands for writing write data to the memory. A prefetch memory interface prefetches prefetch data to a prefetch buffer, retrieves the read data from the prefetch buffer when the read data is included in the prefetch buffer, and retrieves the read data from the memory when the read data is not included in the prefetch buffer, wherein the prefetch buffer is managed via a linked list. (A code sketch of a linked-list-managed prefetch buffer appears after this listing.) | 12-03-2009 |
20100005272 | Virtual memory window with dynamic prefetching support - Reconfigurable Systems-on-Chip (RSoCs) on the market consist of full-fledged processors and large Field-Programmable Gate Arrays (FPGAs). The latter can be used to implement the system glue logic, various peripherals, and application-specific coprocessors. Using FPGAs for application-specific coprocessors has certain speedup potential, but this potential is seldom exploited in practice because of the complexity of interfacing the software application with the coprocessor. In the present application, we present a virtualisation layer consisting of an operating system extension and a hardware component. It lowers the complexity of interfacing and increases portability potential, while it also allows the coprocessor to access the user virtual memory through a virtual memory window. The burden of moving data between processor and coprocessor is shifted from the programmer to the operating system. | 01-07-2010 |
20100180099 | APPARATUS, SYSTEM AND METHOD FOR PREFETCHING DATA IN BUS SYSTEM - A method for prefetching data in a bus system is provided. First, according to an address signal from a master, a prefetching address generator generates a prefetching address signal and transfers it to a first select circuit. When a signal from the master indicates that the address is related to the previous address and the control signal is identical to that of the previous transfer, or when a signal from the master indicates that the address and control signals are unrelated to the previous transfer but match a hit logic, a prefetching controller directs the first select circuit to transfer the prefetching address signal to a slave. The prefetching controller also directs a second select circuit to transfer the prefetched data corresponding to the prefetching address signal from the slave to the master. | 07-15-2010 |
20110078407 | DETERMINING AN END OF VALID LOG IN A LOG OF WRITE RECORDS - Provided are a method, computer program product and system for determining an end of valid log in a log of write records. Records are written to a log in a storage device in a sequential order, wherein the records include a next pointer addressing the next record in the write order and a far ahead pointer addressing a far ahead record in the write order following the record. The far ahead pointer and the next pointer in a plurality of records are used to determine an end of valid log from which to start writing further records. (A code sketch of a scan using both pointers appears after this listing.) | 03-31-2011 |
20110173412 | DATA PROCESSING DEVICE AND MEMORY PROTECTION METHOD OF SAME - A memory protection method includes setting a memory area in at least one address setting register; setting a trap type in a trap type setting register corresponding to the address setting register; generating a trap of the trap type set in the trap type setting register in accordance with an access request to the memory area set in the address setting register; setting a size of an inaccessible area in a memory; allocating, in accordance with a memory allocation request from an application, a memory area to the application as an accessible area and an inaccessible area of the inaccessible area size right after the accessible area; setting the inaccessible area in a first address setting register and a first trap type in a first trap type setting register; and generating a memory image of the application and closing the application when a trap of the first trap type occurs. | 07-14-2011 |
20120011343 | DATA PROCESSING APPARATUS AND METHOD - A data processing apparatus includes a pre-fetch unit configured to divide and store data, a validation setting unit configured to store information regarding whether or not the data stored in the pre-fetch unit are valid, an address generation unit configured to generate an address for reading/storing the data from/in the pre-fetch unit, and a pre-fetch control unit configured to control a storage position of the data in the pre-fetch unit by using the address and information of the address generation unit and the validation setting unit. | 01-12-2012 |
20120017065 | PARALLELIZED CHECK POINTING USING MATs AND THROUGH SILICON VIAs (TSVs) - A system and method that include a memory die, residing on a stacked memory, which is organized into a plurality of mats that include data. The system and method also include an additional memory die, residing on the stacked memory, that is organized into an additional plurality of mats and connected to the memory die by Through Silicon Vias (TSVs), the data to be transmitted along the TSVs. | 01-19-2012 |
20130111185 | CHAINING MOVE SPECIFICATION BLOCKS | 05-02-2013 |
20130117532 | INTERLEAVING ADDRESS MODIFICATION - An apparatus having a plurality of memory blocks and a circuit is disclosed. The circuit may be configured to (i) generate a second address by removing one or more first bits of a first address from one or more first locations defined by a first value, (ii) generate a third address by adding an offset value to the second address and (iii) generate a fourth address by inserting a selected one of a plurality of modifiers into the third address. The selected modifier may be inserted into the third address at the first locations. Each modifier is generally associated with a respective one of a plurality of buffers formed in the memory blocks. The circuit may also be configured to access the respective buffer of the fourth address. (A code sketch of the three-step address modification appears after this listing.) | 05-09-2013 |
20140082324 | Method and Storage Device for Using File System Data to Predict Host Device Operations - A method and storage device for using file system data to predict host device operations are disclosed. In one embodiment, a storage device is disclosed having a first memory storing data and file system metadata, a second memory, and a controller. In response to receiving a command from the host device to read a first address in the first memory, the controller reads data from the first address in the first memory and returns it to the host device. The controller predicts a second address in the first memory to be read by a subsequent read command from the host device, reads the data from the predicted second address, and stores it in the second memory. | 03-20-2014 |
20150089186 | STORE ADDRESS PREDICTION FOR MEMORY DISAMBIGUATION IN A PROCESSING DEVICE - A processing device implementing store address prediction for memory disambiguation is disclosed. The processing device includes a store address predictor to predict an address for store operations that store data to a memory hierarchy. The processing device further includes a store buffer for buffering the store operations prior to completion, the store buffer holding the predicted address for each of the store operations. The processing device further includes a load buffer to buffer a load operation, the load operation referencing the store buffer to determine, based on the predicted addresses, whether to speculatively execute ahead of each store operation and whether to speculatively forward data from one of the store operations. (A code sketch of a simple store address predictor appears after this listing.) | 03-26-2015 |
20160085669 | DESCRIPTOR RING MANAGEMENT - A data processing system utilising a descriptor ring | 03-24-2016 |
20180024932 | TECHNIQUES FOR MEMORY ACCESS PREFETCHING USING WORKLOAD DATA | 01-25-2018 |
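The page search described in application 20090150646 above can be pictured roughly as follows. This is a minimal sketch, not the patented implementation: the page-header layout, the `search_block_for_lba` name, and the fixed search budget are illustrative assumptions; the abstract only requires that the search component examine a portion of each page and stop after a predetermined number of attempts.

```c
/* Minimal sketch of an in-memory-component LBA search: the search engine
 * reads just a small header portion of each page to test for the wanted
 * logical block address, stopping when it finds the page or exhausts a
 * search budget.  Page layout and limits are assumptions for illustration. */
#include <stdio.h>
#include <stdint.h>

#define PAGES_PER_BLOCK 64
#define SEARCH_LIMIT    16      /* give up after this many page reads */

struct page_header {
    uint32_t lba;               /* logical block address stored in the page */
};

static struct page_header block[PAGES_PER_BLOCK];

/* Returns the page index holding 'wanted_lba', or -1 if the budget runs out. */
static int search_block_for_lba(uint32_t wanted_lba, int start_page)
{
    for (int i = 0; i < SEARCH_LIMIT && (start_page + i) < PAGES_PER_BLOCK; i++) {
        /* Read only the small header portion of the page, not its data. */
        if (block[start_page + i].lba == wanted_lba)
            return start_page + i;
    }
    return -1;
}

int main(void)
{
    for (int i = 0; i < PAGES_PER_BLOCK; i++)
        block[i].lba = 1000u + (uint32_t)i;            /* fill with sample LBAs */
    printf("LBA 1010 -> page %d\n", search_block_for_lba(1010, 0));  /* found: 10 */
    printf("LBA 1050 -> page %d\n", search_block_for_lba(1050, 0));  /* -1: budget exhausted */
    return 0;
}
```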
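The linked-list prefetch buffer of application 20090300320 can be sketched as below. The entry layout, the function names, and the reuse of the list tail as the victim entry are assumptions made for illustration; the abstract states only that the prefetch buffer is managed via a linked list.

```c
/* Minimal sketch of a prefetch buffer whose entries are managed via a
 * linked list: most-recently-used entry at the head, victim at the tail. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_SIZE   64
#define NUM_ENTRIES 8

struct pf_entry {
    uint64_t        addr;             /* line-aligned address, 0 = unused */
    uint8_t         data[LINE_SIZE];  /* prefetched data                  */
    struct pf_entry *next;            /* linked-list management           */
};

static struct pf_entry pool[NUM_ENTRIES];
static struct pf_entry *head;         /* list head = most recently used   */

static void pf_init(void)
{
    for (int i = 0; i < NUM_ENTRIES - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[NUM_ENTRIES - 1].next = NULL;
    head = &pool[0];
}

/* Walk the list; on a hit, unlink the entry and move it to the head. */
static struct pf_entry *pf_lookup(uint64_t addr)
{
    struct pf_entry *prev = NULL;
    for (struct pf_entry *e = head; e != NULL; prev = e, e = e->next) {
        if (e->addr == addr) {
            if (prev) {               /* move-to-front */
                prev->next = e->next;
                e->next = head;
                head = e;
            }
            return e;
        }
    }
    return NULL;
}

/* On a miss, the tail entry is reused for the newly prefetched line. */
static void pf_insert(uint64_t addr, const uint8_t *line)
{
    struct pf_entry *prev = NULL, *e = head;
    while (e->next != NULL) { prev = e; e = e->next; }   /* find tail */
    if (prev) { prev->next = NULL; e->next = head; head = e; }
    e->addr = addr;
    memcpy(e->data, line, LINE_SIZE);
}

int main(void)
{
    uint8_t line[LINE_SIZE] = { 0xab };
    pf_init();
    pf_insert(0x1000, line);                          /* prefetch fills the buffer   */
    printf("hit: %d\n", pf_lookup(0x1000) != NULL);   /* read served from the buffer */
    printf("hit: %d\n", pf_lookup(0x2000) != NULL);   /* miss -> read from memory    */
    return 0;
}
```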
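Application 20110078407 uses each record's next pointer and far ahead pointer to locate the end of valid log. A minimal sketch of such a scan, under an assumed record layout and validity marking (the MAGIC field below), might look like this: far-ahead pointers let the scan skip whole runs of records, and the next pointer is used once a far-ahead target is no longer valid.

```c
/* Minimal sketch of finding the end of valid log using both the "next" and
 * "far ahead" pointers carried by each record.  The slot-indexed layout and
 * the MAGIC validity check are assumptions for illustration. */
#include <stdio.h>
#include <stdint.h>

#define LOG_SLOTS 64
#define MAGIC     0xC0FFEEu

struct log_rec {
    uint32_t magic;      /* marks a slot that holds a written record */
    uint32_t next;       /* slot of the next record in write order   */
    uint32_t far_ahead;  /* slot of a record much further ahead      */
};

static struct log_rec logv[LOG_SLOTS];

static int is_valid(uint32_t slot)
{
    return slot < LOG_SLOTS && logv[slot].magic == MAGIC;
}

/* Returns the first slot at which further records may be written. */
static uint32_t find_end_of_valid_log(uint32_t head)
{
    uint32_t cur = head;
    if (!is_valid(cur))
        return head;                         /* log is empty */
    for (;;) {
        if (is_valid(logv[cur].far_ahead))
            cur = logv[cur].far_ahead;       /* big jump forward          */
        else if (is_valid(logv[cur].next))
            cur = logv[cur].next;            /* small step forward        */
        else
            return logv[cur].next;           /* cur is the last valid record */
    }
}

int main(void)
{
    /* Write records 0..9 sequentially; far-ahead pointers skip 4 slots. */
    for (uint32_t i = 0; i < 10; i++)
        logv[i] = (struct log_rec){ MAGIC, i + 1, i + 4 };
    printf("end of valid log at slot %u\n", find_end_of_valid_log(0));
    return 0;
}
```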
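The three-step address modification of application 20130117532 (remove bits at positions given by a first value, add an offset, re-insert a per-buffer modifier at the same positions) can be illustrated with plain bit manipulation. The mask, offset, and modifier values below are arbitrary examples, not values from the patent.

```c
/* Minimal sketch of the three-step interleaving address modification:
 * (i) strip the interleave bits out of the incoming address, (ii) add an
 * offset to the compacted address, (iii) re-insert a per-buffer modifier
 * at the original bit positions. */
#include <stdio.h>
#include <stdint.h>

/* Remove the bits selected by 'mask' and pack the remaining bits together. */
static uint32_t remove_bits(uint32_t addr, uint32_t mask)
{
    uint32_t out = 0;
    int out_pos = 0;
    for (int bit = 0; bit < 32; bit++) {
        if (mask & (1u << bit))
            continue;                         /* dropped interleave bit */
        if (addr & (1u << bit))
            out |= 1u << out_pos;
        out_pos++;
    }
    return out;
}

/* Insert 'modifier' into the bit positions selected by 'mask', shifting
 * the other address bits up to make room. */
static uint32_t insert_bits(uint32_t addr, uint32_t mask, uint32_t modifier)
{
    uint32_t out = 0;
    int in_pos = 0, mod_pos = 0;
    for (int bit = 0; bit < 32; bit++) {
        if (mask & (1u << bit)) {
            if (modifier & (1u << mod_pos))
                out |= 1u << bit;
            mod_pos++;
        } else {
            if (addr & (1u << in_pos))
                out |= 1u << bit;
            in_pos++;
        }
    }
    return out;
}

int main(void)
{
    uint32_t first  = 0x12345678;   /* incoming (first) address            */
    uint32_t mask   = 0x00000300;   /* first value: bits 8-9 are interleave */
    uint32_t offset = 0x40;         /* offset added to the packed address  */
    uint32_t modif  = 0x2;          /* modifier selecting one buffer       */

    uint32_t second = remove_bits(first, mask);           /* step (i)   */
    uint32_t third  = second + offset;                     /* step (ii)  */
    uint32_t fourth = insert_bits(third, mask, modif);     /* step (iii) */

    printf("first=0x%08x second=0x%08x third=0x%08x fourth=0x%08x\n",
           first, second, third, fourth);
    return 0;
}
```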
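For application 20150089186, the abstract does not say how store addresses are predicted, only that predicted addresses populate the store buffer. As one hypothetical possibility, a small PC-indexed last-address-plus-stride table could produce such predictions; the sketch below is that assumption, not the patented mechanism.

```c
/* Minimal sketch of a per-instruction store address predictor.  The table
 * size, PC-indexed tagging and last-address + stride scheme are assumptions
 * made for illustration only. */
#include <stdio.h>
#include <stdint.h>

#define PRED_ENTRIES 256

struct pred_entry {
    uint64_t tag;        /* store instruction PC                 */
    uint64_t last_addr;  /* last address this store wrote        */
    int64_t  stride;     /* difference between the last two      */
    int      valid;
};

static struct pred_entry table[PRED_ENTRIES];

/* Predict the next address a given store instruction will write. */
static uint64_t predict_store_addr(uint64_t pc, int *confident)
{
    struct pred_entry *e = &table[pc % PRED_ENTRIES];
    *confident = e->valid && e->tag == pc;
    return *confident ? (uint64_t)(e->last_addr + e->stride) : 0;
}

/* Train the predictor once the store's real address is known. */
static void train_store_addr(uint64_t pc, uint64_t addr)
{
    struct pred_entry *e = &table[pc % PRED_ENTRIES];
    if (e->valid && e->tag == pc)
        e->stride = (int64_t)(addr - e->last_addr);
    else
        *e = (struct pred_entry){ .tag = pc, .stride = 0, .valid = 1 };
    e->last_addr = addr;
}

int main(void)
{
    int ok;
    /* A store at PC 0x400100 walking an array with an 8-byte stride. */
    train_store_addr(0x400100, 0x7f0000);
    train_store_addr(0x400100, 0x7f0008);
    uint64_t p = predict_store_addr(0x400100, &ok);
    printf("confident=%d predicted=0x%llx\n", ok, (unsigned long long)p);
    return 0;
}
```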