Class / Patent application number | Description | Number of patent applications / Date published |
711204000 | Predicting, look-ahead | 55 |
20080201552 | COMPUTER-READABLE MEDIUM STORING PROGRAM FOR CONTROLLING ARCHIVING OF ELECTRONIC DOCUMENT, DOCUMENT MANAGEMENT SYSTEM, DOCUMENT MANAGEMENT METHOD, AND COMPUTER DATA SIGNAL - There is provided a computer-readable medium storing a program causing a computer to execute a process for controlling archiving of an electronic document, the program causing the computer to function as: a requirement memory that stores a document archive requirement for each rule; and an archive processor that judges, on the basis of the requirement memory, each document archive requirement corresponding to each rule to be applied to an electronic document to be archived, determines an archive mode which satisfies all of the judged document archive requirements, and executes a process to archive the electronic document in an archiving device in the determined archive mode. | 08-21-2008 |
20080263313 | Pretranslating Input/Output Buffers In Environments With Multiple Page Sizes - Pretranslating input/output buffers in environments with multiple page sizes that include determining a pretranslation page size for an input/output buffer under an operating system that supports more than one memory page size, identifying pretranslation page frame numbers for the buffer in dependence upon the pretranslation page size, pretranslating the pretranslation page frame numbers to physical page numbers, and storing the physical page numbers in association with the pretranslation page size. Typical embodiments also include accessing the buffer, including translating a virtual memory address in the buffer to a physical memory address in dependence upon the physical page numbers and the pretranslation page size and accessing the physical memory of the buffer at the physical memory address. | 10-23-2008 |
20080276066 | Virtual memory translation with pre-fetch prediction - A system to facilitate virtual page translation. An embodiment of the system includes a processing device, a front end unit, and address translation logic. The processing device is configured to process data of a current block of data. The front end unit is coupled to the processing device. The front end unit is configured to access the current block of data in an electronic memory device and to send the current block of data to the processor for processing. The address translation logic is coupled to the front end unit and the electronic memory device. The address translation logic is configured to pre-fetch a virtual address translation for a predicted virtual address based on a virtual address of the current block of data. Embodiments of the system increase address translation performance of computer systems including graphic rendering operations. | 11-06-2008 |
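The prediction mechanism described above (20080276066) is, at its core, "translate the current address, then warm the TLB with the translation of a predicted address." A minimal sketch of that idea, assuming 4 KiB pages, a software page table held in a dict, a tiny TLB class, and a simple next-page predictor (all illustrative, not the patented hardware):

```python
# Sketch of next-page translation prefetching, assuming a 4 KiB page size,
# a software page table (dict), and a tiny fully associative TLB.
PAGE_SHIFT = 12  # assumption: 4 KiB pages

class TinyTLB:
    def __init__(self, capacity=16):
        self.capacity = capacity
        self.entries = {}          # virtual page number -> physical page number

    def lookup(self, vpn):
        return self.entries.get(vpn)

    def insert(self, vpn, ppn):
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))  # naive FIFO eviction
        self.entries[vpn] = ppn

def translate_with_prefetch(tlb, page_table, vaddr):
    """Translate vaddr; also prefetch the translation for the predicted
    (next sequential) virtual page so a later access is unlikely to miss."""
    vpn = vaddr >> PAGE_SHIFT
    ppn = tlb.lookup(vpn)
    if ppn is None:                      # TLB miss: walk the page table
        ppn = page_table[vpn]
        tlb.insert(vpn, ppn)
    # Predicted address = start of the next page (illustrative prediction policy).
    next_vpn = vpn + 1
    if tlb.lookup(next_vpn) is None and next_vpn in page_table:
        tlb.insert(next_vpn, page_table[next_vpn])   # prefetched translation
    return (ppn << PAGE_SHIFT) | (vaddr & ((1 << PAGE_SHIFT) - 1))

# usage
page_table = {0x10: 0x200, 0x11: 0x201}
tlb = TinyTLB()
print(hex(translate_with_prefetch(tlb, page_table, 0x10_123)))  # 0x200123
```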
20080294867 | ARITHMETIC PROCESSOR, INFORMATION PROCESSING APPARATUS AND MEMORY ACCESS METHOD IN ARITHMETIC PROCESSOR - In an information processing apparatus of this invention having a cache memory, a TLB and a TSB, a second retrieval unit retrieves a second physical address from an address translation buffer by using a second virtual address corresponding one-to-one to a first virtual address, and a prefetch controller enters a first address translation pair of the first virtual address from an address translation table into a cache memory by using the second physical address which is a result of the retrieval, thereby greatly shortening the processing time of a memory access when a TLB miss occurs. | 11-27-2008 |
20090119476 | DATA MIGRATION - Data is extracted from at least one data source. The data is translated according to a metadata model and is stored in a staging data store. A migration management user interface is provided that includes a mechanism for indicating at least some of the data to be included in a migration event. The migration event is initiated based at least in part on the input received via the user interface. The at least some of the data is migrated from the staging data store to a target data store according to a hierarchy of controls. | 05-07-2009 |
20100106935 | Pretranslating Input/Output Buffers In Environments With Multiple Page Sizes - Pretranslating input/output buffers in environments with multiple page sizes that include determining a pretranslation page size for an input/output buffer under an operating system that supports more than one memory page size, identifying pretranslation page frame numbers for the buffer in dependence upon the pretranslation page size, pretranslating the pretranslation page frame numbers to physical page numbers, and storing the physical page numbers in association with the pretranslation page size. Typical embodiments also include accessing the buffer, including translating a virtual memory address in the buffer to a physical memory address in dependence upon the physical page numbers and the pretranslation page size and accessing the physical memory of the buffer at the physical memory address. | 04-29-2010 |
20100169606 | PROCESSOR AND METHOD FOR USING AN INSTRUCTION HINT TO PREVENT HARDWARE PREFETCH FROM USING CERTAIN MEMORY ACCESSES IN PREFETCH CALCULATIONS - A microprocessor includes a cache memory, a prefetch unit, and detection logic. The prefetch unit may be configured to monitor memory accesses that miss in the cache and to determine whether to prefetch one or more blocks of memory from a system memory based upon previous memory accesses. The prefetch unit may be further configured to use addresses of the memory accesses that miss to calculate each next memory block to prefetch. The detection logic may be configured to provide a notification to the prefetch unit in response to detecting a memory access instruction including a particular hint. In response to receiving the notification, the prefetch unit may be configured to inhibit using an address associated with the memory access instruction including the particular hint, when calculating subsequent memory blocks to prefetch. | 07-01-2010 |
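The hint-driven exclusion in 20100169606 can be illustrated with a toy stream prefetcher that simply skips hinted accesses when updating its stride state; the class, the hint flag, and the stride policy are assumptions for illustration, not the claimed hardware:

```python
class SimplePrefetcher:
    """Illustrative stream prefetcher that ignores hinted accesses when
    computing the next block to prefetch."""
    def __init__(self):
        self.last_miss_addr = None
        self.stride = None

    def on_cache_miss(self, addr, hinted=False):
        prefetches = []
        if hinted:
            # Access carried the "do not use for prefetch calculations" hint:
            # leave last_miss_addr/stride untouched and issue nothing.
            return prefetches
        if self.last_miss_addr is not None:
            self.stride = addr - self.last_miss_addr
            if self.stride:
                prefetches.append(addr + self.stride)  # next predicted block
        self.last_miss_addr = addr
        return prefetches

pf = SimplePrefetcher()
print(pf.on_cache_miss(0x1000))                 # []
print(pf.on_cache_miss(0x1040))                 # [0x1080]
print(pf.on_cache_miss(0x9000, hinted=True))    # [] - hinted access ignored
print(pf.on_cache_miss(0x1080))                 # [0x10C0] - stride unperturbed
```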
20100199063 | METHODS AND MECHANISMS FOR PROACTIVE MEMORY MANAGEMENT - A proactive, resilient and self-tuning memory management system and method that result in actual and perceived performance improvements in memory management, by loading and maintaining data that is likely to be needed into memory, before the data is actually needed. The system includes mechanisms directed towards historical memory usage monitoring, memory usage analysis, refreshing memory with highly-valued (e.g., highly utilized) pages, I/O pre-fetching efficiency, and aggressive disk management. Based on the memory usage information, pages are prioritized with relative values, and mechanisms work to pre-fetch and/or maintain the more valuable pages in memory. Pages are pre-fetched and maintained in a prioritized standby page set that includes a number of subsets, by which more valuable pages remain in memory over less valuable pages. Valuable data that is paged out may be automatically brought back, in a resilient manner. Benefits include significantly reducing or even eliminating disk I/O due to memory page faults. | 08-05-2010 |
20120060013 | Effective Memory Clustering to Minimize Page Fault and Optimize Memory Utilization - An embodiment of the invention provides a method for organizing data addresses within a virtual address space to reduce the number of data fetches to a cloud computing environment. More specifically, data access requests to the cloud computing environment are monitored to identify data addresses having similar properties. Multi-dimensional clusters are created based on the monitoring to group the data addresses having similar properties. A memory page is created from a multi-dimensional cluster, wherein the creating of the memory page includes creating a cross-sectional partition from the multi-dimensional cluster. The multi-dimensional clusters and the memory page are stored in the cloud computing environment. A request for a data object in the cloud computing environment is received from a user interface. The data address corresponding to the data object is identified and mapped to the multi-dimensional cluster and/or the memory page. The memory page is transferred to the user interface. | 03-08-2012 |
20120066472 | MACROSCALAR PROCESSOR ARCHITECTURE - A macroscalar processor architecture is described herein. In one embodiment, a processor receives instructions of a program loop having a vector block and a sequence block intended to be executed after the vector block, where the processor includes multiple slices and each of the slices is capable of executing an instruction of an iteration of the program loop substantially in parallel. For each iteration of the program loop, the processor executes an instruction of the sequence block using one of the slices while executing instructions of the vector block using a remainder of the slices substantially in parallel. Other methods and apparatuses are also described. | 03-15-2012 |
20120131305 | PAGE AWARE PREFETCH MECHANISM - A processor includes a page aware prefetch unit having a storage with a number of entries, and each entry corresponds to a different prefetch data stream. Each entry may be configured to store information corresponding to a page size of the prefetch data stream, along with, for example, an address corresponding to the prefetch data stream. For each entry, the prefetch unit may be configured to determine whether a prefetch of data in the data stream will cross a page boundary associated with the data stream based upon the page size information. | 05-24-2012 |
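The page-boundary test at the heart of 20120131305 is ordinary address arithmetic once the per-stream page size is known; a sketch (the function and parameter names are illustrative):

```python
def crosses_page_boundary(current_addr, prefetch_bytes, page_size):
    """Return True if prefetching `prefetch_bytes` starting at `current_addr`
    would touch a different page than `current_addr`, for the page size
    recorded in the stream's entry."""
    assert page_size & (page_size - 1) == 0, "page size must be a power of two"
    page_mask = ~(page_size - 1)
    last_byte = current_addr + prefetch_bytes - 1
    return (current_addr & page_mask) != (last_byte & page_mask)

# usage: 64-byte prefetch near the end of a 4 KiB page
print(crosses_page_boundary(0x1FE0, 64, 4096))   # True  - spills into next page
print(crosses_page_boundary(0x1F00, 64, 4096))   # False - stays in same page
```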
20120216008 | DYNAMIC LOOK-AHEAD EXTENT MIGRATION FOR TIERED STORAGE ARCHITECTURES - A method for migrating extents between extent pools in a tiered storage architecture maintains a data access profile for an extent over a period of time. Using the data access profile, the method generates an extent profile graph that predicts data access rates for the extent into the future. The slope of the extent profile graph is calculated and used to determine whether the extent will reach a migration threshold within a specified “look-ahead” time. If so, the method calculates a migration window that allows the extent to be migrated prior to reaching the migration threshold. In certain embodiments, the method determines the overall performance impact on the source extent pool and destination extent pool during the migration window. If the overall performance impact is below a designated impact threshold, the method migrates the extent during the migration window. | 08-23-2012 |
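A toy version of the look-ahead decision in 20120216008: fit a slope to the recent access-rate samples, extrapolate, and report whether (and when) the migration threshold is reached inside the look-ahead window. The least-squares fit and the units (hours, accesses per hour) are assumptions, not the patented model:

```python
def predict_threshold_crossing(access_profile, threshold, look_ahead_hours):
    """access_profile: list of (hour, accesses_per_hour) samples.
    Returns the predicted hour at which the extent hits `threshold`, or None
    if it does not within `look_ahead_hours` (simple least-squares slope)."""
    n = len(access_profile)
    xs = [t for t, _ in access_profile]
    ys = [r for _, r in access_profile]
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    denom = sum((x - mean_x) ** 2 for x in xs)
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / denom
    if slope <= 0:
        return None                       # access rate flat or falling
    now, rate_now = access_profile[-1]
    hours_needed = (threshold - rate_now) / slope
    return now + hours_needed if hours_needed <= look_ahead_hours else None

profile = [(0, 100), (1, 150), (2, 210), (3, 260)]   # rising roughly 54/hour
print(predict_threshold_crossing(profile, threshold=500, look_ahead_hours=6))
```

If a crossing is predicted, the remaining step in the abstract is to schedule a migration window that ends before the predicted hour and to check the performance impact on the source and destination pools over that window.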
20120246440 | CO-STORAGE OF DATA STORAGE PAGE LINKAGE, SIZE, AND MAPPING - A logical page identity for a logical page containing data storage application data can be mapped to a physical storage page location in a storage where the data of the logical page are stored. The mapping as well as additional page data can be retained within a persistence layer accessible to the data storage application. The additional page data can include at least one of a size of the page and a next page linkage indicating a second page that follows the page in a page sequence of related pages. The retained mapping and additional page data can be retrieved from the persistence layer to initiate a page operation on the related pages, and the page operation can be executed on the related pages based on the retrieved mapping and additional page data. Related methods, systems, and articles of manufacture are also disclosed. | 09-27-2012 |
20120265962 | HIGH-PERFORMANCE SAS TARGET - A method for data storage includes, in a storage device that communicates with a host over a storage interface for executing a storage command in a memory of the storage device, estimating an expected data under-run between fetching data for the storage command from the memory and sending the data over the storage interface. A data size to be prefetched from the memory, in order to complete uninterrupted execution of the storage command, is calculated in the storage device based on the estimated data under-run. The storage command is executed in the memory while prefetching from the memory data of at least the calculated data size. | 10-18-2012 |
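The under-run-driven size calculation in 20120265962 amounts to buffering enough data up front that the link never starves while the rest streams from memory; a back-of-the-envelope sketch with invented rate and latency parameters:

```python
def prefetch_size_for_underrun(transfer_bytes, link_rate_bps, mem_rate_bps,
                               mem_latency_s):
    """Estimate how many bytes must already be buffered (prefetched) so the
    link never starves while the remaining data streams from memory.
    All rates and latencies are illustrative, not the patented model."""
    if mem_rate_bps >= link_rate_bps:
        # Memory keeps up; only the initial access latency must be hidden.
        return int(link_rate_bps * mem_latency_s)
    transfer_time = transfer_bytes / link_rate_bps       # time the link needs
    supplied = mem_rate_bps * transfer_time              # bytes memory delivers
    underrun = transfer_bytes - supplied                 # shortfall to cover
    return int(underrun + link_rate_bps * mem_latency_s)

# usage: 1 MiB command, 600 MB/s link, 400 MB/s flash, 50 us initial latency
print(prefetch_size_for_underrun(1 << 20, 600e6, 400e6, 50e-6))
```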
20130166874 | I/O CONTROLLER AND METHOD FOR OPERATING AN I/O CONTROLLER - An I/O controller, coupled to a processing unit and to a memory, includes an I/O link interface configured to receive data packets having virtual addresses; an address translation unit having an address translator to translate received virtual addresses into real addresses by translation control entries and a cache allocated to the address translator to cache a number of the translation control entries; an I/O packet processing unit for checking the data packets received at the I/O link interface and for forwarding the checked data packets to the address translation unit; and a prefetcher to forward address translation prefetch information from a data packet received to the address translation unit; the address translator configured to fetch the translation control entry for the data packet by the address translation prefetch information from the allocated cache or, if the translation control entry is not available in the allocated cache, from the memory. | 06-27-2013 |
20130198485 | METHODS OF AND APPARATUS FOR STORING DATA IN MEMORY IN DATA PROCESSING SYSTEMS - A data array | 08-01-2013 |
20130246733 | PARALLEL PROCESSING DEVICE - A parallel processing device includes a processing sequence management unit that reads, in sequence, the commands from the command corresponding to a parallel processing start bit through the command corresponding to a parallel processing completion bit from a sequence command storage, making the sequence command storage output the commands to a first address management unit and a second address management unit. The first address management unit refers to the sequence commands read from the sequence command storage in order from the head to find the command that a first processing execution unit executes, and then instructs the first processing execution unit to execute the command; the second address management unit refers to the sequence commands read from the sequence command storage in order from the head to find the command that a second processing execution unit executes, and then instructs the second processing execution unit to execute the command. | 09-19-2013 |
20130332699 | TARGET BUFFER ADDRESS REGION TRACKING - Embodiments relate to target buffer address region tracking. An aspect includes receiving a restart address, and comparing, by a processing circuit, the restart address to a first stored address and to a second stored address. The processing circuit determines which of the first and second stored addresses is identified as a same range and a different range to form a predicted target address range defining an address region associated with an entry in the target buffer. Based on determining that the restart address matches the first stored address, the first stored address is identified as the same range and the second stored address is identified as the different range. Based on determining that the restart address matches the second stored address, the first stored address is identified as the different range and the second stored address is identified as the same range. | 12-12-2013 |
20140115294 | MEMORY PAGE MANAGEMENT - According to one embodiment, a method for operating a memory device includes receiving a first request from a requestor, wherein the first request includes accessing data at a first memory location in a memory bank, opening a first page in the memory bank, wherein opening the first page includes loading a row including the first memory location into a buffer, the row being loaded from a row location in the memory bank and transmitting the data from the first memory location to the requestor. The method also includes determining, by a memory controller, whether to close the first page following execution of the first request based on information relating to a likelihood that a subsequent request will access the first page. | 04-24-2014 |
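The close-or-keep-open decision at the end of 20140115294 can be illustrated as a simple policy over recent row-buffer hit history; the history window and the threshold are assumptions, not the claimed controller:

```python
def should_close_page(row_hit_history, threshold=0.5):
    """Decide whether to close (precharge) the open row after a request.
    row_hit_history: recent booleans, True when the following request hit the
    same open page. Close when the observed hit likelihood is low."""
    if not row_hit_history:
        return True                        # no information: default to closed
    hit_rate = sum(row_hit_history) / len(row_hit_history)
    return hit_rate < threshold

print(should_close_page([True, True, False, True]))    # False: keep page open
print(should_close_page([False, False, True, False]))  # True: close after use
```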
20140258674 | SYSTEM-ON-CHIP AND METHOD OF OPERATING THE SAME - A system on chip (SoC) includes a central processing unit (CPU), an intellectual property (IP) block, and a memory management unit (MMU). The CPU is configured to set a prefetch direction corresponding to a working set of data. The IP block is configured to process the working set of data. The MMU is configured to prefetch a next page table entry from a page table based on the prefetch direction during address translation between a virtual address of the working set of data and a physical address. | 09-11-2014 |
20140351551 | MEMORY-NETWORK PROCESSOR WITH PROGRAMMABLE OPTIMIZATIONS - Various embodiments are disclosed of a multiprocessor system with processing elements optimized for high performance and low power dissipation and an associated method of programming the processing elements. Each processing element may comprise a fetch unit and a plurality of address generator units and a plurality of pipelined datapaths. The fetch unit may be configured to receive a multi-part instruction, wherein the multi-part instruction includes a plurality of fields. A first address generator unit may be configured to perform an arithmetic operation dependent upon a first field of the plurality of fields. A second address generator unit may be configured to generate at least one address of a plurality of addresses, wherein each address is dependent upon a respective field of the plurality of fields. A parallel assembly language may be used to control the plurality of address generator units and the plurality of pipelined datapaths. | 11-27-2014 |
711205000 | Directories and tables (e.g., DLAT, TLB) | 34 |
20080270738 | Virtual address hashing - Embodiments include methods, apparatus, and systems for virtual address hashing. One embodiment evenly distributes page-table entries throughout a hash table so applications do not generate a same hash index for mapping virtual addresses to physical addresses. | 10-30-2008 |
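The even distribution goal in 20080270738 is commonly reached by mixing more virtual-address bits, plus per-address-space information, into the hash; a sketch using an illustrative golden-ratio multiplicative hash (the constants and the ASID input are assumptions, not the patented scheme):

```python
def hashed_pte_index(virtual_addr, asid, table_size, page_shift=12):
    """Compute a hashed page-table index from the virtual page number and an
    address-space id (asid), so identical VPNs in different address spaces
    (and strided VPNs within one space) land in different buckets."""
    vpn = virtual_addr >> page_shift
    key = (vpn << 16) ^ asid
    mixed = (key * 0x9E3779B97F4A7C15) & 0xFFFFFFFFFFFFFFFF  # 64-bit mix
    return mixed % table_size

# Two processes touching the same virtual page get different buckets:
print(hashed_pte_index(0x7F0000001000, asid=1, table_size=1024))
print(hashed_pte_index(0x7F0000001000, asid=2, table_size=1024))
```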
20090043985 | ADDRESS TRANSLATION DEVICE AND METHODS - A data processing device employs a first translation look-aside buffer (TLB) to translate virtual addresses to physical addresses. If a virtual address to be translated is not located in the first TLB, the physical address is requested from a set of page tables. When the data processing device is in a hypervisor mode, a second TLB is accessed in response to the request to access the page tables. If the virtual address is located in the second TLB, the hypervisor page tables are bypassed and the second TLB provides a physical address or information to access another table in the set of page tables. By bypassing the hypervisor page tables, the time to translate an address in the hypervisor mode is reduced, thereby improving the efficiency of the data processing device. | 02-12-2009 |
20090070545 | PROCESSING SYSTEM IMPLEMENTING VARIABLE PAGE SIZE MEMORY ORGANIZATION USING A MULTIPLE PAGE PER ENTRY TRANSLATION LOOKASIDE BUFFER - A processing system includes memory management software responsive to changes in a page table to consolidate a run of contiguous page table entries into a page table entry having a larger memory page size. The memory management software determines whether the run of contiguous page table entries may be cached using the larger memory page size in an entry of a translation lookaside buffer. The translation lookaside buffer may be a MIPS-like TLB in which multiple page table entries are cached in each TLB entry. | 03-12-2009 |
20090106523 | TRANSLATION LOOK-ASIDE BUFFER WITH VARIABLE PAGE SIZES - Multiple pipelined Translation Look-aside Buffer (TLB) units are configured to compare a translation address with associated TLB entries. The TLB units operate in serial order, comparing the translation address with associated TLB entries until an identified one of the TLB units produces a hit. The TLB units following the TLB unit producing the hit might be disabled. | 04-23-2009 |
20090187727 | INDEX GENERATION FOR CACHE MEMORIES - Embodiments of the present invention provide a system that generates an index for a cache memory. The system starts by receiving a request to access the cache memory, wherein the request includes address information. The system then obtains non-address information associated with the request. Next, the system generates the index using the address information and the non-address information. The system then uses the index to fulfill the access to the cache memory. | 07-23-2009 |
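Combining address and non-address information into a cache index, as in 20090187727 (and the later 20120166756 below), can be as simple as XOR-ing a few bits; the choice of thread id as the non-address input is an illustrative assumption:

```python
def cache_index(addr, thread_id, num_sets=256, line_shift=6):
    """Form a set index from address bits XOR-ed with non-address request
    information (here a requesting thread id), so requestors with identical
    access patterns do not all map to the same sets."""
    set_bits_from_addr = (addr >> line_shift) & (num_sets - 1)
    return (set_bits_from_addr ^ (thread_id * 0x9D)) & (num_sets - 1)

# Same address, different requestors -> different sets
print(cache_index(0x40001230, thread_id=0))
print(cache_index(0x40001230, thread_id=3))
```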
20090198950 | Techniques for Indirect Data Prefetching - A processor includes a first address translation engine, a second address translation engine, and a prefetch engine. The first address translation engine is configured to determine a first memory address of a pointer associated with a data prefetch instruction. The prefetch engine is coupled to the first translation engine and is configured to fetch content, included in a first data block (e.g., a first cache line) of a memory, at the first memory address. The second address translation engine is coupled to the prefetch engine and is configured to determine a second memory address based on the content of the memory at the first memory address. The prefetch engine is also configured to fetch (e.g., from the memory or another memory) a second data block (e.g., a second cache line) that includes data at the second memory address. | 08-06-2009 |
20090204785 | Computer with two execution modes - A computer. A processor pipeline alternately executes instructions coded for first and second different computer architectures or coded to implement first and second different processing conventions. A memory stores instructions for execution by the processor pipeline, the memory being divided into pages for management by a virtual memory manager, a single address space of the memory having first and second pages. A memory unit fetches instructions from the memory for execution by the pipeline, and fetches stored indicator elements associated with respective memory pages of the single address space from which the instructions are to be fetched. Each indicator element is designed to store an indication of which of the two different computer architectures and/or execution conventions the instruction data of the associated page are to be executed under by the processor pipeline. The memory unit and/or processor pipeline recognizes an execution flow from the first page, whose associated indicator element indicates the first architecture or execution convention, to the second page, whose associated indicator element indicates the second architecture or execution convention. In response to the recognizing, a processing mode of the processor pipeline or a storage content of the memory adapts to effect execution of instructions in the architecture and/or under the convention indicated by the indicator element corresponding to the instruction's page. | 08-13-2009 |
20100106936 | Calculator and TLB control method - A calculator includes a main TLB that stores therein a plurality of address translation pairs indicating a correspondence of a virtual address and an absolute address as a page table and a micro TLB that stores therein part of the page table stored in the main TLB. In the micro TLB, a TLB virtual address [63:13] and a TLB absolute address [46:13] are registered in a correlated manner. With such configuration, when registering an address translation pair in the micro TLB, the calculator chops the address translation pair to a page size of a first size or a fourth size to register it in the micro TLB. Upon receiving an address translation request, the calculator searches for an address corresponding to the page size of the first size or the fourth size registered in the micro TLB, so that address comparison conditions can be reduced, enabling to improve a processing performance. | 04-29-2010 |
20100122062 | Using an IOMMU to Create Memory Archetypes - In one embodiment, an input/output (I/O) memory management unit (IOMMU) comprises at least one memory and control logic coupled to the memory. The memory is configured to store translation data corresponding to one or more I/O translation tables stored in a memory system of a computer system that includes the IOMMU. The control logic is configured to translate an I/O device-generated memory request using the translation data. The translation data includes a type field indicating one or more attributes of the translation, and the control logic is configured to control the translation responsive to the type field. | 05-13-2010 |
20100161933 | Storage device with manual learning - In a particular embodiment, a system is disclosed that includes a controller to read data from and write data to a first storage medium. The controller is adapted to monitor logical block addresses (LBAs) of each read operation from the first storage medium and to selectively store files associated with the monitored LBAs that are less than a predetermined length at a second storage medium to enhance performance of applications associated with the LBAs. | 06-24-2010 |
20110264887 | Preload instruction control - A processor | 10-27-2011 |
20120131306 | Streaming Translation in Display Pipe - In an embodiment, a display pipe includes one or more translation units corresponding to images that the display pipe is reading for display. Each translation unit may be configured to prefetch translations ahead of the image data fetches, which may prevent translation misses in the display pipe (at least in most cases). The translation units may maintain translations in first-in, first-out (FIFO) fashion, and the display pipe fetch hardware may inform the translation unit when a given translation or translations are no longer needed. The translation unit may invalidate the identified translations and prefetch additional translations for virtual pages that are contiguous with the most recently prefetched virtual page. | 05-24-2012 |
20120166756 | INDEX GENERATION FOR CACHE MEMORIES - Embodiments of the present invention provide a system that generates an index for a cache memory. The system starts by receiving a request to access the cache memory, wherein the request includes address information. The system then obtains non-address information associated with the request. Next, the system generates the index using the address information and the non-address information. The system then uses the index to fulfill the access to the cache memory. | 06-28-2012 |
20130097402 | DATA PREFETCHING METHOD FOR DISTRIBUTED HASH TABLE DHT STORAGE SYSTEM, NODE, AND SYSTEM - Embodiments of the present disclosure provide a data prefetching method, a node, and a system. The method includes: a first storage node receives a read request sent by a client, determines a to-be-prefetched data block and a second storage node where the to-be-prefetched data block resides according to a read data block and a set to-be-prefetched data block threshold, and sends a prefetching request to the second storage node, the prefetching request includes identification information of the to-be-prefetched data block, and the identification information is used to identify the to-be-prefetched data block; and the second storage node reads the to-be-prefetched data block from a disk according to the prefetching request, and stores the to-be-prefetched data block in a local buffer, so that the client reads the to-be-prefetched data block from the local buffer of the second storage node. | 04-18-2013 |
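The split of work between the first and second storage nodes in 20130097402 can be sketched as follows; the node classes, the block-to-node placement rule, and the prefetch threshold are illustrative assumptions:

```python
class StorageNode:
    """Illustrative DHT storage node with a disk and a local prefetch buffer."""
    def __init__(self, name, disk):
        self.name = name
        self.disk = disk          # block id -> data
        self.buffer = {}          # local prefetch buffer

    def handle_prefetch(self, block_id):
        # Second node's side: stage the block from disk into the local buffer.
        if block_id not in self.buffer and block_id in self.disk:
            self.buffer[block_id] = self.disk[block_id]

def handle_read(first_node, nodes, block_id, prefetch_count=2):
    """First node serves the read, then sends prefetch requests to the nodes
    that own the next `prefetch_count` blocks."""
    data = first_node.disk.get(block_id, first_node.buffer.get(block_id))
    for next_id in range(block_id + 1, block_id + 1 + prefetch_count):
        owner = nodes[next_id % len(nodes)]       # assumed DHT placement rule
        owner.handle_prefetch(next_id)
    return data

nodes = [StorageNode("n0", {0: b"a", 2: b"c"}),
         StorageNode("n1", {1: b"b", 3: b"d"})]
print(handle_read(nodes[0], nodes, 0))                 # b'a'
print(1 in nodes[1].buffer, 2 in nodes[0].buffer)      # True True
```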
20130151809 | ARITHMETIC PROCESSING DEVICE AND METHOD OF CONTROLLING ARITHMETIC PROCESSING DEVICE - An arithmetic processing device includes: a processing unit configured to execute threads and output a memory request including a virtual address; a buffer configured to register some of address translation pairs stored in a memory, each of the address translation pairs including a virtual address and a physical address; a controller configured to issue requests for obtaining the corresponding address translation pairs to the memory for individual threads when an address translation pair corresponding to the virtual address included in the memory request output from the processing unit is not registered in the buffer; table fetch units configured to obtain the corresponding address translation pairs from the memory for individual threads when the requests for obtaining the corresponding address translation pairs are issued; and a registration controller configured to register one of the obtained address translation pairs in the buffer. | 06-13-2013 |
20130227245 | MEMORY MANAGEMENT UNIT WITH PREFETCH ABILITY - Techniques are disclosed relating to integrated circuits that implement a virtual memory. In one embodiment, an integrated circuit is disclosed that includes a translation lookaside buffer configured to store non-prefetched translations and a translation table configured to store prefetched translations. In such an embodiment, the translation lookaside buffer and the translation table share table walk circuitry. In some embodiments, the table walk circuitry is configured to store a translation in the translation table in response to a prefetch request and without updating the translation lookaside buffer. In some embodiments, the translation lookaside buffer, the translation table, and table walk circuitry are included within a memory management unit configured to service memory requests received from a plurality of client circuits via a plurality of direct memory access (DMA) channels. | 08-29-2013 |
20130339650 | PREFETCH ADDRESS TRANSLATION USING PREFETCH BUFFER - Embodiments relate to prefetch address translation in a computer processor. An aspect includes issuing, by prefetch logic, a prefetch request comprising a virtual page address. Another aspect includes, based on the prefetch request missing the TLB and the address translation logic of the processor being busy performing a current translation request, comparing a page of the prefetch request to a page of the current translation request. Yet another aspect includes, based on the page of the prefetch request matching the page of the current translation request, storing the prefetch request in a prefetch buffer. | 12-19-2013 |
20140052954 | SYSTEM TRANSLATION LOOK-ASIDE BUFFER WITH REQUEST-BASED ALLOCATION AND PREFETCHING - A system TLB accepts translation prefetch requests from initiators. Misses generate external translation requests to a walker port. Attributes of the request such as ID, address, and class, as well as the state of the TLB affect the allocation policy of translations within multiple levels of translation tables. Translation tables are implemented with SRAM, and organized in groups. | 02-20-2014 |
20140052955 | DMA ENGINE WITH STLB PREFETCH CAPABILITIES AND TETHERED PREFETCHING - A system with a prefetch address generator coupled to a system translation look-aside buffer that comprises a translation cache. Prefetch requests are sent for page address translations for predicted future normal requests. Prefetch requests are filtered to only be issued for address translations that are unlikely to be in the translation cache. Pending prefetch requests are limited to a configurable or programmable number. Such a system is simulated from a hardware description language representation. | 02-20-2014 |
20140052956 | STLB PREFETCHING FOR A MULTI-DIMENSION ENGINE - A multi-dimension engine, connected to a system TLB, generates sequences of addresses to request page address translation prefetch requests in advance of predictable accesses to elements within data arrays. Prefetch requests are filtered to avoid redundant requests of translations to the same page. Prefetch requests run ahead of data accesses but are tethered to within a reasonable range. The number of pending prefetches are limited. A system TLB stores a number of translations, the number being relative to the dimensions of the range of elements accessed from within the data array. | 02-20-2014 |
20140075147 | TRANSACTIONAL MEMORY THAT PERFORMS AN ATOMIC LOOK-UP, ADD AND LOCK OPERATION - A transactional memory (TM) receives an Atomic Look-up, Add and Lock (ALAL) command across a bus from a client. The command includes a first value. The TM pulls a second value. The TM uses the first value to read a set of memory locations, and determines if any of the locations contains the second value. If no location contains the second value, then the TM locks a vacant location, adds the second value to the vacant location, and sends a result to the client. If a location contains the second value and it is not locked, then the TM locks the location and returns a result to the client. If a location contains the second value and it is locked, then the TM returns a result to the client. Each location has an associated data structure. Setting the lock field of a location locks access to its associated data structure. | 03-13-2014 |
20140108766 | PREFETCHING TABLEWALK ADDRESS TRANSLATIONS - A processing unit includes a translation look-aside buffer operable to store a plurality of virtual address translation entries, a prefetch buffer, and logic operable to receive a first virtual address translation associated with a first virtual memory block and a second virtual address translation associated with a second virtual memory block immediately adjacent the first virtual memory block, store the first virtual address translation in the transaction look-aside buffer, and store the second virtual address translation in the prefetch buffer. | 04-17-2014 |
20140115295 | DYNAMIC ADDRESS TRANSLATION WITH FETCH PROTECTION IN AN EMULATED ENVIRONMENT - What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated is first obtained and an initial origin address of a translation table of the hierarchy of translation tables is obtained. Based on the obtained initial origin, a segment table entry is obtained. The segment table entry is configured to contain a format control and access validity fields. If the format control and access validity fields are enabled, the segment table entry further contains an access control field, a fetch protection field, and a segment-frame absolute address. Store operations are permitted only if the access control field matches a program access key provided by any one of a Program Status Word or an operand of a program instruction being emulated. Fetch operations are permitted if the program access key associated with the virtual address is equal to the segment access control field or the fetch protection field is not enabled. | 04-24-2014 |
20140129794 | SPECULATIVE TABLEWALK PROMOTION - A method includes performing a speculative tablewalk. The method includes performing a tablewalk to determine an address translation for a speculative operation and determining whether the speculative operation has been upgraded to a non-speculative operation concurrently with performing the tablewalk. An apparatus is provided that includes a load-store unit to maintain execution operations. The load-store unit includes a tablewalker to perform a tablewalk and includes an input indicative of the operation being speculative or non-speculative as well as a state machine to determine actions performed during the tablewalk based on the input. The apparatus also includes a translation look-aside buffer. Computer readable storage devices for performing the methods and adapting a fabrication facility to manufacture the apparatus are provided. | 05-08-2014 |
20140195771 | ANTICIPATORILY LOADING A PAGE OF MEMORY - In a particular embodiment, a method of anticipatorily loading a page of memory is provided. The method may include, during execution of first program code using a first page of memory, collecting data for at least one attribute of the first page of memory, including collecting data, for a historical topology attribute of the first page of memory, about at least one next page of memory that interacts with the first page of memory. The method may also include, during execution of second program code using the first page of memory, determining a second page of memory to anticipatorily load based on the historical topology attribute of the first page of memory. | 07-10-2014 |
20140195772 | SYSTEM AND METHOD FOR OUT-OF-ORDER PREFETCH INSTRUCTIONS IN AN IN-ORDER PIPELINE - Apparatuses, systems, and a method for providing a processor architecture with data prefetching are described. In one embodiment, a system includes one or more processing units that include a first type of in-order pipeline to receive at least one data prefetch instruction. The one or more processing units include a second type of in-order pipeline having issues slots to receive instructions and a data prefetch queue to receive the at least one data prefetch instruction. The data prefetch queue may issue the at least one data prefetch instruction to the second type of in-order pipeline based upon one or more factors (e.g., at least one execution slot of the second type of in-order pipeline being available, priority of the data prefetch instruction). | 07-10-2014 |
20140237212 | TRACKING AND ELIMINATING BAD PREFETCHES GENERATED BY A STRIDE PREFETCHER - A method, an apparatus, and a non-transitory computer readable medium for tracking prefetches generated by a stride prefetcher are presented. Responsive to a prefetcher table entry for an address stream locking on a stride, prefetch suppression logic is updated and prefetches from the prefetcher table entry are suppressed when suppression is enabled for that prefetcher table entry. A stride is a difference between consecutive addresses in the address stream. A prefetch request is issued from the prefetcher table entry when suppression is not enabled for that prefetcher table entry. | 08-21-2014 |
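A behavioural sketch of the lock-on and suppression behaviour described in 20140237212: an entry locks onto a stride after repeated confirmation, and an external tracking mechanism may flag its prefetches as bad and suppress them. The confirmation threshold and the suppression flag are assumptions:

```python
class StrideEntry:
    """Per-address-stream entry: locks on after `lock_threshold` identical
    consecutive strides; prefetches can be suppressed externally."""
    def __init__(self, lock_threshold=2):
        self.last_addr = None
        self.stride = None
        self.confirmations = 0
        self.lock_threshold = lock_threshold
        self.suppressed = False          # set by bad-prefetch tracking logic

    @property
    def locked(self):
        return self.confirmations >= self.lock_threshold

    def access(self, addr):
        """Record an access; return a prefetch address or None."""
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if stride and stride == self.stride:
                self.confirmations += 1
            else:
                self.stride = stride
                self.confirmations = 0
        self.last_addr = addr
        if self.locked and not self.suppressed:
            return addr + self.stride
        return None

e = StrideEntry()
for a in (0x100, 0x140, 0x180, 0x1C0):
    print(hex(a), "->", e.access(a))      # locks on 0x40 stride at 0x1C0
e.suppressed = True                       # tracking logic flags prefetches as bad
print(hex(0x200), "->", e.access(0x200))  # suppressed: no prefetch issued
```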
20140281351 | STRIDE-BASED TRANSLATION LOOKASIDE BUFFER (TLB) PREFETCHING WITH ADAPTIVE OFFSET - A processing device implementing stride-based translation lookaside buffer (TLB) prefetching with adaptive offset is disclosed. A processing device of the disclosure includes a data prefetcher to generate a data prefetch address based on a linear address, a stride, or a prefetch distance, the data prefetch address associated with a data prefetch request, and a TLB prefetch address computation component to generate a TLB prefetch address based on the linear address, the stride, the prefetch distance, or an adaptive offset. The processing device also includes a cross page detection component to determine that the data prefetch address or the TLB prefetch address cross a page boundary associated with the linear address, and cause a TLB prefetch request to be written to a TLB request queue, the TLB prefetch request for translation of an address of a linear page number (LPN) based on the data prefetch address or the TLB prefetch address. | 09-18-2014 |
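The interplay in 20140281351 (a data prefetch address computed from linear address, stride, and distance; a TLB prefetch address pushed further ahead by an adaptive offset; and a cross-page check that decides when a TLB prefetch request is queued) can be sketched as follows; parameter names and the 4 KiB page size are illustrative:

```python
PAGE_SHIFT = 12  # assumption: 4 KiB pages

def maybe_queue_tlb_prefetch(linear_addr, stride, distance, adaptive_offset,
                             tlb_request_queue):
    """Compute the data prefetch address and a further-ahead TLB prefetch
    address; if either leaves the current page, queue a TLB prefetch for
    that page's linear page number (LPN)."""
    data_prefetch_addr = linear_addr + stride * distance
    tlb_prefetch_addr = linear_addr + stride * (distance + adaptive_offset)
    current_page = linear_addr >> PAGE_SHIFT
    for addr in (data_prefetch_addr, tlb_prefetch_addr):
        lpn = addr >> PAGE_SHIFT
        if lpn != current_page and lpn not in tlb_request_queue:
            tlb_request_queue.append(lpn)    # request translation of that LPN
    return data_prefetch_addr

queue = []
maybe_queue_tlb_prefetch(0x1FC0, stride=64, distance=4, adaptive_offset=8,
                         tlb_request_queue=queue)
print([hex(lpn) for lpn in queue])           # ['0x2'] - next page queued once
```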
20140281352 | MECHANISM FOR FACILITATING DYNAMIC AND EFFICIENT MANAGEMENT OF TRANSLATION BUFFER PREFETCHING IN SOFTWARE PROGRAMS AT COMPUTING SYSTEMS - A mechanism is described for facilitating dynamic and efficient binary translation-based translation lookaside buffer prefetching according to one embodiment. A method of embodiments, as described herein, includes translating code blocks into code translation blocks at a computing device. The code translation blocks are submitted for execution. The method may further include tracking, in runtime, dynamic system behavior of the code translation blocks, and inferring translation lookaside buffer (TLB) prefetching based on the analysis of the tracked dynamic system behavior. | 09-18-2014 |
20150058592 | INTER-CORE COOPERATIVE TLB PREFETCHERS - A chip multiprocessor includes a plurality of cores each having a translation lookaside buffer (TLB) and a prefetch buffer (PB). Each core is configured to determine a TLB miss on the core's TLB for a virtual page address and determine whether or not there is a PB hit on a PB entry in the PB for the virtual page address. If it is determined that there is a PB hit, the PB entry is added to the TLB. If it is determined that there is not a PB hit, the virtual page address is used to perform a page walk to determine a translation entry, the translation entry is added to the TLB and the translation entry is prefetched to each other one of the plurality of cores. | 02-26-2015 |
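The per-core miss flow in 20150058592 (TLB hit, else prefetch-buffer hit, else page walk plus a cooperative push of the new translation to the other cores' prefetch buffers) can be sketched as follows; the core, buffer, and page-walk stand-ins are assumptions:

```python
class Core:
    """Illustrative core with a TLB and a prefetch buffer (PB)."""
    def __init__(self, name):
        self.name = name
        self.tlb = {}         # vpn -> ppn
        self.pb = {}          # vpn -> ppn, filled by other cores' walks

def page_walk(page_table, vpn):
    return page_table[vpn]    # stand-in for a hardware page-table walk

def translate(core, other_cores, page_table, vpn):
    if vpn in core.tlb:                       # TLB hit
        return core.tlb[vpn]
    if vpn in core.pb:                        # TLB miss, PB hit: promote to TLB
        core.tlb[vpn] = core.pb.pop(vpn)
        return core.tlb[vpn]
    ppn = page_walk(page_table, vpn)          # TLB and PB miss: walk
    core.tlb[vpn] = ppn
    for other in other_cores:                 # cooperative prefetch to peers
        other.pb.setdefault(vpn, ppn)
    return ppn

cores = [Core("c0"), Core("c1")]
table = {0x42: 0x1000}
translate(cores[0], cores[1:], table, 0x42)   # c0 walks, pushes to c1's PB
print(0x42 in cores[1].pb)                    # True: c1 will later hit its PB
```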
20150082000 | SYSTEM-ON-CHIP AND ADDRESS TRANSLATION METHOD THEREOF - A memory management unit comprises an address translation unit that receives a memory access request as a virtual address and translates the virtual address to a physical address. A translation lookaside buffer stores page descriptors of a plurality of physical addresses, the address translation unit determining whether a page descriptor of a received virtual address is present in the translation lookaside buffer. A prefetch buffer stores page descriptors of the plurality of physical addresses. The address translation unit, in the event the page descriptor of the received virtual address is not present in the translation lookaside buffer, further determines whether the page descriptor of the received virtual address is present in the prefetch buffer; updates the translation lookaside buffer with the page descriptor in response to the determination; and performs a translation of the virtual address to a physical address using the page descriptor. | 03-19-2015 |
20160179688 | SIMULTANEOUS INVALIDATION OF ALL ADDRESS TRANSLATION CACHE ENTRIES ASSOCIATED WITH X86 PROCESS CONTEXT IDENTIFIER | 06-23-2016 |
20160179700 | Hiding Page Translation Miss Latency in Program Memory Controller By Selective Page Miss Translation Prefetch | 06-23-2016 |
20160179701 | ADDRESS TRANSLATION CACHE THAT SUPPORTS SIMULTANEOUS INVALIDATION OF COMMON CONTEXT ENTRIES | 06-23-2016 |