Entries |
Document | Title and Abstract | Date |
20080222343 | MULTIPLE ADDRESS SEQUENCE CACHE PRE-FETCHING - A method is provided for pre-fetching data into a cache memory. A first cache-line address of each of a number of data requests from at least one processor is stored. A second cache-line address of a next data request from the processor is compared to the first cache-line addresses. If the second cache-line address is adjacent to one of the first cache-line addresses, data associated with a third cache-line address adjacent to the second cache-line address is pre-fetched into the cache memory, if not already present in the cache memory. (A sketch of this adjacency heuristic appears after the table.) | 09-11-2008 |
20080244152 | Method and Apparatus for Configuring Buffers for Streaming Data Transfer - A specification of a configurable processor is generated by generating (1) specifications of first and second stream memory interfaces to be operable to access data in accordance with first and second stream descriptors, and (2) a specification of an interim data storage device (buffer) to be accessed by the first and second stream memory interfaces and to be operable to receive data from a first computational module via the first stream memory interface and to transfer data to a second computational module via the second stream memory interface. The specifications are output and may be used to configure a configurable processor. | 10-02-2008 |
20080244153 | CACHE SYSTEMS, COMPUTER SYSTEMS AND OPERATING METHODS THEREOF - Cache systems, computer systems and methods thereof are disclosed. A buffer buffers first data from a main memory prior to writing to the cache memory. In response to a cache hit, a word from the cache memory is read. In response to a cache miss, the first data is written from the buffer to the cache memory. When the cache hit occurs before all first data is written from the buffer to the cache memory, the reading is executed and the writing is paused. | 10-02-2008 |
20080263257 | Checkpointed Tag Prefetcher - A dual-mode prefetch mechanism for implementing checkpoint tag prefetching includes: a data array for storing data fetched from cache memory; a set of cache tags for identifying the data stored in the data array; a set of checkpoint tags for storing data identification; a cache controller including prefetch logic, the prefetch logic including a checkpoint prefetch controller and a checkpoint prefetch operator. | 10-23-2008 |
20080282019 | PROGRAM IDENTIFICATION USING A PORTABLE COMMUNICATION DEVICE - According to one aspect, a portable communication device records a program being presented by a media presenting apparatus as media data, generates a query regarding a media channel and a program on that channel, which query includes said media data and sends said query to a system for determining a program on a media channel operated by a program determination service provider. The system receives the query, compares the query media data with data of a number of sets of reference media data related to at least one reception environment, where each set corresponds to a broadcast media channel, identifies the media channel, identifies a program in the media channel through using an electronic program guide, and sends data identifying the channel and the program to the portable communication device. | 11-13-2008 |
20090049226 | STALE TRACK INITIALIZATION IN A STORAGE CONTROLLER - Deleting a data volume from a storage system and freeing its storage space to make it available for allocation to a new volume is accomplished by zeroing only the associated metadata for the tracks contained in the freed storage space, which is then reused in a new volume allocation; an attempt is then made by the new volume to read a first record R | 02-19-2009 |
20090132749 | Cache memory system - Systems and methods are disclosed for pre-fetching data into a cache memory system. These systems and methods comprise retrieving a portion of data from a system memory and storing a copy of the retrieved portion of data in a cache memory. These systems and methods further comprise monitoring data that has been placed into pre-fetch memory. | 05-21-2009 |
20090132750 | Cache memory system - The present disclosure provides systems and methods for a cache memory and a cache load circuit. The cache load circuit is capable of retrieving a portion of data from the system memory and of storing a copy of the retrieved portion of data in the cache memory. In addition, the systems and methods comprise a monitoring circuit for monitoring accesses to data in the system memory. | 05-21-2009 |
20090172243 | Providing metadata in a translation lookaside buffer (TLB) - In one embodiment, the present invention includes a translation lookaside buffer (TLB) to store entries each having a translation portion to store a virtual address (VA)-to-physical address (PA) translation and a second portion to store bits for a memory page associated with the VA-to-PA translation, where the bits indicate attributes of information in the memory page. Other embodiments are described and claimed. | 07-02-2009 |
20090187695 | HANDLING CONCURRENT ADDRESS TRANSLATION CACHE MISSES AND HITS UNDER THOSE MISSES WHILE MAINTAINING COMMAND ORDER - Apparatus handles concurrent address translation cache misses and hits under those misses while maintaining command order based upon virtual channel. Commands are stored in a command processing unit that maintains ordering of the commands. A command buffer index (CBI) is assigned to each address being sent from the command processing unit to an address translation unit. When an address translation cache miss occurs, a memory fetch request is sent. The CBI is passed back to the command processing unit with a signal to indicate that the fetch request has completed. The command processing unit uses the CBI to locate the command and address to be reissued to the address translation unit. | 07-23-2009 |
20090198865 | DATA PROCESSING SYSTEM, PROCESSOR AND METHOD THAT PERFORM A PARTIAL CACHE LINE STORAGE-MODIFYING OPERATION BASED UPON A HINT - In at least one embodiment, a method of data processing in a data processing system having a memory hierarchy includes a processor core executing a storage-modifying memory access instruction to determine a memory address. The processor core transmits to a cache memory within the memory hierarchy a storage-modifying memory access request including the memory address, an indication of a memory access type, and, if present, a partial cache line hint signaling access to less than all granules of a target cache line of data associated with the memory address. In response to the storage-modifying memory access request, the cache memory performs a storage-modifying access to all granules of the target cache line of data if the partial cache line hint is not present and performs a storage-modifying access to less than all granules of the target cache line of data if the partial cache line hint is present. | 08-06-2009 |
20090292857 | CACHE MEMORY UNIT - In a cache memory unit that temporarily stores data held in a main memory, the valid bits of the flag memory corresponding to the lines of the entries at the to-be-invalidated entry addresses are rewritten so as to indicate invalidation of those lines, whereby the lines of the entries at the to-be-invalidated entry addresses are invalidated. | 11-26-2009 |
20100088457 | CACHE MEMORY ARCHITECTURE HAVING REDUCED TAG MEMORY SIZE AND METHOD OF OPERATION THEREOF - A cache memory architecture, a method of operating a cache memory and a memory controller. In one embodiment, the cache memory architecture includes: (1) a segment memory configured to contain at least one most significant bit (MSB) of a main memory address, the at least one MSB being common to addresses in a particular main memory logical segment that includes the main memory address, (2) a tag memory configured to contain tags that include other bits of the main memory address and (3) combinatorial logic associated with the segment memory and the tag memory and configured to indicate a cache hit only when both the at least one most significant bit and the other bits match a requested main memory address. | 04-08-2010 |
20100122012 | SYSTOLIC NETWORKS FOR A SPIRAL CACHE - Systolic networks within a tiled storage array provide for movement of requested values to a front-most tile, while making space for the requested values at the front-most tile by moving other values away. A first and second information pathway provide different linear pathways through the tiles. The movement of other values, requests for values and responses to requests is controlled according to a clocking logic that governs the movement on the first and second information pathways according to a systolic duty cycle. The first information pathway may be a move-to-front network of a spiral cache, crossing the spiral network that forms the push-back network. The systolic duty cycle may be a three-phase duty cycle, or a two-phase duty cycle may be provided if the storage tiles support a push-back swap operation. | 05-13-2010 |
20100122013 | DATA STRUCTURE FOR ENFORCING CONSISTENT PER-PHYSICAL PAGE CACHEABILITY ATTRIBUTES - A data structure for enforcing consistent per-physical page cacheability attributes is disclosed. The data structure is used with a method for enforcing consistent per-physical page cacheability attributes, which maintains memory coherency within a processor addressing memory, such as by comparing a desired cacheability attribute of a physical page address in a PTE against an authoritative table that indicates the current cacheability status. This comparison can be made at the time the PTE is inserted into a TLB. When the comparison detects a mismatch between the desired cacheability attribute of the page and the page's current cacheability status, corrective action can be taken to transition the page into the desired cacheability state. | 05-13-2010 |
20100161873 | COMPRESSED CACHE CONTROLLER VALID WORD STATUS USING POINTERS - An apparatus having a memory and a controller is disclosed. The memory may be configured to (i) store a plurality of cache lines, each of the cache lines comprising a plurality of locations including a respective end location, and (ii) access a particular one of the cache lines identified by a cache address signal. The controller may be configured to (i) buffer a plurality of line pointers, each of the line pointers identifying a respective boundary one of the locations in one of the cache lines, and (ii) generate the cache address signal in response to a processor address signal hitting a given one of the locations residing between the respective boundary location and the respective end location. | 06-24-2010 |
20100191893 | Dual Access for Single Port Cache - A method and system for accessing a single port multi-way cache includes an address multiplexer that simultaneously addresses a set of data and a set of program instructions in the multi-way cache. Duplicate output way multiplexers respectively select data and program instructions read from the cache responsive to the address multiplexer. | 07-29-2010 |
20100205344 | UNIFIED CACHE STRUCTURE THAT FACILITATES ACCESSING TRANSLATION TABLE ENTRIES - One embodiment provides a system that includes a processor with a unified cache structure that facilitates accessing translation table entries (TTEs). This unified cache structure can simultaneously store program instructions, program data, and TTEs. During a memory access, the system receives a virtual memory address. The system then uses this virtual memory address to identify one or more cache lines in the unified cache structure which are associated with the virtual memory address. Next, the system compares a tag portion of the virtual memory address with the tags for the identified cache line(s) to identify a cache line that matches the virtual memory address. The system then loads a translation table entry that corresponds to the virtual memory address from the identified cache line. | 08-12-2010 |
20100217914 | MEMORY ACCESS DETERMINATION CIRCUIT, MEMORY ACCESS DETERMINATION METHOD AND ELECTRONIC DEVICE - A memory access determination circuit includes a counter that outputs a first value counted by using a first reference value, and a control unit that makes a cache determination of an address corresponding to an output of the counter, wherein, when a cache miss occurs for the address, the counter outputs a second value by using a second reference value. | 08-26-2010 |
20100262750 | REGION PREFETCHER AND METHODS THEREOF - A prefetch device and method are disclosed that determine from which addresses to speculatively fetch data based on information collected regarding previous cache-miss addresses. A historical record showing a propensity to experience cache-misses at a particular address-offset from a prior cache-miss address within a region of memory provides an indication that data needed by future instructions has an increased likelihood to be located at a similar offset from a current cache-miss address. The prefetch device disclosed herein maintains a record of the relationship between a cache-miss address and subsequent cache-miss addresses for the most recent sixty-four unique data manipulation instructions that resulted in a cache-miss. The record includes a weighted confidence value indicative of how many cache-misses previously occurred at each of a selection of offsets from a particular cache-miss address. (A sketch of this offset-confidence record appears after the table.) | 10-14-2010 |
20100287327 | COMPUTING SYSTEMS AND METHODS FOR MANAGING FLASH MEMORY DEVICE - A computing system is provided. A flash memory device includes at least one mapping block, at least one modification block and at least one cache block. A processor is configured to perform: receiving a write command with a write logical address and predetermined data; in response to a page of the mapping block corresponding to the write logical address having been used, loading content of a cache page from the cache block corresponding to the modification block, according to the write logical address, to a random access memory device; reading in order the content of the cache page stored in the random access memory device to obtain location information of an empty page of the modification block; and writing the predetermined data to the empty page according to the location information. Each cache page includes data fields that store, in order, location information corresponding to the data that has been written to the pages of the modification block. | 11-11-2010 |
20100332716 | METAPHYSICALLY ADDRESSED CACHE METADATA - Storing metadata that is disjoint from corresponding data by storing the metadata to the same address as the corresponding data but in a different address space. A metadata store instruction includes a storage address for the metadata. The storage address is the same address as that for data corresponding to the metadata, but the storage address, when used for the metadata, is implemented in a metadata address space, while the storage address, when used for the corresponding data, is implemented in a different data address space. As a result of executing the metadata store instruction, the metadata is stored at the storage address. A metadata load instruction includes the storage address for the metadata. As a result of executing the metadata load instruction, the metadata stored at the address is received. Some embodiments may further implement a metadata clear instruction which clears any entries in the metadata address space. | 12-30-2010 |
20100332717 | ACCESS DEVICE, INFORMATION RECORDING DEVICE, CONTROLLER, AND INFORMATION RECORDING SYSTEM - Provided is a method that, in the case of managing areas of a non-volatile memory of an information recording module by a file system, increases the speed of processing for writing file data and file system management information, and furthermore prevents a decrease in the rewriting lifetime of the non-volatile memory. | 12-30-2010 |
20110022773 | Fine Grained Cache Allocation - A mechanism is provided in a virtual machine monitor for fine grained cache allocation in a shared cache. The mechanism partitions a cache tag into a most significant bit (MSB) portion and a least significant bit (LSB) portion. The MSB portion of the tags is shared among the cache lines in a set. The LSB portion of the tags is private, one per cache line. The mechanism allows software to set the MSB portion of tags in a cache to allocate sets of cache lines. The cache controller determines whether a cache line is locked based on the MSB portion of the tag. (A sketch of this tag partition appears after the table.) | 01-27-2011 |
20110022774 | CACHE MEMORY CONTROL METHOD, AND INFORMATION STORAGE DEVICE COMPRISING CACHE MEMORY - According to a cache memory control method of an embodiment, a data write position in a segment of a cache memory is changed to an address to which a lower bit of a logical block address of write data is added as an offset. Then, even if writing is completed within the segment of the cache memory, the remaining regions of the segment are not wasted. | 01-27-2011 |
20110035531 | COHERENCY CONTROL SYSTEM, COHERENCY CONTROL APPARATUS AND COHERENCY CONTROL METHOD - A coherency control system includes a logical-physical address translation unit which translates a logical address including a first tag and an index address into a physical address including a second tag and the index address, a request output unit which transmits a load request, a corresponding state storage unit which stores a relation state between an area of the second storage apparatus and an area of the first storage apparatus based on the way number included in the load request and the second tag and the index address of the physical address also included in the load request which has been received, and an invalidation instructing unit which transmits an invalidation instruction including the index address and the way number based on the second tag of the physical address included in the store request and the relation state stored in the corresponding state storage unit. | 02-10-2011 |
20110047314 | FAST AND EFFICIENT DETECTION OF BREAKPOINTS - A microprocessor breakpoint checks a load/store operation specifying a load/store virtual address of data whose first and second pieces are within first and second cache lines. A queue of entries each include first storage for an address associated with the operation and second storage for an indicator indicating whether there is a match between a page address portion of the virtual address and a page address portion of a breakpoint address. During a first pass through a load/store unit pipeline, the unit performs a first piece breakpoint check using the virtual address, populates the second storage indicator, and populates the first storage with a physical address translated from the virtual address. During the second pass, the unit performs a second piece breakpoint check using the indicator received from the second storage and an incremented version of a page offset portion of the load/store physical address received from the first storage. | 02-24-2011 |
20110066785 | Memory Management System and Method Thereof - A memory management system and method include and use a cache buffer (such as a translation look-aside buffer, TLB), a memory mapping table, a scratchpad cache, and a memory controller. The cache buffer is configured to store a plurality of data structures. The memory mapping table is configured to store a plurality of addresses of the data structures. The scratchpad cache is configured to store the base address of the data structures. The memory controller is configured to control reading and writing in the cache buffer and the scratchpad cache. The components are operable together under control of the memory controller to facilitate effective searching of the data structures in the memory management system. | 03-17-2011 |
20110072187 | DYNAMIC STORAGE OF CACHE DATA FOR SOLID STATE DISKS - Described embodiments provide a media controller that determines the size of a cache of data being transferred between a host device and one or more sectors of a storage device. The one or more sectors are segmented into a plurality of chunks, and each chunk corresponds to at least one sector. The contents of the cache are managed in a cache hash table. At startup of the media controller, a buffer layer module of the media controller initializes the cache in a buffer of the media controller. During operation of the media controller, the buffer layer module determines a number of chunks allocated to the cache. Based on the number of chunks allocated to the cache, the buffer layer module updates the size of the cache hash table. | 03-24-2011 |
20110078358 | DEFERRED COMPLETE VIRTUAL ADDRESS COMPUTATION FOR LOCAL MEMORY SPACE REQUESTS - One embodiment of the present invention sets forth a technique for computing virtual addresses for accessing thread data. Components of the complete virtual address for a thread group are used to determine whether a cache line corresponding to the complete virtual address is allocated in the cache. Actual computation of the complete virtual address is deferred until after determining that a cache line corresponding to the complete virtual address is not allocated in the cache. | 03-31-2011 |
20110093645 | METHOD AND APPARATUS TO RECORD DATA, METHOD AND APPARATUS TO REPRODUCE DATA, AND RECORDING MEDIUM - A data recording method including, when moving data stored in a cache to a data storage medium, selecting one cache area from an extended cache area group of the data storage medium by using managing information of a translation layer, moving the data stored in the cache to the selected cache area by using a physical address of the data storage medium on the selected cache area, and updating the managing information of the translation layer, wherein the managing information of the translation layer includes a physical block address-based address of the extended cache area group in the data storage medium. | 04-21-2011 |
20110119426 | LIST BASED PREFETCH - A list prefetch engine improves a performance of a parallel computing system. The list prefetch engine receives a current cache miss address. The list prefetch engine evaluates whether the current cache miss address is valid. If the current cache miss address is valid, the list prefetch engine compares the current cache miss address and a list address. A list address represents an address in a list. A list describes an arbitrary sequence of prior cache miss addresses. The prefetch engine prefetches data according to the list, if there is a match between the current cache miss address and the list address. (A sketch of this list prefetch engine appears after the table.) | 05-19-2011 |
20110161548 | Efficient Multi-Level Software Cache Using SIMD Vector Permute Functionality - A cache manager receives a request for data, which includes a requested effective address. The cache manager determines whether the requested effective address matches a most recently used effective address stored in a mapped tag vector. When the most recently used effective address matches the requested effective address, the cache manager identifies a corresponding cache location and retrieves the data from the identified cache location. However, when the most recently used effective address fails to match the requested effective address, the cache manager determines whether the requested effective address matches a subsequent effective address stored in the mapped tag vector. When the cache manager determines a match to a subsequent effective address, the cache manager identifies a different cache location corresponding to the subsequent effective address and retrieves the data from the different cache location. | 06-30-2011 |
20110161549 | MEMORY CONTROL DEVICE AND CACHE MEMORY CONTROLLING METHOD - A memory control device for controlling an access from a processing unit to a cache memory, the memory control device includes: an address estimation circuit for receiving a first read address of the cache memory from the processing unit and estimating a second read address on the basis of the first read address; an access start detection circuit for detecting the start of an access to the cache memory at the first read address and outputting an access start signal; a data control circuit for receiving read data from the cache memory and for outputting the read data to the processing unit; and a clock control circuit for controlling a read clock to be output to the processing unit in response to the access start signal, the processing unit receiving the read data from the data control circuit with the read clock. | 06-30-2011 |
20110185104 | Merging Subsequent Updates To A Memory Location - A method of merging subsequent updates to a memory location includes receiving, at a first stage in an update pipeline, a first request to update a status word at a first address of a cache memory and receiving the status word from the cache memory. The method continues with determining, at a stage subsequent to the first stage, that a second request to update the status word has been received. Further included is updating the status word according to the first and second requests to form an updated status word and writing the updated status word to the cache memory. | 07-28-2011 |
20110197013 | CACHE SYSTEM - A cache system includes a primary cache memory configured to input and output data to and from a computation unit. The primary cache memory includes multi-port memory units, each including a storing unit that stores unit data having a first data size, a writing unit that simultaneously writes sequentially inputted plural unit data to consecutive locations of the storing unit, and an outputting unit that reads out and outputs unit data written in the storing unit. When data having a second data size that is an arbitrary multiple of the first data size, segmented into unit data, is written to the primary cache memory, the data is stored in different multi-port memory units by writing some of the sequential unit data to a subset of the multi-port memory units and writing the other sequential unit data to another subset of the multi-port memory units. | 08-11-2011 |
20110208894 | PHYSICAL ALIASING FOR THREAD LEVEL SPECULATION WITH A SPECULATION BLIND CACHE - A multiprocessor system includes nodes. Each node includes a data path that includes a core, a TLB, and a first level cache implementing disambiguation. The system also includes at least one second level cache and a main memory. For thread memory access requests, the core uses an address associated with an instruction format of the core. The first level cache uses an address format related to the size of the main memory plus an offset corresponding to hardware thread meta data. The second level cache uses a physical main memory address plus software thread meta data to store the memory access request. The second level cache accesses the main memory using the physical address with neither the offset nor the thread meta data after resolving speculation. In short, this system includes mapping of a virtual address to different physical addresses for value disambiguation for different threads. | 08-25-2011 |
20110231593 | VIRTUAL ADDRESS CACHE MEMORY, PROCESSOR AND MULTIPROCESSOR - An embodiment provides a virtual address cache memory including: a TLB virtual page memory configured to, when a rewrite to a TLB occurs, rewrite entry data; a data memory configured to hold cache data using a virtual page tag or a page offset as a cache index; a cache state memory configured to hold a cache state for the cache data stored in the data memory, in association with the cache index; a first physical address memory configured to, when the rewrite to the TLB occurs, rewrite a held physical address; and a second physical address memory configured to, when the cache data is written to the data memory after the occurrence of the rewrite to the TLB, rewrite a held physical address. | 09-22-2011 |
20110252180 | MEMORY CONTROLLER MAPPING ON-THE-FLY - Systems, methods, and devices for dynamically mapping and remapping memory when a portion of memory is activated or deactivated are provided. In accordance with an embodiment, an electronic device may include several memory banks, one or more processors, and a memory controller. The memory banks may store data in hardware memory locations and may be independently deactivated. The processors may request the data using physical memory addresses, and the memory controller may translate the physical addresses to hardware memory locations. The memory controller may use a first memory mapping function when a first number of memory banks is active and a second memory mapping function when a second number is active. When one of the memory banks is to be deactivated, the memory controller may copy data from only the memory bank that is to be deactivated to the active remainder of memory banks. | 10-13-2011 |
20110283040 | Multiple Page Size Segment Encoding - An approach identifies an amount of high order bits used to store a memory address in a memory address field that is included in a memory. This approach calculates at least one minimum number of low order bits not used to store the address with the calculation being based on the identified amount of high order bits. The approach retrieves a data element from one of the identified minimum number of low order bits of the address field and also retrieves a second data element from one of the identified minimum number of low order bits of the address field. | 11-17-2011 |
20110283041 | CACHE MEMORY AND CONTROL METHOD THEREOF - A cache memory comprises a data array that stores a cached block; a first address array that stores an address of the cached block; a second address array that stores an address of a first block to be removed from the data array when a cache miss occurs; and a control unit that transmits to a processor the first block stored in the data array as a cache hit block, when the address stored in the second address array results in a cache hit during a period before a second block which has caused the cache miss is read from a memory and written into the data array. | 11-17-2011 |
20110289256 | MEMORY BANKING SYSTEM AND METHOD TO INCREASE MEMORY BANDWIDTH VIA PARALLEL READ AND WRITE OPERATIONS - A cache memory and a tag memory are included in a banked memory system and used to effectively enable parallel write and read operations on each clock cycle, even though the memory banks consist of single-port devices that are not inherently capable of parallel write and read operations. | 11-24-2011 |
20110289257 | METHOD AND APPARATUS FOR ACCESSING CACHE MEMORY - A request for reading data from a memory location of a main memory is received, the memory location being identified by a physical memory address. In response to the request, a cache memory is accessed based on the physical memory address to determine whether the cache memory contains the data being requested. The data associated with the request is returned from the cache memory without accessing the memory location if there is a cache hit. The data associated with the request is returned from the main memory if there is a cache miss. In response to the cache miss, it is determined whether there have been a number of accesses within a predetermined period of time. A cache entry is allocated from the cache memory to cache the data if there have been a predetermined number of accesses within the predetermined period of time. | 11-24-2011 |
20110314202 | MANAGING CACHE DATA AND METADATA - Embodiments of the invention provide techniques for managing cache metadata providing a mapping between addresses on a storage medium (e.g., disk storage) and corresponding addresses on a cache device at which data items are stored. In some embodiments, cache metadata may be stored in a hierarchical data structure comprising a plurality of hierarchy levels. When a reboot of the computer is initiated, only a subset of the plurality of hierarchy levels may be loaded to memory, thereby expediting the process of restoring the cache metadata and thus startup operations. Startup may be further expedited by using cache metadata to perform operations associated with reboot. Thereafter, as requests to read data items on the storage medium are processed using cache metadata to identify addresses at which the data items are stored in cache, the identified addresses may be stored in memory. When the computer is later shut down, instead of having to transfer the entirety of the cache metadata from memory to storage, only the subset of the plurality of hierarchy levels and/or the identified addresses previously loaded to memory may be transferred (e.g., to the cache device), thereby expediting the shutdown of the computer. | 12-22-2011 |
20120030403 | Memory Module, Cache System and Address Conversion Method - A memory system including a non-volatile memory, a cache memory, a control circuit, and a data processing device is configured. High speed can be achieved by transferring data in the non-volatile memory to the cache memory and retaining it there. When the data in the non-volatile memory is transferred to the cache memory, error correction is performed so as to improve the reliability. Since the cache memory and the non-volatile memory can be accessed from the data processing device independently, usability is improved. The memory system, which includes a plurality of chips, is configured as a module in which the chips are stacked and wired by a ball grid array (BGA) and chip-to-chip wire bonding. | 02-02-2012 |
20120047311 | METHOD AND SYSTEM OF HANDLING NON-ALIGNED MEMORY ACCESSES - A method and system to facilitate full throughput operation of cache memory line split accesses in a device. By facilitating full throughput operation of cache memory line split accesses in the device, the device minimizes the performance and throughput loss associated with the handling of non-aligned cache memory accesses that cross two or more cache memory lines and/or page memory boundaries in one embodiment of the invention. When the device receives a non-aligned cache memory access request, the merge logic combines or merges the incoming data of a particular cache memory line from a data cache memory with the stored data of the preceding cache memory line of the particular cache memory line. | 02-23-2012 |
20120059971 | METHOD AND APPARATUS FOR HANDLING CRITICAL BLOCKING OF STORE-TO-LOAD FORWARDING - The present invention provides a method and apparatus for handling critical blocking of store-to-load forwarding. One embodiment of the method includes recording a load that matches an address of a store in a store queue before the store has valid data. The load is blocked because the store does not have valid data. The method also includes replaying the load in response to the store receiving valid data so that the valid data is forwarded from the store queue to the load. | 03-08-2012 |
20120096213 | CACHE MEMORY DEVICE, CACHE MEMORY CONTROL METHOD, PROGRAM AND INTEGRATED CIRCUIT - Provided is a cache memory device that performs a line size determination process for determining a refill size, in advance of a refill process that is performed at cache miss time. According to the line size determination process, the number of reads/writes of a management target line that belongs to a set is acquired. | 04-19-2012 |
20120144089 | SCATTER/GATHER ACCESSING MULTIPLE CACHE LINES IN A SINGLE CACHE PORT - Methods and apparatus are disclosed for accessing multiple data cache lines for scatter/gather operations. Embodiments of the apparatus may comprise address generation logic to generate an address from an index of a set of indices for each of a set of corresponding mask elements having a first value. Line or bank match ordering logic matches addresses in the same cache line or different banks, and orders an access sequence to permit a group of addresses in multiple cache lines and different banks. Address selection logic directs the group of addresses to corresponding different banks in a cache to access data elements in multiple cache lines corresponding to the group of addresses in a single access cycle. A disassembly/reassembly buffer orders the data elements according to their respective bank/register positions, and a gather/scatter finite state machine changes the values of corresponding mask elements from the first value to a second value. | 06-07-2012 |
20120166703 | METHOD AND SYSTEM FOR CACHING ATTRIBUTE DATA FOR MATCHING ATTRIBUTES WITH PHYSICAL ADDRESSES - A method for caching attribute data for matching attributes with physical addresses. The method includes storing a plurality of attribute entries in a memory, wherein the memory is configured to provide at least one attribute entry when accessed with a physical address, and wherein the attribute entry provided describes characteristics of the physical address. | 06-28-2012 |
20120179853 | MEMORY ADDRESS TRANSLATION - The present disclosure includes devices, systems, and methods for memory address translation. One or more embodiments include a memory array and a controller coupled to the array. The array includes a first table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a data segment stored in the array and a logical address. The controller includes a second table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a record in the first table and a logical address. The controller also includes a third table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a record in the second table and a logical address. | 07-12-2012 |
20120198121 | METHOD AND APPARATUS FOR MINIMIZING CACHE CONFLICT MISSES - A method for minimizing cache conflict misses is disclosed. A translation table capable of facilitating the translation of a virtual address to a real address during a cache access is provided. The translation table includes multiple entries, and each entry of the translation table includes a page number field and a hash value field. A hash value is generated from a first group of bits within a virtual address, and the hash value is stored in the hash value field of an entry within the translation table. In response to a match on the entry within the translation table during a cache access, the hash value of the matched entry is retrieved from the translation table, and the hash value is concatenated with a second group of bits within the virtual address to form a set of indexing bits to index into a cache set. (A sketch of this hashed set index appears after the table.) | 08-02-2012 |
20120198122 | GUEST TO NATIVE BLOCK ADDRESS MAPPINGS AND MANAGEMENT OF NATIVE CODE STORAGE - A method for managing mappings of storage on a code cache for a processor. The method includes storing a plurality of guest address to native address mappings as entries in a conversion look aside buffer, wherein the entries indicate guest addresses that have corresponding converted native addresses stored within a code cache memory, and receiving a subsequent request for a guest address at the conversion look aside buffer. The conversion look aside buffer is indexed to determine whether there exists an entry that corresponds to the index, wherein the index comprises a tag and an offset that is used to identify the entry that corresponds to the index. Upon a hit on the tag, the corresponding entry is accessed to retrieve a pointer to the code cache memory corresponding block of converted native instructions. The corresponding block of converted native instructions are fetched from the code cache memory for execution. | 08-02-2012 |
20120203950 | ADDRESS TRANSLATION CACHING AND I/O CACHE PERFORMANCE IMPROVEMENT IN VIRTUALIZED ENVIRONMENTS - Methods and apparatus relating to improving address translation caching and/or input/output (I/O) cache performance in virtualized environments are described. In one embodiment, a hint provided by an endpoint device may be utilized to update information stored in an I/O cache. Such information may be utilized for implementation of a more efficient replacement policy in an embodiment. Other embodiments are also disclosed. | 08-09-2012 |
20120210041 | APPARATUS, SYSTEM, AND METHOD FOR CACHING DATA - An apparatus, system, and method are disclosed for caching data. A storage request module detects an input/output (“I/O”) request for a storage device cached by solid-state storage media of a cache. A direct mapping module references a single mapping structure to determine that the cache comprises data of the I/O request. The single mapping structure maps each logical block address of the storage device directly to a logical block address of the cache. The single mapping structure maintains a fully associative relationship between logical block addresses of the storage device and physical storage addresses on the solid-state storage media. A cache fulfillment module satisfies the I/O request using the cache in response to the direct mapping module determining that the cache comprises at least one data block of the I/O request. | 08-16-2012 |
20120215959 | Cache Memory Controlling Method and Cache Memory System For Reducing Cache Latency - Disclosed is a cache memory controlling method for reducing cache latency. The method includes sending a target address to a tag memory storing tag data and sending the target address to a second group data memory that has a latency larger than that of a first group data memory. The method further includes generating and outputting a cache signal that indicates whether the first group data memory includes target data and that indicates whether the second group data memory includes target data. The target address is sent to the second group data memory before the output of the cache signal. With an exemplary embodiment, cache latency is minimized or reduced, and the performance of a cache memory system is improved. | 08-23-2012 |
20120233377 | Cache System and Processing Apparatus - According to an embodiment, a cache system includes a volatile cache memory, a nonvolatile cache memory, an address decoder, and an evacuation unit. The nonvolatile cache memory has a capacity equal to that of the volatile cache memory. The address decoder designates a same line to the volatile cache memory and the nonvolatile cache memory. The evacuation unit stores data which is inputted from the volatile cache memory and outputs the stored data to the volatile cache memory. | 09-13-2012 |
20120297109 | FACILITATING DATA COHERENCY USING IN-MEMORY TAG BITS AND FAULTING STORES - Fine-grained detection of data modification of original data is provided by associating separate guard bits with granules of memory storing the original data from which translated data has been obtained. The guard bits facilitate indicating whether the original data stored in the associated granule is indicated as protected. The guard bits are set and cleared by special-purpose instructions. Responsive to initiating a data store operation to modify the original data, the associated guard bit(s) are checked to determine whether the original data is indicated as protected. Responsive to the checking indicating that a guard bit is set for the associated original data, the data store operation to modify the original data is faulted and the translated data is discarded, thereby facilitating data coherency between the original data and the translated data. | 11-22-2012 |
20120297110 | METHOD AND APPARATUS FOR IMPROVING COMPUTER CACHE PERFORMANCE AND FOR PROTECTING MEMORY SYSTEMS AGAINST SOME SIDE CHANNEL ATTACKS - A physical cache memory that is divided into one or more virtual segments using multiple circuits to decode addresses is provided. An address mapping and an address decoder are selected for each virtual segment. The address mapping comprises two or more address bits as set indexes for the virtual segment, and the selected address bits are different for each virtual segment. A cache address decoder is provided for each virtual segment to enhance execution performance of programs or to protect against side channel attacks. Each physical cache address decoder comprises an address mask register to extract the selected address bits to locate objects in the virtual segment. The foregoing can be implemented as a method or apparatus for protecting against a side channel attack. | 11-22-2012 |
20120303857 | Checkpointed Tag Prefetcher - A cache management method using checkpoint tags in checkpoint mode includes steps of: receiving a request to save data; fetching at least one cache block including the data from cache memory; writing the data from the at least one cache block into the data array; writing a physical address and metadata of the cache block into an array of cache memory tags; and upon receipt of a restore request: fetching an identifier for the at least one cache block stored in the checkpoint tag array; reloading the cache memory with the at least one cache block in the checkpoint tag array; and switching to normal mode. | 11-29-2012 |
20120324141 | SYSTEMS AND METHODS PROVIDING WEAR LEVELING USING DYNAMIC RANDOMIZATION FOR NON-VOLATILE MEMORY - Systems and methods for dynamically remapping elements of a set to another set based on random keys. Application of said systems and methods to dynamically mapping regions of memory space of non-volatile memory, e.g., phase-change memory, can provide a wear-leveling technique. The wear leveling technique can be effective under normal execution of typical applications, and in worst-case scenarios including the presence of malicious exploits and/or compromised operating systems: constantly migrating the physical location of data inside the PCM avoids information leakage and increases security; random relocation of data distributes memory requests across the physical memory space, which increases durability; and such wear leveling schemes can be implemented to provide fine-grained wear leveling without overly burdensome hardware overhead, e.g., a look-up table. (A sketch of randomized remapping appears after the table.) | 12-20-2012 |
20120324142 | LIST BASED PREFETCH - A list prefetch engine improves a performance of a parallel computing system. The list prefetch engine receives a current cache miss address. The list prefetch engine evaluates whether the current cache miss address is valid. If the current cache miss address is valid, the list prefetch engine compares the current cache miss address and a list address. A list address represents an address in a list. A list describes an arbitrary sequence of prior cache miss addresses. The prefetch engine prefetches data according to the list, if there is a match between the current cache miss address and the list address. (The list prefetch sketch after the table applies to this entry as well.) | 12-20-2012 |
20130019047 | MEMORY CONFLICTS LEARNING CAPABILITY - An apparatus having a memory and a circuit is disclosed. The memory may (i) assert a first signal in response to detecting a conflict between at least two addresses requesting access to a block at a first time, (ii) generate a second signal in response to a cache miss caused by a first address requesting access to the block at a second time and (iii) store a line fetched in response to the cache miss in another block by adjusting the first address by an offset. The second time is generally after the first time. The circuit may (i) generate the offset in response to the assertion of the first signal and (ii) present the offset in a third signal to the memory in response to the assertion of the second signal corresponding to reception of the first address at the second time. The offset is generally associated with the first address. | 01-17-2013 |
20130024597 | TRACKING MEMORY ACCESS FREQUENCIES AND UTILIZATION - A method is provided including recording, in a counter of a set of counters, a number of cache accesses for a page corresponding to a translation lookaside buffer (TLB) page table entry, where the counters are physically grouped together and physically separate from the TLB. The method also includes recording the number of cache accesses from the corresponding counter to a field of the page table responsive to an event. An apparatus is provided that includes a memory unit and a set of counters coupled to the one memory unit, the set of counters comprises one or more counters that are physically grouped together and are adapted to store a value indicative of a number of memory page accesses. The apparatus includes a cache coupled to the set of counters. Also provided is a computer readable storage device encoded with data for adapting a manufacturing facility to create the apparatus. | 01-24-2013 |
20130166814 | COMPUTER READABLE RECORDING MEDIUM HAVING STORED THEREIN INFORMATION PROCESSING PROGRAM, INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD - A program for causing an information processing apparatus to execute a process of a virtual calculator, the process including: judging, when a switching of a virtual address space being a processing target of a virtual calculation apparatus occurs, whether or not there exists a physical calculation apparatus in which cache information of a physical address space corresponding to the virtual address space of the switching destination is accumulated; selecting that physical calculation apparatus when it exists, and selecting a physical calculation apparatus in which no cache information is accumulated when none exists; and assigning the selected physical calculation apparatus to the virtual calculation apparatus in which the switching of the virtual address space being a processing target has occurred. | 06-27-2013 |
20130185473 | Method for Filtering Traffic to a Physically-Tagged Data Cache - Embodiments of a data cache are disclosed that substantially decrease the number of accesses to a physically-tagged tag array of the data cache. In general, the data cache includes a data array that stores data elements, a physically-tagged tag array, and a virtually-tagged tag array. In one embodiment, the virtually-tagged tag array receives a virtual address. If there is a match for the virtual address in the virtually-tagged tag array, the virtually-tagged tag array outputs, to the data array, a way stored in the virtually-tagged tag array for the virtual address. In addition, in one embodiment, the virtually-tagged tag array disables the physically-tagged tag array. Using the way output by the virtually-tagged tag array, a desired data element in the data array is addressed. | 07-18-2013 |
20130262736 | MEMORY TYPES FOR CACHING POLICIES - The present system enables receiving a request from an I/O device to translate a virtual address to a physical address in order to access a page in system memory. One or more memory attributes of the page, defining a cacheability characteristic of the page, are identified. A response including the physical address and the cacheability characteristic of the page is sent to the I/O device. | 10-03-2013 |
20130275649 | Access Optimization Method for Main Memory Database Based on Page-Coloring - An access optimization method for a main memory database based on page-coloring is described. An access sequence of all data pages of a weak locality dataset is ordered by page-color, and all the data pages are grouped by page-color, and then all the data pages of the weak locality dataset are scanned in a sequence of page-color grouping. Further, a number of memory pages having the same page-color are preset as a page-color queue, in which the page-color queue serves as a memory cache before a memory page is loaded into a CPU cache; the data page of the weak locality dataset first enters the page-color queue in an asynchronous mode, and is then loaded into the CPU cache to complete data processing. Accordingly, cache conflicts between datasets with different data locality strengths can be effectively reduced. | 10-17-2013 |
20140006681 | MEMORY MANAGEMENT IN A VIRTUALIZATION ENVIRONMENT | 01-02-2014 |
20140082252 | Combined Two-Level Cache Directory - Responsive to receiving a logical address for a cache access, a mechanism looks up a first portion of the logical address in a local cache directory for a local cache. The local cache directory returns a set identifier for each set in the local cache directory. Each set identifier indicates a set within a higher level cache directory. The mechanism looks up a second portion of the logical address in the higher level cache directory and compares each absolute address value received from the higher level cache directory to an absolute address received from a translation look-aside buffer to generate a higher level cache hit signal. The mechanism compares the higher level cache hit signal to each set identifier to generate a local cache hit signal and responsive to the local cache hit signal indicating a local cache hit, accesses the local cache based on the local cache hit signal. | 03-20-2014 |
20140115225 | CACHE MANAGEMENT BASED ON PHYSICAL MEMORY DEVICE CHARACTERISTICS - A processor unit removes, responsive to obtaining a new address, an entry from a memory of a type of memory based on a comparison of a performance of the type of memory to different performances, each of the different performances associated with a number of other types of memory. | 04-24-2014 |
20140115226 | CACHE MANAGEMENT BASED ON PHYSICAL MEMORY DEVICE CHARACTERISTICS - A processor unit removes, responsive to obtaining a new address, an entry from a memory of a type of memory based on a comparison of a performance of the type of memory to different performances, each of the different performances associated with a number of other types of memory. | 04-24-2014 |
20140149632 | PREFETCHING ACROSS PAGE BOUNDARIES IN HIERARCHICALLY CACHED PROCESSORS - Processors and methods for preventing lower level prefetch units from stalling at page boundaries. An upper level prefetch unit closest to the processor core issues a preemptive request for a translation of the next page in a given prefetch stream. The upper level prefetch unit sends the translation to the lower level prefetch units prior to the lower level prefetch units reaching the end of the current page for the given prefetch stream. When the lower level prefetch units reach the boundary of the current page, instead of stopping, these prefetch units can continue to prefetch by jumping to the next physical page number provided in the translation. | 05-29-2014 |
20140189191 | APPARATUS AND METHOD FOR MEMORY-MAPPED REGISTER CACHING - A processor is described comprising: an architectural register file implemented as a combination of a register file cache and an architectural register region within a level 1 (L1) data cache, and a data location table (DLT) to store data indicating a location of each architectural register within the register file cache and/or the architectural register region within the L1 data cache. | 07-03-2014 |
20140189192 | APPARATUS AND METHOD FOR A MULTIPLE PAGE SIZE TRANSLATION LOOKASIDE BUFFER (TLB) - An apparatus and method for implementing a multiple page size translation lookaside buffer (TLB). For example, a method according to one embodiment comprises: reading a first group of bits and a second group of bits from a linear address; determining whether the linear address is associated with a large page size or a small page size; identifying a first cache set using the first group of bits if the linear address is associated with a first page size and identifying a second cache set using the second group of bits if the linear address is associated with a second page size; and identifying a first cache way if the linear address is associated with a first page size and identifying a second cache way if the linear address is associated with a second page size. | 07-03-2014 |
20140189193 | IMAGE FORMING APPARATUS AND METHOD OF TRANSLATING VIRTUAL MEMORY ADDRESS INTO PHYSICAL MEMORY ADDRESS - An image forming apparatus includes a function unit to perform functions of the image forming apparatus, and a control unit to control the function unit to perform the functions of the image forming apparatus. The control unit includes a processor core to operate in a virtual memory address, a main memory to operate in a physical memory address and store data used in the functions of the image forming apparatus, and a plurality of input/output (I/O) logics to operate in the virtual memory address and control at least one of the functions performed by the image forming apparatus. Each of the plurality of I/O logics translates the virtual memory address into the physical memory address corresponding to the virtual memory address and accesses the main memory. | 07-03-2014 |
20140237157 | SYSTEM AND METHOD FOR PROVIDING AN ADDRESS CACHE FOR MEMORY MAP LEARNING - A system for interfacing with a co-processor or input/output device is disclosed. According to one embodiment, the system provides a one-hot address cache comprising a plurality of one-hot addresses and a host interface to a host memory controller of a host system. Each one-hot address of the plurality of one-hot addresses has a bit width. The plurality of one-hot addresses is configured to store the data associated with a corresponding memory address in an address space of a memory system and provide the data to the host memory controller during a memory map learning process. The plurality of one-hot addresses comprises a zero address of the bit width and a plurality of non-zero addresses of the bit width, and each one-hot address of the plurality of non-zero addresses of the one-hot address cache has only one non-zero address bit of the bit width. | 08-21-2014 |
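The one-hot address set in the entry above is easy to enumerate: for a bit width W there is one zero address plus W non-zero addresses, each with exactly one bit set. A small self-contained example, assuming a 16-bit width:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const unsigned width = 16;            /* assumed bit width */
    printf("0x%04x\n", 0u);               /* the zero address */
    for (unsigned b = 0; b < width; b++)
        printf("0x%04x\n", 1u << b);      /* exactly one non-zero bit */
    return 0;
}
```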
20140281115 | SHARED CACHE USED TO PROVIDE ZERO COPY MEMORY MAPPED DATABASE - A technique for concurrently accessing a data set includes initializing a shared cache with a column data store configured to store an expected data set in columns and creating a memory map for accessing the physical memory location in the shared cache. Other operations include mapping the applications' data access requests to the shared cache with the memory map. One advantage of the disclosed technique is that only one instance of the expected data set is stored in memory, so each application is not required to create an additional instance of the expected data set in the application's memory address space. Therefore, larger expected data sets may be entirely stored in memory without limiting the number of applications running concurrently. | 09-18-2014 |
20140281116 | Method and Apparatus to Speed up the Load Access and Data Return Speed Path Using Early Lower Address Bits - A microprocessor implemented method for processing a load instruction is disclosed. The method comprises computing a virtual address corresponding to the load instruction. Next, it comprises performing a lookup of a set associative translation lookaside buffer (TLB) and a set associative data cache memory in parallel using early calculated lower address bits of the virtual address. Subsequently, it comprises retrieving a set of entries from the TLB corresponding to a first group of lower address bits transmitted to the TLB, wherein the set of entries comprise a plurality of virtual addresses and corresponding physical addresses. Further, it comprises finding a matching entry for the virtual address in the set of entries using upper bits of the virtual address, wherein the matching entry comprises a physical address corresponding to the virtual address. Finally, it comprises finding a matching entry in the data cache memory using the physical address. | 09-18-2014 |
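A simplified C sketch of the two-stage match that follows the parallel set selection described above: the TLB set and cache set are both indexed by the early lower address bits (here, the caller is presumed to have already picked both sets), the upper virtual bits select the TLB way, and the resulting physical address selects the cache way. Structure names, way counts, and the 4 KiB page granularity are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_WAYS   4
#define CACHE_WAYS 8

typedef struct { uint64_t vtag; uint64_t pframe; bool valid; } tlb_entry_t;
typedef struct { uint64_t ptag; bool valid; } cache_line_t;

/* Both sets were selected in parallel using early lower bits of the virtual
 * address; this routine performs the two way-selection matches. */
bool load_lookup(uint64_t va,
                 const tlb_entry_t tlb_set[TLB_WAYS],
                 const cache_line_t cache_set[CACHE_WAYS],
                 uint64_t *pa_out)
{
    /* 1. Match the upper virtual bits against the TLB set's entries. */
    uint64_t vtag = va >> 12;
    uint64_t pa;
    int w;
    for (w = 0; w < TLB_WAYS; w++)
        if (tlb_set[w].valid && tlb_set[w].vtag == vtag)
            break;
    if (w == TLB_WAYS)
        return false;                         /* TLB miss */
    pa = (tlb_set[w].pframe << 12) | (va & 0xFFFu);

    /* 2. Use the physical address to find the matching cache way
     * (tag granularity simplified to the page frame). */
    for (w = 0; w < CACHE_WAYS; w++) {
        if (cache_set[w].valid && cache_set[w].ptag == (pa >> 12)) {
            *pa_out = pa;
            return true;
        }
    }
    return false;
}
```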
20150052286 | RETRIEVAL HASH INDEX - Systems and methods are provided that facilitate retrieval of a hash index in an electronic device. The system contains an addressing component that generates a hash index as a function of an exclusive-or identity. The addressing component can retrieve the hash index as a function of a tag value. Accordingly, required storage area can be reduced and electronic devices can be more efficient. | 02-19-2015 |
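The exclusive-or identity the entry above leans on is that a ^ b ^ b == a: an index computed as addr_bits ^ tag can be re-derived from the stored tag, so it needs no storage of its own. A tiny runnable illustration, with arbitrary field values:

```c
#include <stdint.h>
#include <assert.h>

static inline uint32_t xor_index(uint32_t addr_bits, uint32_t tag)
{
    return addr_bits ^ tag;   /* hash index generated by XOR */
}

int main(void)
{
    uint32_t addr = 0x2A5Cu, tag = 0x13F0u;   /* arbitrary example fields */
    uint32_t index = xor_index(addr, tag);
    /* (addr ^ tag) ^ tag == addr: index and tag are mutually recoverable,
     * so only one of the two must be kept, reducing storage area. */
    assert((index ^ tag) == addr);
    return 0;
}
```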
20150067230 | SYSTEMS AND METHODS FOR FASTER READ AFTER WRITE FORWARDING USING A VIRTUAL ADDRESS - Methods for read after write forwarding using a virtual address are disclosed. A method includes determining when a virtual address has been remapped from corresponding to a first physical address to a second physical address and determining if all stores occupying a store queue before the remapping have been retired from the store queue. Loads that are younger than the stores that occupied the store queue before the remapping are prevented from being dispatched and executed until the stores that occupied the store queue before the remapping have left the store queue and become globally visible. | 03-05-2015 |
20150095545 | METHOD AND APPARATUS FOR CONTROLLING CACHE MEMORY - A method of controlling a cache memory includes receiving location information of one piece of data included in a data block and size information of the data block; mapping the data block onto cache memory by using the location information and the size information; and selecting at least one unit cache out of unit caches included in the cache memory based on the mapping result. | 04-02-2015 |
20150106545 | Computer Processor Employing Cache Memory Storing Backless Cache Lines - A computer processing system with a hierarchical memory system having at least one cache and physical memory, and a processor having execution logic that generates memory requests that are supplied to the hierarchical memory system. The at least one cache stores a plurality of cache lines including at least one backless cache line. | 04-16-2015 |
20150113199 | MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS - In one embodiment of the present invention, a method includes switching between a first address space and a second address space; determining if the second address space exists in a list of address spaces; and maintaining entries of the first address space in a translation buffer after the switching. In such manner, overhead associated with such a context switch may be reduced. | 04-23-2015 |
20150113200 | MAINTAINING PROCESSOR RESOURCES DURING ARCHITECTURAL EVENTS - In one embodiment of the present invention, a method includes switching between a first address space and a second address space; determining if the second address space exists in a list of address spaces; and maintaining entries of the first address space in a translation buffer after the switching. In such manner, overhead associated with such a context switch may be reduced. | 04-23-2015 |
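A hedged sketch of the list check both of these sibling applications describe: on a context switch, translation-buffer entries of the outgoing address space are retained whenever the incoming space is already known, so only genuinely new spaces force a flush. The list size and names are illustrative.

```c
#include <stdint.h>
#include <stdbool.h>

#define MAX_SPACES 8   /* assumed capacity of the address-space list */

typedef struct {
    uint64_t spaces[MAX_SPACES];
    int      count;
} space_list_t;

/* On a switch to new_space: if it is already in the list, the translation
 * buffer keeps the entries of the outgoing space; otherwise record it and
 * report that a flush is needed. */
bool switch_needs_flush(space_list_t *l, uint64_t new_space)
{
    for (int i = 0; i < l->count; i++)
        if (l->spaces[i] == new_space)
            return false;               /* known space: retain entries */
    if (l->count < MAX_SPACES)
        l->spaces[l->count++] = new_space;  /* simplistic: no eviction */
    return true;
}
```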
20150317249 | MEMORY ACCESS MONITOR - For each access request received at a shared cache of the data processing device, a memory access pattern (MAP) monitor predicts which of the memory banks, and corresponding row buffers, would be accessed by the access request if the requesting thread were the only thread executing at the data processing device. By recording predicted accesses over time for a number of access requests, the MAP monitor develops a pattern of predicted memory accesses by executing threads. The pattern can be employed to assign resources at the shared cache, thereby managing memory more efficiently. | 11-05-2015 |
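A compact illustration of the per-request prediction step above: extracting the bank and row buffer a request would touch if its thread ran alone. The bit positions are placeholders; real DRAM address maps vary by controller.

```c
#include <stdint.h>

#define BANKS 8u   /* assumed bank count */

typedef struct { uint32_t bank; uint32_t row; } predicted_access_t;

/* Predict which memory bank and row buffer this physical address would
 * access if the requesting thread were the only thread executing. */
predicted_access_t predict_access(uint64_t pa)
{
    predicted_access_t p;
    p.bank = (uint32_t)((pa >> 13) & (BANKS - 1u)); /* assumed bank bits */
    p.row  = (uint32_t)(pa >> 16);                  /* assumed row bits  */
    return p;
}
```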
20150324289 | Data Access System, Memory Sharing Device, and Data Reading Method - A control apparatus sends a data access request to a first memory sharing device, wherein the data access request includes an address of target data. The first memory sharing device determines, according to the address of the target data and an address list, that the target data is stored in a second memory sharing device, and forwards the data access request to the second memory sharing device. The address list includes corresponding relationships between addresses and memory sharing devices, and first addresses corresponding to the first memory sharing device are different from second addresses corresponding to the second memory sharing device. The second memory sharing device obtains the target data based on the address of the target data and sends the target data to the first memory sharing device, which then forwards the target data to the control apparatus. | 11-12-2015 |
20150339228 | MEMORY CONTROLLERS EMPLOYING MEMORY CAPACITY COMPRESSION, AND RELATED PROCESSOR-BASED SYSTEMS AND METHODS - Aspects disclosed herein include memory controllers employing memory capacity compression, and related processor-based systems and methods. In certain aspects, compressed memory controllers are employed that can provide memory capacity compression. In some aspects, a line-based memory capacity compression scheme can be employed where additional translation of a physical address (PA) to a physical buffer address is performed to allow compressed data in a system memory at the physical buffer address for efficient compressed data storage. A translation lookaside buffer (TLB) may also be employed to store TLB entries comprising PA tags corresponding to a physical buffer address in the system memory to more efficiently perform the translation of the PA to the physical buffer address in the system memory. In certain aspects, a line-based memory capacity compression scheme, a page-based memory capacity compression scheme, or a hybrid line-page-based memory capacity compression scheme can be employed. | 11-26-2015 |
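A minimal sketch of the extra translation layer in the line-based scheme above: a small TLB of PA tags yields the physical buffer address at which the compressed line lives. Entry count, tag granularity, and names are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

#define TLB_ENTRIES 16   /* assumed */

typedef struct {
    uint64_t pa_tag;       /* physical address tag */
    uint64_t buffer_addr;  /* physical buffer address of the compressed line */
    bool     valid;
} comp_tlb_entry_t;

/* Look up the buffer address holding the compressed line for a PA; a miss
 * would fall back to the full translation structure in system memory. */
bool pa_to_buffer(const comp_tlb_entry_t tlb[TLB_ENTRIES], uint64_t pa,
                  uint64_t *buffer_addr)
{
    uint64_t tag = pa >> 6;   /* assumed 64-byte line granularity */
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].pa_tag == tag) {
            *buffer_addr = tlb[i].buffer_addr;
            return true;
        }
    }
    return false;
}
```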
20150347044 | SYNCHRONIZING UPDATES OF PAGE TABLE STATUS INDICATORS AND PERFORMING BULK OPERATIONS - A synchronization capability synchronizes updates to page tables by forcing updates in cached entries to be made visible in memory (i.e., in in-memory page table entries). A synchronization instruction is used that ensures that, after the instruction has completed, updates to the cached entries that occurred prior to the synchronization instruction are made visible in memory. Synchronization may be used to facilitate memory management operations, such as bulk operations used to change a large section of memory to read-only, operations to manage a free list of memory pages, and/or operations associated with terminating processes. | 12-03-2015 |
20150347306 | SYNCHRONIZING UPDATES OF PAGE TABLE STATUS INDICATORS IN A MULTIPROCESSING ENVIRONMENT - A synchronization capability synchronizes updates to page tables by forcing updates in cached entries to be made visible in memory (i.e., in in-memory page table entries). A synchronization instruction is used that ensures that, after the instruction has completed, updates to the cached entries that occurred prior to the synchronization instruction are made visible in memory. Synchronization may be used to facilitate memory management operations, such as bulk operations used to change a large section of memory to read-only, operations to manage a free list of memory pages, and/or operations associated with terminating processes. | 12-03-2015 |
20150347307 | CACHE ARCHITECTURE - The present disclosure includes apparatuses and methods for a cache architecture. An example apparatus that includes a cache architecture according to the present disclosure can include an array of memory cells configured to store multiple cache entries per page of memory cells; and sense circuitry configured to determine whether cache data corresponding to a request from a cache controller is located at a location in the array corresponding to the request, and return a response to the cache controller indicating whether cache data is located at the location in the array corresponding to the request. | 12-03-2015 |
20150347310 | Storage Controller and Method for Managing Metadata in a Cache Store - A cache controller coupled to a cache store supported by a solid-state memory element uses a metadata update process that reduces write amplification caused by writing both cache data and metadata to the solid-state memory element. The cache controller partitions the solid-state memory element to include a metadata portion, a host data or cache portion and a log portion. Host write requests that include “hot” data are processed and recorded by the cache controller. The cache controller maintains first and second maps. A log thread combines multiple metadata updates in a single log entry block. Pending metadata updates are checked to determine when a commit threshold is reached. Thereafter, the pending metadata updates are written to the solid-state memory element and the maps are updated. | 12-03-2015 |
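A rough C sketch of the log-batching idea above: multiple pending metadata updates accumulate in one log entry block and are persisted in a single write once a commit threshold is reached, trimming write amplification. The record layout and thresholds are invented for illustration.

```c
#include <stdint.h>

#define UPDATES_PER_LOG_BLOCK 32u
#define COMMIT_THRESHOLD      24u   /* assumed */

typedef struct { uint64_t cache_addr; uint64_t host_lba; } md_update_t;

typedef struct {
    md_update_t pending[UPDATES_PER_LOG_BLOCK];
    uint32_t    count;
} log_block_t;

/* Queue one metadata update; the caller must commit when this returns 1,
 * which keeps count below the block capacity. */
int log_update(log_block_t *blk, md_update_t u)
{
    blk->pending[blk->count++] = u;
    return blk->count >= COMMIT_THRESHOLD;
}

/* One write persists many pending updates; the in-memory maps would then
 * be updated and the block reused. */
void commit_log(log_block_t *blk, void (*write_block)(const log_block_t *))
{
    write_block(blk);
    blk->count = 0;
}
```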
20150356012 | DATA FLUSH OF GROUP TABLE - A group table includes one or more groups. A synch command including a synch address range is received. The order in which data of the one or more groups is flushed is determined by whether the synch address range is included in the one or more groups. | 12-10-2015 |
20150356014 | DYNAMICALLY ADJUSTING THE HARDWARE STREAM PREFETCHER PREFETCH AHEAD DISTANCE - An apparatus for prefetching data for a processor is presented. The apparatus may include a memory, a first counter, a second counter, and a control circuit. The memory may include a table with at least one entry, which may include an expected address of a next memory access and a next address from which to fetch data, wherein the next address differs from the expected address by an offset value. The entry may also include a maximum limit for the offset value. The first counter may increment responsive to an address of a memory access matching the expected address. The second counter may increment responsive to the address of the memory access resulting in a cache miss. The control circuit may be configured to increase the maximum limit of the offset value dependent upon a value of the second counter. | 12-10-2015 |
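A minimal sketch of the two-counter scheme above, using assumed names and thresholds: hits on the expected address advance the stream and grow the prefetch-ahead offset up to its limit, while accumulated cache misses raise that limit.

```c
#include <stdint.h>

#define MISS_THRESHOLD 8u   /* assumed: misses needed to raise the limit */

typedef struct {
    uint64_t expected_addr;   /* expected address of the next memory access */
    uint64_t next_fetch_addr; /* next address to prefetch from */
    uint32_t offset;          /* current prefetch-ahead distance */
    uint32_t max_offset;      /* maximum limit for the offset */
} stream_entry_t;

typedef struct {
    uint32_t hits;    /* first counter: accesses matching expected_addr */
    uint32_t misses;  /* second counter: accesses that missed the cache */
} stream_counters_t;

void observe_access(stream_entry_t *e, stream_counters_t *c,
                    uint64_t addr, int was_cache_miss, uint32_t stride)
{
    if (addr == e->expected_addr) {
        c->hits++;
        e->expected_addr += stride;
        /* Stay ahead of the demand stream, but never past the limit. */
        if (e->offset < e->max_offset)
            e->offset += stride;
        e->next_fetch_addr = e->expected_addr + e->offset;
    }
    if (was_cache_miss && ++c->misses >= MISS_THRESHOLD) {
        c->misses = 0;
        e->max_offset += stride;   /* raise the prefetch-ahead ceiling */
    }
}
```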
20150378891 | MANAGING READ TAGS IN A TRANSACTIONAL MEMORY - Managing cache evictions during transactional execution of a process. Based on initiating transactional execution of a memory data accessing instruction, memory data is fetched from a memory location, the memory data to be loaded as a new line into a cache entry of the cache. Based on determining that a threshold number of cache entries have been marked as read-set cache lines, it is determined whether a read-set cache entry can be replaced, by identifying a cache entry that is a read-set cache line for the transaction and that contains memory data from a memory address within a predetermined non-conflict address range. The identified cache entry of the transaction is then invalidated, the fetched memory data is loaded into the identified cache entry, and the identified cache entry is marked as a read-set cache line of the transaction. | 12-31-2015 |
20150378904 | ALLOCATING READ BLOCKS TO A THREAD IN A TRANSACTION USING USER SPECIFIED LOGICAL ADDRESSES - A processor in a multi-processor configuration is configured to execute an instruction that specifies a virtual address range to be monitored to protect reads in a transaction. The processor translates the virtual address range to a series of real pages. The real starting address and ending address pairs for each real page are stored for use later on to resolve a potential cross-interrogation (XI) conflict with a real address on the XI bus. | 12-31-2015 |
20160004457 | Buffered Automated Flash Controller Connected Directly to Processor Memory Bus - A mechanism is provided for buffer linking in a buffered solid state drive controller. Responsive to a buffered flash memory module receiving, from a memory bus of a processor, a memory command specifying a write operation, the mechanism initializes a first memory buffer in the buffered flash memory module. The mechanism associates the first memory buffer with an address of the write operation. The mechanism performs a compare operation that compares the addresses immediately preceding and following the address associated with the first memory buffer against the addresses of a plurality of buffers. The mechanism assigns a link tag to the first memory buffer and to at least one buffer identified in the compare operation to form a linked buffer set. The mechanism writes to the first memory buffer based on the memory command. The mechanism builds at least one input/output command to persist contents of the linked buffer set and writes the contents of the linked buffer set to at least one solid state drive according to the at least one input/output command. | 01-07-2016 |
20160004644 | Storage Controller and Method for Managing Modified Data Flush Operations From a Cache - A storage controller maintaining a cache manages modified data flush operations. A set-associative map or relationship between individual cache lines in the cache and a corresponding portion of the host managed or source data store is generated in such a way that a quotient can be used to identify modified data in the cache in the order of the source data's logical block addresses. The storage controller uses a collision bitmap, a dirty bit map and a flush table when flushing data from the cache. The storage controller selects a quotient and identifies modified cache lines in the cache identified by the quotient. As long as the quotient remains the same, the storage controller flushes or transfers the modified cache lines to the data store. Otherwise, when the quotient is not the same, the data in the cache is skipped. A linked list is used to traverse skipped cache lines. | 01-07-2016 |
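A sketch of the quotient trick above under assumed parameters: with set = lba % NUM_SETS and quotient = lba / NUM_SETS, all lines sharing a quotient map to consecutive logical block addresses, so flushing one quotient's dirty lines walks the source volume in LBA order.

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_SETS 1024u   /* assumed cache geometry */

typedef struct {
    bool     dirty;      /* dirty-bitmap entry for this cache line */
    uint32_t quotient;   /* source_lba / NUM_SETS, recorded at insertion */
} line_meta_t;

/* Flush every modified line whose quotient matches the selected one; lines
 * with other quotients are skipped (a linked list would revisit them). */
void flush_quotient(line_meta_t lines[NUM_SETS], uint32_t quotient,
                    void (*write_back)(uint32_t lba))
{
    for (uint32_t set = 0; set < NUM_SETS; set++) {
        if (lines[set].dirty && lines[set].quotient == quotient) {
            write_back(quotient * NUM_SETS + set);  /* reconstructed LBA */
            lines[set].dirty = false;
        }
    }
}
```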
20160019157 | Method and Apparatus For Flexible Cache Partitioning By Sets And Ways Into Component Caches - Aspects include computing devices, systems, and methods for partitioning a system cache by sets and ways into component caches. A system cache memory controller may manage the component caches and manage access to the component caches. The system cache memory controller may receive system cache access requests specifying component cache identifiers, and match the component cache identifiers with records in a component cache configuration table correlating traits of the component cache identifiers. The component cache traits may include a set shift trait, a set offset trait, and target ways, which may define the locations of the component caches in the system cache. The system cache memory controller may also receive a physical address for the system cache in the system cache access request, determine an indexing mode for the component cache, and translate the physical address for the component cache. | 01-21-2016 |
20160041905 | Cache Line Compaction of Compressed Data Segments - Methods, devices, and non-transitory process-readable storage media for compacting data within cache lines of a cache. An aspect method may include identifying, by a processor of the computing device, a base address (e.g., a physical or virtual cache address) for a first data segment, identifying a data size (e.g., based on a compression ratio) for the first data segment, obtaining a base offset based on the identified data size and the base address of the first data segment, and calculating an offset address by offsetting the base address with the obtained base offset, wherein the calculated offset address is associated with a second data segment. In some aspects, the method may include identifying a parity value for the first data segment based on the base address and obtaining the base offset by performing a lookup on a stored table using the identified data size and identified parity value. | 02-11-2016 |
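An illustrative table lookup matching the recipe above: the base offset is read from a stored table indexed by data size class and by a parity value derived from the base address, then added to the base address to locate the companion segment. Every value in the table, and the parity bit chosen, are assumptions.

```c
#include <stdint.h>

/* Assumed stored table: base offset in bytes indexed by [parity][size class]
 * for a 128-byte cache line; negative entries place the companion segment
 * below the base. None of these values come from the patent. */
static const int32_t offset_table[2][4] = {
    {  32,  64,  96,  128 },   /* parity 0 */
    { -32, -64, -96, -128 }    /* parity 1 */
};

uint64_t companion_address(uint64_t base, unsigned size_class)
{
    unsigned parity = (unsigned)((base >> 7) & 1u);  /* assumed parity bit */
    /* Unsigned modular arithmetic makes the signed offset behave correctly. */
    return base + (uint64_t)(int64_t)offset_table[parity][size_class & 3u];
}
```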
20160062906 | METHOD AND APPARATUS FOR ACCESSING DATA STORED IN A STORAGE SYSTEM THAT INCLUDES BOTH A FINAL LEVEL OF CACHE AND A MAIN MEMORY - A data access system including a storage device and a processor, which includes one or more levels of cache (LOC). In response to data required by the processor not being within the LOC, the processor generates a physical address to be accessed within the storage device in order to retrieve the data. The storage device includes a main memory and a cache module, which is configured as a final level of cache (FLOC) to be accessed by the processor prior to accessing the main memory. The cache module includes a controller that, in response to the data not being cached within the LOC, converts the physical address into a virtual address within the FLOC. The FLOC uses the virtual address to determine whether the data is within the FLOC. If the data is not within the FLOC, the cache module or the processor retrieves the data from the main memory. | 03-03-2016 |
20160077758 | SECURELY SHARING CACHED DATA - Various embodiments of a system and method for securely caching and sharing image data. A process can generate image data and store the image data into the protected cache using a UUID that is cryptographically derived from the image data. Any process with access to the UUID may retrieve the image data. Because the UUID is uniquely derived from the actual data of the generated file, a process can retrieve only image data that it could itself have generated, that a process associated with the same user account could have generated, or of which it otherwise has a record. | 03-17-2016 |
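The content-addressing shape above is easy to show. A real system would derive the UUID with a cryptographic hash such as SHA-256; the FNV-1a stand-in below is used only to keep the sketch dependency-free, and content_id is a hypothetical name.

```c
#include <stddef.h>
#include <stdint.h>

/* The identifier is a pure function of the content: any process that can
 * (re)compute or was given this value can name, and hence retrieve, the
 * cached data. FNV-1a here is a non-cryptographic stand-in. */
uint64_t content_id(const uint8_t *data, size_t len)
{
    uint64_t h = 1469598103934665603ull;   /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 1099511628211ull;             /* FNV prime */
    }
    return h;
}
```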
20160077973 | Cache Bank Spreading For Compression Algorithms - Aspects include computing devices, systems, and methods for implementing cache memory access requests for compressed data using cache bank spreading. In an aspect, cache bank spreading may include determining whether the compressed data of the cache memory access request fits on a single cache bank. In response to determining that the compressed data fits on a single cache bank, a cache bank spreading value may be calculated to replace/reinstate bank selection bits of the physical address of the cache memory access request that may have been cleared during data compression. A cache bank spreading address in the physical space of the cache memory may include the physical address of the cache memory access request plus the reinstated bank selection bits. The cache bank spreading address may be used to read compressed data from, or write compressed data to, the cache memory device. | 03-17-2016 |
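A small sketch of reinstating cleared bank-selection bits with a spreading value taken from higher address bits, so consecutive single-bank compressed lines land on different banks. The bank count and bit positions are assumed.

```c
#include <stdint.h>

#define BANK_SHIFT 6u    /* assumed: bank bits sit above a 64 B offset */
#define BANK_MASK  0x3u  /* assumed: 4 cache banks */

/* Reinstate bank-selection bits (cleared during compression) using a
 * spreading value derived from higher physical-address bits. */
uint64_t spread_address(uint64_t pa)
{
    uint64_t spread = (pa >> 12) & BANK_MASK;  /* cache bank spreading value */
    return (pa & ~((uint64_t)BANK_MASK << BANK_SHIFT))
         | (spread << BANK_SHIFT);
}
```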
20160085672 | Cache Hashing - Cache logic generates a cache address from an input memory address that includes a first binary string and a second binary string. The cache logic includes a hashing engine configured to generate a third binary string from the first binary string and to form each bit of the third binary string by combining a respective subset of bits of the first binary string by a first bitwise operation, wherein the subsets of bits of the first binary string are defined at the hashing engine such that each subset is unique and comprises approximately half of the bits of the first binary string; and a combination unit arranged to combine the third binary string with the second binary string by a reversible operation so as to form a binary output string for use as at least part of a cache address in a cache memory. | 03-24-2016 |
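A hedged C sketch of the two pieces described above: each bit of the hashed string is the XOR-reduction (parity) of a unique mask covering roughly half of the upper address bits, and the result is combined with the lower bits by XOR, which is reversible. The masks are placeholders chosen only so that each has eight set bits and all are distinct.

```c
#include <stdint.h>

/* XOR-reduce (parity of) a 16-bit value. */
static inline unsigned parity16(uint16_t v)
{
    v ^= v >> 8; v ^= v >> 4; v ^= v >> 2; v ^= v >> 1;
    return v & 1u;
}

uint8_t hash_index(uint16_t upper, uint8_t lower)
{
    /* One unique ~8-of-16-bit mask per output bit (assumed values). */
    static const uint16_t masks[8] = {
        0xA5C3, 0x5A3C, 0xC35A, 0x3CA5, 0x9669, 0x6996, 0xF00F, 0x0FF0
    };
    uint8_t hashed = 0;
    for (int b = 0; b < 8; b++)
        hashed |= (uint8_t)(parity16(upper & masks[b]) << b);
    return hashed ^ lower;   /* reversible combination with the lower bits */
}
```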
20160092369 | Partner-Aware Virtual Microsectoring for Sectored Cache Architectures - Embodiments described include systems, apparatuses, and methods using a sectored dynamic random access memory (DRAM) cache. An exemplary apparatus may include at least one hardware processor core and a sectored dynamic random access memory (DRAM) cache coupled to the at least one hardware processor core. | 03-31-2016 |
20160117255 | DEVICE HAVING A CACHE MEMORY - A device has a cache memory for temporarily storing contents of a buffer memory. The device has a mirror unit coupled between the cache memory and the buffer memory. The mirror unit is arranged to provide at least two buffer mirrors at respective different buffer mirror address ranges in the main address range by adapting the memory addressing. Due to the virtual mirrors, the data at a respective address in any of the different buffer mirror address ranges is the data of the buffer memory at a corresponding address in the buffer address range. The device enables processing of a subsequent set of data in the buffer memory via the cache memory, without invalidating the cache, by switching to a different buffer mirror. | 04-28-2016 |
20220138106 | SELECTIVELY PROCESSING STORAGE COMMANDS AT DIFFERENT GRANULARITIES BASED ON COMMAND TYPES - A method of operating a storage appliance is provided. The method includes (a) in response to the appliance receiving a first command to perform a first storage operation on a first plurality of blocks, storing a command record for each block of the first plurality in a cache, each command record respectively indicating an address of that block; (b) upon flushing the command record for each block of the first plurality from the cache to persistent storage, storing data of that block at its indicated address; (c) in response to the storage appliance receiving a second command to perform a second storage operation on a second plurality of blocks, storing, in the cache, an aggregated command record that indicates the second storage operation and an address range of the second plurality, the second storage operation representing an identical change to all blocks of the second plurality; and (d) upon flushing the aggregated command record from the cache to the persistent storage, performing the storage operation indicated by the aggregated command record over the address range indicated by the aggregated command record. | 05-05-2022 |
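A sketch, with invented types, of the two record shapes the entry above contrasts: per-block command records for ordinary writes versus one aggregated record whose flush replays an identical change across a whole address range.

```c
#include <stdint.h>

typedef enum { OP_WRITE, OP_ZERO_FILL } op_t;

typedef struct {            /* one cached record per block */
    uint64_t addr;
    op_t     op;
} block_record_t;

typedef struct {            /* one cached record for the whole range */
    uint64_t start, count;
    op_t     op;            /* identical change applied to every block */
} aggregated_record_t;

/* On flush, replay the identical operation over the recorded range. */
void flush_aggregated(const aggregated_record_t *r,
                      void (*apply)(uint64_t addr, op_t op))
{
    for (uint64_t i = 0; i < r->count; i++)
        apply(r->start + i, r->op);
}
```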
20220138109 | CONTINUOUS READ WITH MULTIPLE READ COMMANDS - A memory device includes a data register operatively coupled to the memory array, a cache operatively coupled to the data register, and an input/output interface operatively coupled to the cache. In response to a page read command, a controller executes a continuous page read operation to sequentially load pages into the data register and move the pages to the cache; in response to a cache read command, the controller executes a cache read operation to move data from the cache to the input/output interface and then stalls movement of data from the cache until a next cache read command; and the controller terminates the continuous page read operation in response to a terminate command. | 05-05-2022 |