Document | Title and Abstract | Date |
20080201553 | NON-VOLATILE MEMORY SYSTEM - This non-volatile memory system includes: a non-volatile memory; and a memory controller controlling read and write of the non-volatile memory. Access control of the non-volatile memory system is performed in accordance with a logical address, using an address translation table within the memory controller that is updated in association with data write and that indicates a correlation between logical addresses provided by a host and physical addresses of the non-volatile memory. The non-volatile memory system is also configured to be able to set a system configuration and function in relation to the host. | 08-21-2008 |
20080209160 | Device, System and Method of Verification of Address Translation Mechanisms - Device, system and method of verification of address translation mechanisms. For example, an apparatus for testing an address translation mechanism of a design-under-test, the apparatus including: a test generator to receive a specification of at least one address translation table, and to generate one or more constraint-satisfaction-problem projectors over a plurality of attributes of said address translation table. | 08-28-2008 |
20080229054 | METHOD FOR PERFORMING JUMP AND TRANSLATION STATE CHANGE AT THE SAME TIME - A method for performing a jump and translation state change procedure at the same time is disclosed. The method includes: carrying out a series of instruction processing in a first function in a first translation state; and executing a jump instruction which jumps to a target address in a second function and initiates and completes a translation state change to a second translation state at the same time; wherein an address of a next instruction after the jump instruction is stored as a return address in a first register. | 09-18-2008 |
20080235486 | Non-volatile memory devices, systems including same and associated methods - A memory device, system and method of editing a file in a non-volatile memory device is described. The memory device includes a controller and a memory array configured to copy an existing first file into a second file during editing and to maintain the first file while applying edits to the second file. When editing is completed, a first cluster pointer of the first file is redirected to point at the first cluster of the second file which has been edited. | 09-25-2008 |
20080256326 | Subsegmenting for efficient storage, resemblance determination, and transmission - Transmitting or storing subsegments is disclosed. A data stream or a data block is received and broken into a plurality of segments. For at least one segment, the segment is broken into a plurality of subsegments. A previously stored or transmitted segment similar to the at least one segment is identified. A fingerprint is computed for at least one subsegment. Using the fingerprint for the at least one subsegment, it is determined whether the at least one subsegment is identical to a subsegment of the previously stored or transmitted segment without directly comparing the content of the at least one subsegment with the content of the subsegment of the previously stored or transmitted segment. | 10-16-2008 |
20080263314 | ADDRESS TRANSLATION APPARATUS WHICH IS CAPABLE OF EASILY PERFORMING ADDRESS TRANSLATION AND PROCESSOR SYSTEM - An address translation apparatus includes first to third retention units, a comparison unit, and a translation unit. The first retention unit retains a multi-bit first address. The second retention unit retains a multi-bit second address different from the first address. The third retention unit retains first information indicating which bit is a translation target among the multiple bits of the first address. The comparison unit compares a multi-bit third address input from outside with the first address. The translation unit translates the bit indicated by the first information among the multiple bits of the third address to obtain a fourth address such that the bit indicated by the first information coincides with the second address, when the third address coincides with the first address based on the comparison result of the comparison unit. | 10-23-2008 |
20080270739 | Management of copy-on-write fault - An embodiment of the invention provides an apparatus and method for management of copy-on-write faults. The apparatus and method include the acts of: assigning a translation to a first physical memory page, where the translation is a virtual memory address to physical memory address translation and where an offset portion in the translation includes a physical address of the first physical memory page; and creating a second physical memory page which is a copy of the first physical memory page. | 10-30-2008 |
20080270740 | Full-system ISA Emulating System and Process Recognition Method - Disclosed is a method of recognizing a process in a full-system Instruction Set Architecture (ISA) emulator, comprising the steps of: recognizing a process based on the base address of its page table; recognizing a switch between processes when said base address of the page table has changed; and recognizing the termination of a recorded process when the base address of the page table of the process which tries to modify the page table is not equal to the base address of the page table of the recorded process. With the recognized process, the binary translation results indexed based on content can be saved into a corresponding process repository, thereby achieving the permanent saving of the translation results and the reuse of translation and optimization on the basis of a previously executed program. Consequently, the overall performance of the full-system Instruction Set Architecture emulator is enhanced. | 10-30-2008 |
20080270741 | STRUCTURE FOR PROGRAM DIRECTED MEMORY ACCESS PATTERNS - Design structures for program directed memory access patterns. A design structure is embodied in a machine readable medium used in a design process, the design structure including a computer memory system for storing and retrieving data. The memory system includes a memory, a memory controller and a virtual memory management system. The memory includes a plurality of memory devices organized into one or more physical groups accessible via associated busses for transferring data and control information. The memory controller receives and responds to memory access requests that contain application access information to control access pattern and data organization within the memory. Responding to a memory access request includes accessing one or more memory devices. The virtual memory management system includes: a plurality of page table entries for mapping virtual memory addresses to real addresses in the memory; a hint state responsive to application access information for indicating how real memory for associated pages is to be physically organized within the memory; and a means for conveying the hint state to the memory controller. | 10-30-2008 |
20080276067 | Method and Apparatus for Page Table Pre-Fetching in Zero Frame Display Channel - A method for a graphics processing unit (“GPU”) to maintain a local cache to minimize system memory reads is provided. A display read request and a logical address are received. The GPU determines whether a local cache contains a physical address corresponding to the logical address. If not, a cache fetch command is generated, and a number of cache lines is retrieved from a table, which may be a GART table, in the system memory. The logical address is converted to a corresponding physical address of the memory when the cache lines are retrieved from the table so that data in memory may be accessed by the GPU. When a cache line in the local cache is consumed, a next line cache fetch request is generated to retrieve a next cache line from the table so that the local cache maintains a predetermined amount of cache lines. | 11-06-2008 |
20080282054 | SEMICONDUCTOR DEVICE HAVING MEMORY ACCESS MECHANISM WITH ADDRESS-TRANSLATING FUNCTION - A pseudo-physical address is used for accessing a memory from a CPU (Central Processing Unit). One of function blocks that is needed for the current application program is selected based on the pseudo-physical address, and the pseudo-physical address is translated to a real physical address by the selected function block. There are provided parallel lines of memory access functions extending from the CPU, whereby it is possible to perform an optimal memory access transaction for each application program, and it is possible to improve the memory access performance without lowering the operation frequency and without increasing the number of cycles required for a memory access. | 11-13-2008 |
20080288743 | APPARATUS AND METHOD OF MANAGING MAPPING TABLE OF NON-VOLATILE MEMORY - An apparatus and method for managing a mapping table of a non-volatile memory are provided. The apparatus includes a non-volatile memory having memory cells, each of which stores data bits in a plurality of pages included in a block according to a plurality of states, each of which has at least two bits, an operating time measuring unit measuring a write operation time on each of the plurality of pages included in the block, and a mapping table generating unit dividing the pages into a plurality of groups according to the measured write operation time and generating a mapping table by using the divided groups. | 11-20-2008 |
20080301398 | LINEAR TO PHYSICAL ADDRESS TRANSLATION WITH SUPPORT FOR PAGE ATTRIBUTES - Embodiments of the invention are generally directed to systems, methods, and apparatuses for linear to physical address translation with support for page attributes. In some embodiments, a system receives an instruction to translate a memory pointer to a physical memory address for a memory location. The system may return the physical memory address and one or more page attributes. Other embodiments are described and claimed. | 12-04-2008 |
20090024824 | PROCESSING SYSTEM HAVING A SUPPORTED PAGE SIZE INFORMATION REGISTER - A processing system includes initialization software that is executable by a processor to identify one or more memory page sizes supported by the processing system. The supported memory page sizes that are identified by the initialization software are stored in one or more memory page size identification registers. Individual bits of the one or more memory page size identification registers may be respectively associated with a memory page size. Whether a memory page size is supported by the processing system may be determined by checking the logic state of the individual bit corresponding to the memory page size. | 01-22-2009 |
20090024825 | Real Time Paged Computing Device and Method of Operation - A component of a computing device, such as the kernel of an operating system, is arranged to identify real time processes running on the device and transparently lock the memory owned by such processes to avoid them being paged out. The kernel is also able to inspect all inter-process communications originated by the real time threads running in such processes, in order to ascertain what other processes they invoke, and, if they have the potential to block a real time operation, the kernel is arranged to lock the areas of memory these processes reference. This procedure operates recursively, and ensures that page faults which might affect the operation of any real time process do not occur. | 01-22-2009 |
20090049271 | Consolidation of matching memory pages - A method and apparatus for managing memory allocation using memory pages. A first physical memory page is compared with a second physical memory page, wherein the first physical memory page is associated with a first page table and the second physical memory page is associated with a second page table. If the second physical memory page matches the first physical memory page, the second physical memory page is deallocated, and the second page table is associated with the first physical memory page. | 02-19-2009 |
20090077343 | STORAGE APPARATUS HAVING VIRTUAL-TO-ACTUAL DEVICE ADDRESSING SCHEME - A storage apparatus includes a storage unit and a controller, wherein control of inputting/outputting data from/to a device provided in said storage unit is executed in accordance with a request received by said storage apparatus. An actual device of the storage apparatus corresponds to a virtual device which is external to said storage apparatus. The controller operates to perform a process for mapping an actual device address corresponding to a virtual device address, in accordance with a specification of the actual device to be mounted or unmounted to correspond to the virtual device, and storing and retaining mapping information obtained from the mapping in a first table. The controller also performs a data input/output process for receiving an access request for data input/output in which said virtual device address is specified, obtaining the actual device address mapped to said specified virtual device address in said first table, and accessing the actual device by said obtained actual device address. | 03-19-2009 |
20090100245 | Device and method for memory addressing - An addressing device and method is provided to enable an electronic system having a less addressing capability to address a memory device having a larger storage space, thereby reducing the manufacture cost of the electronic system. The addressing device includes an address decoder and an address translator. The address decoder receives a first access address belonging to a smaller address space, and determines whether to map the first access address to the larger storage space of the memory device. The address translator is coupled to the address decoder. When the first access address is mapped to the storage space of the memory device, the address translator translates the first access address into a second access address of the larger storage space according to an adjustable base address. | 04-16-2009 |
20090106524 | METHODS FOR ACCESSING MULTIPLE PAGE TABLES IN A COMPUTER SYSTEM - A virtual memory system implementing the invention provides concurrent access to translations for virtual addresses from multiple address spaces. One embodiment of the invention is implemented in a virtual computer system, in which a virtual machine monitor supports a virtual machine. In this embodiment, the invention provides concurrent access to translations for virtual addresses from the respective address spaces of both the virtual machine monitor and the virtual machine. Multiple page tables contain the translations for the multiple address spaces. Information about an operating state of the computer system, as well as an address space identifier, are used to determine whether, and under what circumstances, an attempted memory access is permissible. If the attempted memory access is permissible, the address space identifier is also used to determine which of the multiple page tables contains the translation for the attempted memory access. | 04-23-2009 |
20090158003 | STRUCTURE FOR A MEMORY-CENTRIC PAGE TABLE WALKER - A design structure embodied in a machine readable storage medium for at least one of designing, manufacturing, and testing a design is provided. The design structure includes a page table walker. The page table walker is moved from its conventional location in the memory management unit associated with the data processor to a location in main memory, i.e., the main memory controller. As a result, the processing of requests for data can selectively avoid or bypass cumbersome caches associated with the data processor. | 06-18-2009 |
20090164749 | COUPLED SYMBIOTIC OPERATING SYSTEMS - A single application can be executed across multiple execution environments in an efficient manner if at least a relevant portion of the virtual memory assigned to the application was equally accessible by each of the multiple execution environments. A request by a process in one execution environment can, thereby, be directed to an operating system, or other core software, in another execution environment and can be made by a shadow of the requesting process in the same manner as the original request was made by the requesting process itself. Because of the memory invariance between the execution environments, the results of the request will be equally accessible to the original requesting process even though the underlying software that responded to the request may be executing in a different execution environment. A similar thread invariance can be maintained to provide for accurate translation of requests between execution environments. | 06-25-2009 |
20090172341 | USING A MEMORY ADDRESS TRANSLATION STRUCTURE TO MANAGE PROTECTED MICRO-CONTEXTS - Embodiments of an invention for using a memory address translation structure to manage protected micro-contexts are disclosed. In one embodiment, an apparatus includes an interface and memory management logic. The interface is to perform a transaction to fetch information from a memory. The memory management logic is to translate an untranslated address to a memory address. The memory management logic includes a storage location, a series of translation stages, and determination logic. The storage location is to store an address of a data structure for the first translation stage. Each of the translation stages includes translation logic to find an entry in a data structure based on a portion of the untranslated address. Each entry is to store an address of a different data structure for the first translation stage, an address of a data structure for a successive translation stage, or the physical address. The determination logic is to determine whether an entry is storing an address of a different data structure for the first translation stage. | 07-02-2009 |
20090172342 | ROBUST INDEX STORAGE FOR NON-VOLATILE MEMORY - A non-volatile memory data address translation scheme is described that utilizes a hierarchal address translation system that is stored in the non-volatile memory itself. Embodiments of the present invention utilize a hierarchal address data and translation system wherein the address translation data entries are stored in one or more data structures/tables in the hierarchy, one or more of which can be updated in-place multiple times without having to overwrite data. This hierarchal address translation data structure and multiple update of data entries in the individual tables/data structures allow the hierarchal address translation data structure to be efficiently stored in a non-volatile memory array without markedly inducing write fatigue or adversely affecting the lifetime of the part. The hierarchal address translation of embodiments of the present invention also allow for an address translation layer that does not have to be resident in system RAM for operation. | 07-02-2009 |
20090182971 | DYNAMIC ADDRESS TRANSLATION WITH FETCH PROTECTION - What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated is first obtained and an initial origin address of a translation table of the hierarchy of translation tables is obtained. Based on the obtained initial origin, a segment table entry is obtained. The segment table entry is configured to contain a format control and access validity fields. If the format control and access validity fields are enabled, the segment table entry further contains an access control field, a fetch protection field, and a segment-frame absolute address. Store operations are permitted only if the access control field matches a program access key provided by any one of a Program Status Word or an operand of a program instruction being executed. Fetch operations are permitted if the program access key associated with the virtual address is equal to the segment access control field. | 07-16-2009 |
20090182972 | DYNAMIC ADDRESS TRANSLATION WITH FORMAT CONTROL - What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated and an initial origin address of a translation table of the hierarchy of translation tables are obtained. An index portion of the virtual address is used to reference an entry in the translation table. If a format control field contained in the translation table entry is enabled, the table entry contains a frame address of a large block of data of at least 1M byte in size. The frame address is then combined with an offset portion of the virtual address to form the translated address of a small 4K byte block of data in main storage or memory. | 07-16-2009 |
20090187728 | DYNAMIC ADDRESS TRANSLATION WITH CHANGE RECORDING OVERRIDE - What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated and an initial origin address of a translation table of the hierarchy of translation tables are obtained. A segment table entry obtained from a segment table contains a format control field. If the format control field is enabled, a segment-frame absolute address of a large block of data in main storage is obtained from the segment table entry. Each 4K byte block of data within the large block has an associated storage key. Store operations associated with the virtual address are performed to the desired block of data. If the change recording override field is disabled, the change bit of the storage key associated with the desired 4K byte block is set to 1. An indication is then provided that the desired 4K byte block has been modified. | 07-23-2009 |
20090187729 | Separate Page Table Base Address for Minivisor - In one embodiment, a processor supports an alternate address space during execution of non-guest code (such as a minivisor or a virtual machine monitor (VMM)). The alternate address space may be the guest address space. An instruction in the minivisor/VMM may specify the alternate address space for a data access, permitting the minivisor/VMM to read guest memory state via the alternate address space. In another embodiment, a processor may implement a page table base address register dedicated for the minivisor's use. In still another embodiment, the minivisor may be implemented as a specified entry point in the VMM address space. | 07-23-2009 |
20090198951 | Full Virtualization of Resources Across an IP Interconnect - An addressing model is provided where all resources, including memory and devices, are addressed with internet protocol (IP) addresses. A task, such as an application, may be assigned a range of IP addresses rather than an effective address range. Thus, a processing element, such as an I/O adapter or even a printer, for example, may also be addressed using IP addresses without the need for library calls, device drivers, pinning memory, and so forth. This addressing model also provides full virtualization of resources across an IP interconnect, allowing a process to access an I/O device across a network. | 08-06-2009 |
20090198952 | Memory Mapping Architecture - Memory mapping techniques for non-volatile memory are disclosed where logical sectors are mapped into physical pages using data structures in volatile and non-volatile memory. In some implementations, a first lookup table in non-volatile memory maps logical sectors directly into physical pages. A second lookup table in volatile memory holds the physical address of the first lookup table in non-volatile memory. In some implementations, a cache in volatile memory holds the physical addresses of the most recently written logical sectors. Also disclosed is a block TOC describing block content which can be used for garbage collection and restore operations. | 08-06-2009 |
20090216992 | DYNAMIC ADDRESS TRANSLATION WITH TRANSLATION EXCEPTION QUALIFIER - What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated and an initial origin address of a translation table of the hierarchy of translation tables are obtained. Dynamic address translation of the virtual address proceeds. In response to a translation interruption having occurred during dynamic address translation, bits are stored in a translation exception qualifier (TXQ) field to indicate that the exception was either a host DAT exception having occurred while running a host program or a host DAT exception having occurred while running a guest program. The TXQ is further capable of indicating that the exception was associated with a host virtual address derived from a guest page frame real address or a guest segment frame absolute address. The TXQ is further capable of indicating that a larger or smaller host frame size is preferred to back a guest frame. | 08-27-2009 |
20090216993 | System and Method of Data Forwarding Within An Execution Unit - In an embodiment, a method is disclosed that includes comparing, during a write-back stage at an execution unit, a write identifier associated with a result to be written to a register file from execution of a first instruction to a read identifier associated with a second instruction at an execution pipeline within an interleaved multi-threaded (IMT) processor having multiple execution units. When the write identifier matches the read identifier, the method further includes storing the result at a local memory of the execution unit for use by the execution unit in a subsequent read stage. | 08-27-2009 |
20090222642 | MECHANISM FOR VISUALIZING MEMORY FRAGMENTATION - A method, system and computer program product for visualizing memory fragmentation in a data processing system includes determining a mobility status of plural memory pages and generating a map display depicting the plural memory pages and the mobility status. | 09-03-2009 |
20090249022 | METHOD FOR ACHIEVING SEQUENTIAL I/O PERFORMANCE FROM A RANDOM WORKLOAD - Some embodiments of the present invention provide methods, computer media encoding instructions, and systems for receiving write requests directed to non-sequential logical block addresses and writing the write requests to sequential disk block addresses in a storage system. Some embodiments further include overprovisioning a storage system to include an increment of additional storage space such that it is more likely a large enough sequential block of storage will be available to accommodate incoming write requests. | 10-01-2009 |
20090271590 | METHOD AND SYSTEM FOR LATENCY OPTIMIZED ATS USAGE - Methods and systems for latency optimized ATS usage are disclosed. Aspects of one method may include communicating a memory access request using an untranslated address and also an address translation request using the same untranslated address, where the translation request may be sent without waiting for a result of the memory access request. The memory access request and the address translation request may be made in either order. A translation agent may be used to translate the untranslated address, and the translated address may be communicated to the device that made the memory access request. The translated address may also be used to make the memory access. Accordingly, by communicating the translated address without having to wait for completion of the memory access, or vice versa, the requesting device may reduce latency for memory accesses when using untranslated addresses. | 10-29-2009 |
20090276604 | ASSIGNING MEMORY FOR ADDRESS TYPES - Various example implementations are disclosed. According to one example, an integrated circuit may include a key extractor, a translation table block, and a memory assigner. The key extractor may be configured to receive data, extract key-related information from the data, and send the key-related information to a first memory device. The translation table block may be configured to update a mapping table based on a memory assigner assigning physical portions of the first memory device to each of a plurality of address types, receive an index from the first memory device in response to the key extractor sending the key-related information to the first memory device, and send a data request to a second memory device based on the received index, the data request identifying a physical portion of the second memory device. | 11-05-2009 |
20090287901 | SYSTEM AND METHOD FOR CONTENT REPLICATION DETECTION AND ELIMINATION IN MAIN MEMORY - A system and method for effectively increasing the amount of data that can be stored in the main memory of a computer, particularly by a hardware enhancement of a memory controller apparatus that detects and eliminates duplicate memory contents, wherein the detection and elimination are performed by hardware without imposing any penalty on the overall performance of the system. | 11-19-2009 |
20090287902 | DISTRIBUTED COMPUTING SYSTEM WITH UNIVERSAL ADDRESS SYSTEM AND METHOD - A distributed computing system that incorporates enhanced distributed storage and a universal address system and method are provided. | 11-19-2009 |
20090300318 | ADDRESS CACHING STORED TRANSLATION - Systems and/or methods that facilitate logical block address (LBA) to physical block address (PBA) translations associated with a memory component(s) are presented. The disclosed subject matter employs an optimized block address (BA) component that can facilitate caching the LBA to PBA translations within a memory controller component based in part on a predetermined optimization criteria to facilitate improving the access of data associated with the memory component. The predetermined optimization criteria can relate to a length of time since an LBA has been accessed, a number of times the LBA has been accessed, a data size of data related to an LBA, and/or other factors. The LBA to PBA translations can be utilized to facilitate accessing the LBA and/or associated data using the cached translation, instead of performing various functions to determine the translation. | 12-03-2009 |
20090307462 | MARK PAGE-OUT PAGES AS CRITICAL FOR COOPERATIVE MEMORY OVER-COMMITMENT - Disclosed is a computer implemented method and apparatus for marking as critical a virtual memory page in a data processing system. An operating system indicates to a virtual memory manager a virtual memory page selected for paging-out to disk. The operating system determines that the data processing system is using a cooperative memory over-commitment. The operating system, responsive to a determination that the data processing system is using cooperative memory over-commitment, marks the virtual memory page as critical, such that the virtual memory page remains in physical memory. The operating system, responsive to marking the virtual memory page as critical, sets the virtual memory page to a page-out state. | 12-10-2009 |
20090327645 | SWITCH, INFORMATION PROCESSING APPARATUS, AND ADDRESS TRANSLATION METHOD - A switch connects and disconnects an input and output control device to and from an input and output device. The switch includes a storage unit that stores therein a translation table for use in translating a physical address used on a virtual machine that a guest operating system specifies as a direct memory access transfer destination to the input and output device, into a physical address used on a real machine; and an address translating unit that translates an address contained in a direct memory access request issued by the input and output device into a physical address used on the real machine by referring to the translation table. | 12-31-2009 |
20100005270 | STORAGE UNIT MANAGEMENT METHODS AND SYSTEMS - Storage unit management methods and systems are provided. The storage unit comprises a plurality of physical blocks, wherein each has one of a plurality of block type definitions. First, a sub-write command is obtained, wherein the sub-write command requests to write data to at least one logical page of a logical block. It is determined whether a candidate block having a first block type definition exists in the storage unit, wherein the logical page of the logical block cannot map to the candidate block based on the first block type definition. If the candidate block exists, the block type definition of the candidate block is transformed from the first block type definition to a second block type definition. Data is written to a specific page of the candidate block, and a mapping relationship between the logical page of the logical block and the specific page of the candidate block is recorded. | 01-07-2010 |
20100011186 | SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE - A processor including logic to execute an instruction to synchronize a mapping, stored in a translation lookaside buffer (TLB), from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. | 01-14-2010 |
20100011187 | Performance enhancement of address translation using translation tables covering large address spaces - An embodiment of the present invention is a technique to enhance address translation performance. A register stores capability indicators to indicate capability supported by a circuit in a chipset for address translation of a guest physical address to a host physical address. A plurality of multi-level page tables is used for page walking in the address translation. Each of the page tables has page table entries. Each of the page table entries has at least an entry specifier corresponding to the capability indicated by the capability indicators. | 01-14-2010 |
20100030997 | VIRTUAL MEMORY MANAGEMENT - A method for managing a virtual memory system configured to allow multiple page sizes is described. Each page size has at least one table associated with it. The method involves maintaining entries in the tables to keep track of the page size for which the effective address is mapped. When a new effective address to physical address mapping needs to be made for a page size, the method accesses the appropriate tables to identify prior mappings for another page size in the same segment. If no such conflicting mapping exists, it creates a new mapping in the appropriate table. A formula is used to generate an index to access a mapping in a table. | 02-04-2010 |
20100030998 | Memory Management Using Transparent Page Transformation - Memory space is managed to release storage area occupied by pages similar to stored reference pages. The memory is examined to find two similar pages, and a transformation is obtained. The transformation enables reconstructing one page from the other. The transformation is then stored and one of the pages is discarded to release its memory space. When the discarded page is needed, the remaining page is fetched, and the transformation is applied to the page to regenerate the discarded page. | 02-04-2010 |
20100030999 | Process and Method for Logical-to-Physical Address Mapping in Solid State Disks - An embodiment of the invention relates to a mass storage device including a nonvolatile memory device with a plurality of memory management blocks and an address translation table formed with pointers to locations of the memory management blocks. A volatile memory device is included with an address index table formed with pointers to the pointers to the locations of the memory management blocks. The address index table is stored in the nonvolatile memory upon loss of bias voltage. Changes to the address translation table are accumulated in the volatile memory and written to the address translation table when at least a minimum quantity of the changes has been accumulated. The changes to the logical block address translation table accumulated in the volatile memory are written to a page in the address translation table after prior data in the page has been updated, written to another page, and then erased. | 02-04-2010 |
20100070735 | EMBEDDED MAPPING INFORMATION FOR MEMORY DEVICES - Memory modules and methods of operating memory modules embed mapping information within blocks of memory cells to which the mapping information pertains. In particular, when a page is written for a logical data block, that page includes a snapshot of the current mapping information for that logical data block. In this manner, the last valid physical page of a logical data block will contain a physical/logical mapping of that block. Thus, instead of scanning every valid page of the memory device to rebuild the mapping information, the memory module may scan only for the last valid physical page of each logical data block. Once the last valid physical page is discovered for a logical data block, the latest mapping information for that logical data block may be read from that page. | 03-18-2010 |
20100082937 | DATA GENERATING DEVICE, SCANNER AND COMPUTER PROGRAM - A data generating device may comprise a data identifying unit, a number identifying unit and a hyperlink structuring unit. The data identifying unit may be configured to identify the data of the contents table page and/or the index page included in a set of data including data of a contents table page and/or an index page, and data of a plurality of normal pages, data of each normal page including a page number. The number identifying unit may be configured to identify a number included in the data of the contents table page and/or the index page identified by the data identifying unit, and to identify a specific position at which the identified number is located in the data of the contents table page and/or the index page. The hyperlink structuring unit may be configured to generate data of hyperlink structure from the set of data by generating a hyperlink, at a position corresponding to the specific position of the number identified by the number identifying unit, that links to data of a normal page including a page number that coincides with the number identified by the number identifying unit. | 04-01-2010 |
20100095084 | TRANSLATION LAYER IN A SOLID STATE STORAGE DEVICE - Solid state storage devices and methods for flash translation layers are disclosed. In one such translation layer, a sector indication is translated to a memory location by a parallel unit look-up table that is populated by memory device enumeration at initialization. Each table entry is comprised of communication channel, chip enable, logical unit, and plane for each operating memory device found. When the sector indication is received, a modulo function operates on entries of the look-up table in order to determine the memory location associated with the sector indication. | 04-15-2010 |
20100100700 | ADAPTIVELY PREVENTING OUT OF MEMORY CONDITIONS - A computer-implemented method of preventing an out-of-memory condition can include evaluating usage of virtual memory of a process executing within a computer, detecting a low memory condition in the virtual memory for the process, and selecting at least one functional program component of the process according to a component selection technique. The method also can include sending a notification to each selected functional program component and, responsive to receiving the notification, each selected functional program component releasing at least a portion of a range of virtual memory reserved on behalf of the selected functional program component. | 04-22-2010 |
20100100701 | OPTIMIZING DEFRAGMENTATION OPERATIONS IN A DIFFERENTIAL SNAPSHOTTER - A method for establishing and maintaining a differential snapshot of a set of files stored on a volume is disclosed. The invention achieves processing time and disk space optimizations by avoiding copy-on-write operations for logically insignificant moves of blocks, such as the block rearrangements characteristic of defragmentation utilities. A file system enhancement enabling the passing of a block copy command from the file system to lower-level drivers, is used to inform the snapshotter that a block move operation is not logically meaningful. When the logically insignificant move is of a block whose data forms part of the data captured in the snapshot virtual volume, and when the move is to a block location that is functioning as logical free space, the snapshotter can simply modify its block bitmap and update translation table entries without needing to perform a copy-on-write. | 04-22-2010 |
20100153682 | MIXED TECHNIQUES FOR HTML CROSSTAB RENDERING - What is described is a method and system for rendering HTML tables and crosstabs when insufficient data is available about the structure of the tables and data elements are positioned relative to the top-left corner of the table rather than their first container, which is a table data (TD) element. | 06-17-2010 |
20100161934 | PRESELECT LIST USING HIDDEN PAGES - Disclosed is a computer implemented method, computer program product, and apparatus for maintaining a preselect list. The method comprises software components detecting a page fault of a memory page. In response to detecting a page fault, the software components determine whether the memory page is referenced in the preselect list and unhide the memory page. Upon determining whether the memory page is referenced in the preselect list, the software components remove an entry of the preselect list corresponding to the memory page to form at least one removed candidate page and skip paging-out of the at least one removed candidate page. | 06-24-2010 |
20100180098 | CONFIGURABLE DECODER WITH APPLICATIONS IN FPGAS - The invention relates to hardware decoders that efficiently expand a small number of input bits to a large number of output bits, while providing considerable flexibility in selecting the output instances. One main area of application of the invention is in pin-limited environments, such as field programmable gate arrays (FPGAs) used with dynamic reconfiguration. The invention includes a mapping unit that is a circuit, possibly in combination with a reconfigurable memory device. The circuit has as input a z-bit source word having a value at each bit position and it outputs an n-bit output word, where n>z, where the value of each bit position of the n-bit output word is based upon the value of a pre-selected hardwired one of the bit positions in the z-bit source word, where the said pre-selected hardwired bit position is selected by a selector address. The invention may include a second reconfigurable memory device that outputs the z-bit source word, based upon an x-bit source address input to the second memory device, where x<z. | 07-15-2010 |
20100185830 | LOGICAL ADDRESS OFFSET - The present disclosure includes methods, devices, and systems for a logical address offset. One method embodiment includes detecting a memory unit formatting operation. Subsequently, in response to detecting the formatting operation, the method includes inspecting format information on the memory unit, calculating a logical address offset, and applying the offset to a host logical address. | 07-22-2010 |
20100217950 | COMPUTER APPARATUS AND CONTROL METHOD - A computer system with a physical computer having a physical processor, physical memory, virtual computer and virtual computer controller is disclosed. The virtual computer has its own processor and memory, which are virtual components that are provided by logically dividing the physical processor and memory, respectively. The virtual computer also has a page table storing a physical/virtual memory address correspondence relationship, and a protection object table for address management of a protected address space in the virtual memory. The controller includes a protection exception processing unit, protection exception save region, virtual/physical memory address converter, and instruction analyzer. Upon execution of protection exception processing, the controller compares the instruction address at which the protection exception was generated to the instruction address in the saved protection exception information. If these are identical, a pseudo-instruction is used to execute the protection exception processing, thereby reducing the total processing amount required. | 08-26-2010 |
20100217951 | R and C Bit Update Handling - In one embodiment, a processor comprises a memory management unit (MMU) and an interface unit coupled to the MMU and to an interface unit of the processor. The MMU comprises a queue configured to store pending hardware-generated page table entry (PTE) updates. The interface unit is configured to receive a synchronization operation on the interface that is defined to cause the pending hardware-generated PTE updates, if any, to be written to memory. The MMU is configured to accept a subsequent hardware-generated PTE update generated subsequent to receiving the synchronization operation even if the synchronization operation has not completed on the interface. In some embodiments, the MMU may accept the subsequent PTE update responsive to transmitting the pending PTE updates from the queue. In other embodiments, the pending PTE updates may be identified in the queue and subsequent updates may be received. | 08-26-2010 |
20100228943 | ACCESS MANAGEMENT TECHNIQUE FOR STORAGE-EFFICIENT MAPPING BETWEEN IDENTIFIER DOMAINS - Access management techniques have been developed to specify and facilitate mappings between I/O and host domains in ways that are storage-efficient and which can provide flexibility in the form, granularity and/or extent of mappings, attributes and access controls coded relative to a particular I/O domain. Indeed, different identifier and/or operation translation models may be employed on a per logical device (or even a per sub-window) basis. In general, the flexibility and efficiency afforded using some embodiments of the present invention can be desirable, particularly as numbers of I/O domains increase, such as in the case of virtualization system implementations in which a multiplicity of logical I/O devices may be represented using underlying physical resources. | 09-09-2010 |
20100262804 | Effective Memory Clustering to Minimize Page Fault and Optimize Memory Utilization - An embodiment of the invention provides a method for effective memory clustering to minimize page faults and optimize memory utilization. More specifically, the method monitors data access requests to secondary storage and identifies data addresses in secondary storage having similar properties. Multi-dimensional clusters are created based on the monitoring to group the data addresses having similar properties. A memory page is created from a multi-dimensional cluster, wherein a cross-sectional partition is created (sliced) from the multi-dimensional cluster. The method receives a request for a data object in secondary storage and identifies a data address corresponding to the requested data object. The data address is mapped to the multi-dimensional cluster and/or the memory page; and, the memory page is transferred to a data cache in primary storage. | 10-14-2010 |
20100312985 | Mechanism for a Lockless Ring Buffer in Overwrite Mode - In one embodiment, a mechanism for a lockless ring buffer in overwrite mode is disclosed. In one embodiment, a method for implementing a lockless ring buffer in overwrite mode includes aligning memory addresses for each page of a ring buffer to form maskable bits in the address to be used as a state flag for the page and utilizing at least a two least significant bits of each of the addresses to represent the state flag associated with the page represented by the address, wherein the state flag indicates one of three states including a header state, an update state, and a normal state. The method further includes combining a movement of a head page pointer to a head page of the ring buffer with a swapping of the head page and a reader page, the combining comprising updating the state flag of the head page pointer to the normal state and updating the state flag of a pointer to the page after the head page to the header state, and moving the head page and a tail page of the ring buffer, the moving comprising updating the state flags of one or more pointers in the ring buffer associated with the head page and the tail page. | 12-09-2010 |
20110016289 | Apparatus and Method for Profiling Software Performance on a Processor with Non-Unique Virtual Addresses - A system includes a processor with a memory map specifying a user mode region with virtual address translation by a memory management unit and a kernel mode region with direct virtual address translation. The processor executes an application in the user mode region where virtual addresses are not unique. A probe receives trace information from the processor. A host system receives the trace information from the probe. The host system includes a data structure associating a process name, a process identification and a set of instruction counters. Each instruction counter is incremented upon the processing of a designated virtual address within the trace information. A profiling module processes information associated with the process name and set of instruction counters to identify a performance problem in the application. | 01-20-2011 |
20110022818 | IOMMU USING TWO-LEVEL ADDRESS TRANSLATION FOR I/O AND COMPUTATION OFFLOAD DEVICES ON A PERIPHERAL INTERCONNECT - An IOMMU for controlling requests by an I/O device to a system memory of a computer system includes control logic and a cache memory. The control logic may translate an address received in a request from the I/O device. If the request includes a transaction layer protocol (TLP) packet with a process address space identifier (PASID) prefix, the control logic may perform a two-level guest translation. Accordingly, the control logic may access a set of guest page tables to translate the address received in the request. A pointer in a last guest page table points to a first table in a set of nested page tables. The control logic may use the pointer in a last guest page table to access the set of nested page tables to obtain a system physical address (SPA) that corresponds to a physical page in the system memory. The cache memory stores completed translations. | 01-27-2011 |
20110029755 | PROCESSOR AND ARITHMETIC OPERATION METHOD - A processor has a first table including an entry that associates a logical address with a physical address of a page that manages a virtual space address. The processor determines, when a target logical address accessed by one of the threads is translated to the physical address, whether an entry corresponding to the target logical address is present in the first table, where the target logical address is of a page accessed by a program. The processor determines, when the entry corresponding to the target logical address is not present in the first table, whether the target logical address has been accessed during the running of the program. The processor delays, when the target logical address has not yet been accessed, the process of reading the entry corresponding to the target logical address from a page table into the first table by a predetermined time to thereby delay the one thread. | 02-03-2011 |
20110040950 | TRANSLATION LOOK-ASIDE BUFFER - A translation look-aside buffer (TLB) is described. The TLB may include a memory populated with pointers to collections (e.g., tables) of virtual-to-physical address translations. The memory may be populated by, for example, a page fault logic in response to resolving a page fault. The TLB may also include a signal logic to receive a virtual address and to selectively provide either a miss signal or a pointer to a collection of virtual-to-physical translations. The signal logic may provide the miss signal upon determining that the virtual address is not associated with a stored pointer and may provide a pointer upon determining that the virtual address is associated with the pointer. | 02-17-2011 |
20110072233 | Method for Distributing Data in a Tiered Storage System - This disclosure provides a method for assigning data in a plurality of physical storage resources for an information handling system. The plurality of physical storage resources includes a first tier of physical storage resources and a second tier which has a lower performance and cost relative to capacity than each of the first tier. A tier manager may be hosted on the information handling system and in electronic communication with the plurality of physical storage resources. The tier manager may: determine a seek distance value for each page, determine an operation rate for each page, determine an operation size value for each page, determine an elapsed time value for each page; and calculate a relative randomness value for each page using the seek distance value, operation rate, operation size value, and elapsed time value determined for each page. A classification module may assign a physical location for each page such that the relative randomness value for each page in the first tier is greater than the relative randomness value for each page in the second tier. | 03-24-2011 |
20110087857 | AUTOMATIC PAGE PROMOTION AND DEMOTION IN MULTIPLE PAGE SIZE ENVIRONMENTS - Functionality can be implemented in a virtual memory manager (VMM) to allow small pages (e.g., 4 KB) to be coalesced into large pages (e.g., 64 KB), so that a single free list can be maintained for the large pages (“maintained pages”). When a process requests a small page, the VMM can associate a maintained page with a memory segment accessible by the process. Then, the maintained page can be divided to form a set of small pages (“fragments”). The fragments can become available pages in a broken page list. The VMM can satisfy the request by allocating one of the fragments in the broken page list. If the process requests additional small pages, the additional requests can be satisfied from the broken page list. When the process terminates, the fragments in the broken page list become a maintained page and can be returned to the free list. | 04-14-2011 |
20110087858 | Memory management unit - A data processing apparatus is provided comprising a plurality of master devices configured to issue memory access requests including virtual addresses. A memory management unit is configured to receive memory access requests and to translate a virtual address included in a memory access request from a requesting master device into a physical address indicating a storage location in memory. The memory management unit has an internal storage unit having a plurality of entries wherein indications of corresponding virtual address portions and physical address portions are stored. The memory management unit is configured to select an entry of the internal storage unit in dependence on the virtual address and an identifier of the requesting master device. Conflict between the master devices in their usage of the internal storage unit is thus avoided. | 04-14-2011 |
20110113216 | INFORMATION PROCESSING APPARATUS - An information processing apparatus includes: a ROM for storing a program therein; a RAM for temporarily storing therein the program read from the ROM; a program execution unit that is adapted to read and execute the program from the ROM or the RAM; a memory management unit that translates a virtual address output by the program execution unit to a physical address of the ROM or the RAM; a page table storage unit for storing therein a page table which is referred to by the memory management unit, and in which mapping data of a virtual address with a physical address of the ROM or the RAM corresponding to the virtual address is stored; a detection unit that detects change of an event in the information processing apparatus; an operation switching unit that is adapted to instruct, when the detection unit detects the change of the event during a ROM-operation in which the program execution unit reads the program from the ROM, switching from the ROM-operation to a RAM-operation in which the program execution unit reads the program from the RAM; and a page table updating unit that updates the page table which is referred to by the memory management unit, depending on the instruction of the operation switching unit. | 05-12-2011 |
20110119466 | Clearing Selected Storage Translation Buffer Entries Based On Table Origin Address - An instruction is provided to perform clearing of selected address translation buffer entries (TLB entries) associated with a particular address space, such as segments of storage or regions of storage. The buffer entries relate to segment table entries, region table entries, or ASCE addresses. The instruction can be implemented by software emulation, hardware, firmware or some combination thereof. | 05-19-2011 |
20110131388 | ACCESSING MULTIPLE PAGE TABLES IN A COMPUTER SYSTEM - A virtual memory system implementing the invention provides concurrent access to translations for virtual addresses from multiple address spaces. One embodiment of the invention is implemented in a virtual computer system, in which a virtual machine monitor supports a virtual machine. In this embodiment, the invention provides concurrent access to translations for virtual addresses from the respective address spaces of both the virtual machine monitor and the virtual machine. Multiple page tables contain the translations for the multiple address spaces. Information about an operating state of the computer system, as well as an address space identifier, are used to determine whether, and under what circumstances, an attempted memory access is permissible. If the attempted memory access is permissible, the address space identifier is also used to determine which of the multiple page tables contains the translation for the attempted memory access. | 06-02-2011 |
20110131389 | METHOD FOR UPDATING DATA IN MEMORIES USING A MEMORY MANAGEMENT UNIT - A method for updating, in the background, data stored in physical memories without affecting the current operations performed by the microprocessor. When the update is completely terminated, the application switches from an old version to a new version. This switching occurs by a reconfiguration of the page table during which a first sub-tree structure of pointers accessing the old version of data stored in memories is replaced by a second sub-tree structure of pointers thus allowing access to the new version of data. This update method prevents incoherent transitory states of the system as the latter works with the previous data version until the installation of the new version becomes usable. In the case of an interruption to the update process, the application can always reinitialize the update since the old version of data can be reactivated by returning to the previous configuration of the page table. | 06-02-2011 |
20110179249 | Data Storage Device and Method for Handling Data Read Out from Memory - The invention provides a method for handling data read out from a memory. In one embodiment, a controller corresponding to the memory comprises a ping-pong buffer. First, a first sector read time period required by the memory to read and output a data sector to the ping-pong buffer is calculated. A second sector read time period required by a host to read a data sector from the ping-pong buffer is calculated. A page switch time period required by the memory to switch a target read page is obtained. A total sector number is determined according to the first sector read time period, the second sector read time period, and the page switch time period. When the memory outputs data to the ping-pong buffer, a first buffer and a second buffer of the ping-pong buffer are switched to receive the data output by the memory according to the total sector number. | 07-21-2011 |
20110185149 | DATA DEDUPLICATION FOR STREAMING SEQUENTIAL DATA STORAGE APPLICATIONS - Data deduplication compression in a streaming storage application is provided. The disclosed deduplication process provides a deduplication archive that enables storage of the archive to, and extraction from, a streaming storage medium. One implementation involves compressing fully sequential data stored in a data repository to a sequential streaming storage, by: splitting fully sequential data into data blocks; hashing content of each data block and comparing each hash to an in-memory lookup table for a match, the in-memory lookup table storing all hashes that have been encountered during the compression of the fully sequential data; for each data block without a hash match, adding the data block as a new data block for compression of fully sequential data; and encoding duplicate data blocks using the in-memory lookup table into data segments. | 07-28-2011 |
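A hedged sketch of the hashing step described in that entry: each fixed-size block is hashed into an in-memory lookup table, and a block whose hash has already been seen is emitted as a duplicate reference instead of being stored again. The FNV-1a hash, toy block size, and single-slot table are assumptions; a real deduplicator would verify block content or use a collision-resistant hash before treating a match as a duplicate.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK   16          /* toy block size; real systems use KB-sized blocks */
#define TABLE   1024

struct seen { int used; uint64_t hash; long first_block; };
static struct seen table[TABLE];    /* in-memory lookup table of hashes seen so far */

static uint64_t fnv1a(const unsigned char *p, size_t n) {
    uint64_t h = 1469598103934665603ull;
    while (n--) { h ^= *p++; h *= 1099511628211ull; }
    return h;
}

/* Emit either a literal block or a back-reference to an earlier identical block. */
static void dedup_stream(const unsigned char *data, size_t len) {
    long nblocks = (long)(len / BLOCK);
    for (long b = 0; b < nblocks; b++) {
        uint64_t h = fnv1a(data + b * BLOCK, BLOCK);
        struct seen *s = &table[h % TABLE];
        if (s->used && s->hash == h) {
            printf("block %ld: duplicate of block %ld\n", b, s->first_block);
        } else {
            printf("block %ld: new data, stored literally\n", b);
            s->used = 1; s->hash = h; s->first_block = b;
        }
    }
}

int main(void) {
    unsigned char buf[64];
    memset(buf, 'A', 32);          /* two identical 16-byte blocks */
    memset(buf + 32, 'B', 32);     /* two more identical blocks    */
    dedup_stream(buf, sizeof buf);
    return 0;
}
```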
20110191566 | MEMORY CONTROLLER AND MEMORY CONTROL METHOD - According to one embodiment, a memory controller comprises a counter and a setting module. The counter is configured to count the number of valid pages in a block that includes a page to be invalidated, when data is written in a nonvolatile memory. The setting module is configured to set the block as an object of compaction when the number of valid pages counted by the counter is smaller than a predetermined number. | 08-04-2011 |
20110219205 | Distributed Data Storage System Providing De-duplication of Data Using Block Identifiers - An access request including a client address for data is received. A metadata server determines a mapping between the client address and storage unit identifiers for the data. Each of the one or more storage unit identifiers uniquely identifies content of a storage unit and the metadata server stores mappings on storage unit identifiers that are referenced by client addresses. The one or more storage unit identifiers are sent to one or more block servers. The one or more block servers service the request using the one or more storage unit identifiers where the one or more block servers store information on where a storage unit is stored on a block server for a storage unit identifier. Also, multiple client addresses associated with a storage unit with a same storage unit identifier are mapped to a single storage unit stored in a storage medium for a block server. | 09-08-2011 |
20110225388 | Data Storage Device And Computing System Including The Same - A data storage device includes a storage medium configured to store data; and a controller configured to control the storage medium, the controller including address mapping information. The controller is configured to divide the address mapping information into at least a first address mapping table and a second address mapping table based on information regarding temporary data received at the controller. The first address mapping table is configured to map one or more addresses of valid data and to be backed up to the storage medium. The second address mapping table is configured to map one or more addresses of the temporary data and to not be backed up to the storage medium. | 09-15-2011 |
20110225389 | Translation table control - Memory address translation circuitry | 09-15-2011 |
20110231629 | DATA PROCESSING APPARATUS AND DATA PROCESSING METHOD - A data processing apparatus includes: a slide storage unit sequentially storing input data; a search unit searching for a data string, stored in the slide storage unit, matched with an input data string including the input data that is continuously input; a length generation unit selecting one from the data string, obtaining a length, and generating a length value; an address value generation unit obtaining a position, in the slide storage unit, of start data in the data string and generating an address value; a translation unit translating a predetermined number of address values among address values having a high appearance frequency among address values generated by the address value generation unit into a translation address value having a value equal to or smaller than a predetermined value according to the appearance frequency of the address value; and an encoding unit encoding the length value and the translation address value. | 09-22-2011 |
20110271075 | SYSTEM ON CHIP INCLUDING UNIFIED INPUT/OUTPUT MEMORY MANAGEMENT UNIT - A system on chip includes a memory, a bus, a plurality of intellectual property (IP) blocks, and a unified input/output memory management unit (IOMMU) connected between the memory and the bus and configured to determine whether to perform address conversion for a transaction transferred from the bus based on transaction information. | 11-03-2011 |
20110283084 | DATA STORAGE DEVICES HAVING IP CAPABLE PARTITIONS - Apparatuses, methods, and systems related to IP-addressable partitions are disclosed. In some embodiments an IP address is used to uniquely identify a selected subset of partitions. Other embodiments may be described and claimed. | 11-17-2011 |
20110296135 | SYSTEM AND METHOD FOR FREEING MEMORY - There is provided a computer-executed method of freeing memory. One exemplary method comprises receiving a message from a user process. The message may specify a virtual address for a memory segment. The virtual address may be mapped to the memory segment. The memory segment may comprise a physical page. The method may further comprise identifying the physical page based on the virtual address. Additionally, the method may comprise freeing the physical page without unmapping the memory segment. | 12-01-2011 |
20110314250 | STORAGE SYSTEM AND OPERATION METHOD OF STORAGE SYSTEM - The present invention is able to improve the processing performance of a storage system by respectively virtualizing the external volumes and enabling the shared use of such external volumes by a plurality of available virtualization storage devices. By virtualizing and incorporating the external volume of an external storage device, a first virtualization storage device is able to provide the volume to a host as though it is an internal volume. When the load of the first virtualization storage device increases, a second virtualization storage device | 12-22-2011 |
20110320758 | TRANSLATION OF INPUT/OUTPUT ADDRESSES TO MEMORY ADDRESSES - An address provided in a request issued by an adapter is converted to an address directly usable in accessing system memory. The address includes a plurality of bits, in which the plurality of bits includes a first portion of bits and a second portion of bits. The second portion of bits is used to index into one or more levels of address translation tables to perform the conversion, while the first portion of bits is ignored for the conversion. The first portion of bits is used to validate the address. | 12-29-2011 |
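A small sketch, under assumed field widths, of treating one portion of an adapter-supplied address purely as a validation pattern while only the remaining bits index the translation tables. VALID_PATTERN, the bit positions, and split_io_address are hypothetical names, not the patent's layout.

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed layout: bits 63..48 carry a fixed validation pattern, bits 47..12
 * index the translation tables, bits 11..0 are the page offset.            */
#define VALID_PATTERN   0x00ABull
#define VALID_SHIFT     48
#define PAGE_SHIFT      12
#define INDEX_MASK      ((1ull << VALID_SHIFT) - 1)

static int split_io_address(uint64_t addr, uint64_t *index, uint64_t *offset) {
    if ((addr >> VALID_SHIFT) != VALID_PATTERN)
        return 0;                                 /* first portion fails validation */
    *index  = (addr & INDEX_MASK) >> PAGE_SHIFT;  /* only this part walks the tables */
    *offset = addr & ((1ull << PAGE_SHIFT) - 1);
    return 1;
}

int main(void) {
    uint64_t idx = 0, off = 0;
    uint64_t good = (VALID_PATTERN << VALID_SHIFT) | 0x12345abcull;
    uint64_t bad  = 0x7777000012345abcull;
    printf("good: %s (index 0x%llx, offset 0x%llx)\n",
           split_io_address(good, &idx, &off) ? "accepted" : "rejected",
           (unsigned long long)idx, (unsigned long long)off);
    printf("bad:  %s\n", split_io_address(bad, &idx, &off) ? "accepted" : "rejected");
    return 0;
}
```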
20110320759 | MULTIPLE ADDRESS SPACES PER ADAPTER - A plurality of address spaces are assigned to an adapter. To select a particular address space for the adapter, a requestor identifier and address space identifier provided in a request by the adapter are used. Each address space may have a different address translation mechanism associated therewith. | 12-29-2011 |
20110320760 | TRANSLATING REQUESTS BETWEEN FULL SPEED BUS AND SLOWER SPEED DEVICE - Methods and apparatus related to techniques for translating requests between a full speed bus and a slower speed device are described. In one embodiment, a translation logic translates requests between a full speed bus (such as a front side bus, e.g., running relatively higher frequencies, for example at MHz levels) and a much slower speed device (such as a System On Chip (SOC) device (or SOC Device Under Test (DUT)), e.g., logic provided through emulation, which may be running at much lower frequency, for example kHz levels). Other embodiments are also disclosed. | 12-29-2011 |
20110320761 | ADDRESS TRANSLATION, ADDRESS TRANSLATION UNIT DATA PROCESSING PROGRAM, AND COMPUTER PROGRAM PRODUCT FOR ADDRESS TRANSLATION - A lookup operation is performed in a translation lookaside buffer based on a first translation request as the current translation request, wherein a respective absolute address is returned to a corresponding requestor for the first translation request as the translation result in case of a hit. A translation engine is activated to perform at least one translation table fetch in case the current translation request does not hit an entry in the translation lookaside buffer, wherein the translation engine is idle waiting for the at least one translation table fetch to return data, reporting the idle state of the translation engine as a lookup-under-miss condition and accepting a currently pending translation request as a second translation request, wherein a lookup-under-miss sequence is performed in the translation lookaside buffer based on said second translation request. | 12-29-2011 |
20120005452 | Application Performance Acceleration - Responding to IO requests made by an application to an operating system within a computing device implements IO performance acceleration that interfaces with the logical and physical disk management components of the operating system and within that pathway provides a system memory based disk block cache. The logical disk management component of the operating system identifies logical disk addresses for IO requests sent from the application to the operating system. These addresses are translated to physical disk addresses that correspond to disk blocks available on a physical storage resource. The disk block cache stores cached disk blocks that correspond to the disk blocks available on the physical storage resource, such that IO requests may be fulfilled from the disk block cache. Provision of the disk block cache between the logical and physical disk management components accommodates tailoring of efficiency to any applications making IO requests, and flexible interaction with various different physical disks. | 01-05-2012 |
20120011341 | Load Page Table Entry Address Instruction Execution Based on an Address Translation Format Control Field - What is provided is a load page table entry address function defined for a machine architecture of a computer system. In one embodiment, a machine instruction is obtained which contains an opcode indicating that a load page table entry address function is to be performed. The machine instruction contains an M field, a first field identifying a first general register, and a second field identifying a second general register. Based on the contents of the M field, an initial origin address of a hierarchy of address translation tables having at least one segment table is obtained. Based on the obtained initial origin address, dynamic address translation is performed until a page table entry is obtained. The page table entry address is saved in the identified first general register. | 01-12-2012 |
20120017064 | INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM - An information processing apparatus is disclosed which is connected to a network and which includes: an address translation section configured such that when a virtual address assigned to a virtual storage area is held in an address translation module and associated therein with network node information designating the location of a storage portion connected to the network and with a physical address in the storage portion, the address translation section translates the virtual address into the network node information and the physical address based on the address translation module; and an access communication section configured such that based on the network node information and the physical address acquired by the address translation section, the access communication section accesses one of a plurality of storage areas held by the storage portion connected to the network, the accessed storage area being designated by the physical address. | 01-19-2012 |
20120023307 | METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR EXCLUDING AN ADDRESSABLE ENTITY FROM A TRANSLATION OF SOURCE CODE - Methods and systems are described for excluding an addressable entity from a translation of source code. A first translation is received that is translated from source code including a first addressable entity specified in a programming language and that includes a first translation of the first addressable entity. Excluding information is received that identifies the first translation of the first addressable entity as excludable from a second translation, of the source code, translated from the first translation. Translation configuration information is received for translating the first translation. In response to the translation configuration information being received, the first translation is translated into the second translation excluding, based on the excluding information, the first addressable entity. | 01-26-2012 |
20120047348 | VIRTUALIZATION WITH FORTUITOUSLY SIZED SHADOW PAGE TABLES - One or more embodiments provide a shadow page table used by virtualization software, wherein at least a portion of the shadow page table shares computer memory with a guest page table used by a guest operating system (OS), and wherein the virtualization software provides a mapping of guest OS physical pages to machine pages. | 02-23-2012 |
20120066473 | Memory Architecture with Policy Based Data Storage - A computing system and methods for memory management are presented. A memory or an I/O controller receives a write request where the data to be written is associated with an address. Hint information may be associated with the address and may relate to memory characteristics such as historical, O/S direction, data priority, job priority, job importance, job category, memory type, I/O sender ID, latency, power, write cost, or read cost components. The memory controller may interrogate the hint information to determine where (e.g., what memory type or class) to store the associated data. Data is therefore efficiently stored within the system. The hint information may also be used to track post-write information and may be interrogated to determine if a data migration should occur and to which new memory type or class the data should be moved. | 03-15-2012 |
20120079231 | DATA WRITING METHOD, MEMORY CONTROLLER, AND MEMORY STORAGE APPARATUS - A data writing method and a memory controller and a memory storage apparatus using the same are provided. The data writing method includes grouping a plurality of physical blocks into a plurality of physical units, grouping the physical units into at least a data area and a free area, and configuring a plurality of logical units for mapping to the physical units of the data area. The data writing method also includes getting a physical unit from the free area, writing data in at least one of the logical units into the gotten physical unit, and writing an end mark into the gotten physical unit, and in the gotten physical unit, the end mark follows the data belonging to the at least one logical unit. Thereby, the storage space of each physical unit can be effectively used, and the lifespan of the memory storage apparatus can be prolonged. | 03-29-2012 |
20120089808 | MULTIPROCESSOR USING A SHARED VIRTUAL MEMORY AND METHOD OF GENERATING A TRANSLATION TABLE - A multiprocessor using a shared virtual memory (SVM) is provided. The multiprocessor includes a plurality of processing cores and a memory manager configured to transform a virtual address into a physical address to allow a processing core to access a memory region corresponding to the physical address. | 04-12-2012 |
20120089809 | ACCESSING AN ENCODED DATA SLICE UTILIZING A MEMORY BIN - A method begins by a processing module receiving an encoded data slice to store and determining a slice length of the encoded data slice. The method continues with the processing module comparing the slice length to a plurality of bin widths, wherein each of the plurality of bin widths represents a fixed storage width of a plurality of memory bins within each of a plurality of memory containers, wherein a storage unit includes the plurality of memory containers. The method continues with the processing module selecting one of the plurality of memory containers based on the comparing to produce a selected memory container, identifying an available bin of the plurality of bins of the selected memory container, and storing the encoded data slice in the available bin. | 04-12-2012 |
20120102295 | DATA COMPRESSION AND ENCODING IN A MEMORY SYSTEM - Embodiments provide a method comprising receiving input data comprising a plurality of data sectors; compressing the plurality of data sectors to generate a corresponding plurality of compressed data sectors; splitting a compressed data sector of the plurality of compressed data sectors to generate a plurality of split compressed data sectors; and storing the plurality of compressed data sectors, including the plurality of split compressed data sectors, in a plurality of memory pages of a memory. | 04-26-2012 |
20120110297 | SECURE PARTITIONING WITH SHARED INPUT/OUTPUT - A soft partitioning system for allowing multiple virtual system environments to execute on a single platform may include I/O service partitions (IOSPs). The IOSPs operate in a separate virtual memory space on the platform and service disk and network requests from multiple guests. The IOSPs provide translation from virtual addresses to physical addresses such that from the point of view of the guest the virtual addresses used by the guest appear to be physical addresses. The IOSP may be implemented in a Linux kernel. The address space of the IOSP may be extended to include DMA memory sections such that the Linux kernel does not include all of the guest's memory. The IOSP may operate on hardware that does or does not support virtualization technology for directed I/O. | 05-03-2012 |
20120110298 | MEMORY ACCESS CONTROL DEVICE AND COMPUTER - A system is virtualized, without having to incorporate a special mechanism into software and with increases in overhead suppressed, by controlling the memory accesses made by processors using hardware. | 05-03-2012 |
20120110299 | SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE - A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. | 05-03-2012 |
20120117355 | Memory Management for a Dynamic Binary Translator - A dynamic binary translator apparatus, method and program for translating a first block of binary computer code intended for execution in a subject execution environment having a first memory of one page size into a second block for execution in a second execution environment having a second memory of another page size, comprising a redirection page mapper responsive to a page characteristic of the first memory for mapping an address of the first memory to an address of the second memory; a memory fault behaviour detector operable to detect memory faulting during execution of the second block and to accumulate a fault count to a trigger threshold; and a regeneration component responsive to the fault count reaching the trigger threshold to discard the second block and cause the first block to be retranslated with its memory references remapped by a page table walk. | 05-10-2012 |
20120124324 | METHOD AND APPARATUS FOR TRANSLATING MEMORY ACCESS ADDRESS - A memory access address translating apparatus and method may each classify pixels included in an input image into a plurality of tiles, and may generate a new memory for each of the successive tiles to enable the successive tiles, among a plurality of tiles, to be stored in different banks. | 05-17-2012 |
20120144152 | TRANSACTION LOG RECOVERY - The present disclosure includes methods for transaction log recovery in memory. One such method includes examining a number of entries saved in a transaction log to determine a write pattern, reading the memory based on the write pattern, updating the transaction log with information associated with data read from the memory based on the write pattern, and updating a logical address (LA) table using the transaction log. | 06-07-2012 |
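A minimal sketch of the recovery idea in the entry above: transaction log entries are replayed in order to rebuild a logical address (LA) table, so the most recent mapping for each logical address wins. The entry layout and table size are assumptions; the patent additionally infers a write pattern and reads the memory itself, which this sketch omits.

```c
#include <stdint.h>
#include <stdio.h>

#define LA_ENTRIES 16

struct log_entry { uint32_t logical; uint32_t physical; };

/* Logical-address table reconstructed from the log; 0xFFFFFFFF = unmapped. */
static uint32_t la_table[LA_ENTRIES];

static void rebuild_la_table(const struct log_entry *log, int count) {
    for (int i = 0; i < LA_ENTRIES; i++)
        la_table[i] = 0xFFFFFFFFu;
    /* Replay in order: later entries supersede earlier ones, so the table
     * ends up reflecting the most recent write for each logical address.  */
    for (int i = 0; i < count; i++)
        if (log[i].logical < LA_ENTRIES)
            la_table[log[i].logical] = log[i].physical;
}

int main(void) {
    struct log_entry log[] = {
        { 2, 100 }, { 5, 101 }, { 2, 102 },   /* LA 2 rewritten at PA 102 */
    };
    rebuild_la_table(log, 3);
    for (int i = 0; i < LA_ENTRIES; i++)
        if (la_table[i] != 0xFFFFFFFFu)
            printf("LA %d -> PA %u\n", i, la_table[i]);
    return 0;
}
```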
20120151178 | ADDRESS TRANSLATION TABLE TO ENABLE ACCESS TO VIRTUAL FUNCTIONS - In response to detecting a PCI host bridge (PHB), a first address translation table may be allocated in a first portion of a memory. The first address translation table may be associated with the PHB. If an input/output adapter accessible to the PHB is configured as a virtualized adapter, a first table manager may be assigned to manage the first address translation table. The first address translation table may be configured for an initial number of virtual functions. If a requested number of virtual functions is greater than the initial number of virtual functions, additional virtual functions may be configured. A second address translation table may be allocated in a second portion of the memory. The second portion of the memory may be non-contiguous with reference to the first portion of the memory. Entries may be created in the second address translation table for the additional virtual functions. | 06-14-2012 |
20120151179 | MEMORY STACKS MANAGEMENT - A method for managing a memory stack provides mapping a part of the memory stack to a span of fast memory and a part of the memory stack to a span of slow memory, wherein the fast memory provides access speed substantially higher than the access speed provided by the slow memory. | 06-14-2012 |
20120159116 | APPARATUS FOR PROCESSING REMOTE PAGE FAULT AND METHOD THEREOF - Disclosed is an apparatus for processing a remote page fault included in an optional local node within a cluster system configuring a large integration memory (CVM) by integrating individual memories of a plurality of nodes. The apparatus includes a memory including a CVM-map, a node memory information table, a virtual memory area, and a CVM page table, and a main controller mapping the large integration memory to an address space of a process when a user process requests memory allocation. | 06-21-2012 |
20120166757 | RETRIEVING DATA SEGMENTS FROM A DISPERSED STORAGE NETWORK - A method begins by a processing module receiving a file retrieval request for a file, wherein the file includes one or more data regions, and wherein a data region of the one or more data regions is divided into a plurality of data segments and stored as a plurality of sets of encoded data slices in a dispersed storage network (DSN) memory. The method continues with the processing module retrieving a segment allocation table (SAT), wherein a SAT entry of a plurality of SAT entries includes information regarding storing the data region in the DSN memory and a segmentation scheme regarding the dividing of the data region into the plurality of data segments. The method continues with the processing module identifying the plurality of sets of encoded data slices and retrieving at least a sufficient number of the plurality of sets of encoded data slices to regenerate the data region. | 06-28-2012 |
20120166758 | Executing a Perform Frame Management Instruction - What is disclosed is a frame management function defined for a machine architecture of a computer system. In one embodiment, a frame management instruction is obtained which identifies a first and second general register. The first general register contains a frame management field having a key field with access-protection bits and a block-size indication. If the block-size indication indicates a large block then an operand address of a large block of data is obtained from the second general register. The large block of data has a plurality of small blocks each of which is associated with a corresponding storage key having a plurality of storage key access-protection bits. If the block size indication indicates a large block, the storage key access-protection bits of each corresponding storage key of each small block within the large block is set with the access-protection bits of the key field. | 06-28-2012 |
20120166759 | ROBUST INDEX STORAGE FOR NON-VOLATILE MEMORY - A non-volatile memory data address translation scheme is described that utilizes a hierarchical address translation system that is stored in the non-volatile memory itself. Embodiments of the present invention utilize a hierarchical address translation data system wherein the address translation data entries are stored in one or more data structures/tables in the hierarchy, one or more of which can be updated in-place multiple times without having to overwrite data. This hierarchical address translation data structure and multiple update of data entries in the individual tables/data structures allow the hierarchical address translation data structure to be efficiently stored in a non-volatile memory array without markedly inducing write fatigue or adversely affecting the lifetime of the part. The hierarchical address translation of embodiments of the present invention also allows for an address translation layer that does not have to be resident in system RAM for operation. | 06-28-2012 |
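A sketch, assuming a two-level hierarchy held in ordinary RAM for clarity, of why a hierarchical translation structure localizes updates: changing one mapping touches only a single leaf table rather than one monolithic table. In the patent the hierarchy lives in the non-volatile array itself and individual tables can be updated in place; the sizes and names (map_page, lookup_page) are illustrative.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define L1_ENTRIES  16          /* root table entries          */
#define L2_ENTRIES  64          /* translations per leaf table */
#define UNMAPPED    0xFFFFFFFFu

struct leaf { uint32_t phys[L2_ENTRIES]; };

/* Root of the hierarchy: each slot points at one leaf translation table. */
static struct leaf *root[L1_ENTRIES];

static void map_page(uint32_t lpn, uint32_t ppn) {
    uint32_t hi = lpn / L2_ENTRIES, lo = lpn % L2_ENTRIES;
    if (!root[hi]) {                       /* allocate leaf on first use  */
        root[hi] = malloc(sizeof *root[hi]);
        for (int i = 0; i < L2_ENTRIES; i++)
            root[hi]->phys[i] = UNMAPPED;
    }
    root[hi]->phys[lo] = ppn;              /* only this leaf is touched   */
}

static uint32_t lookup_page(uint32_t lpn) {
    uint32_t hi = lpn / L2_ENTRIES, lo = lpn % L2_ENTRIES;
    return root[hi] ? root[hi]->phys[lo] : UNMAPPED;
}

int main(void) {
    map_page(5, 1000);
    map_page(70, 2000);                    /* lands in a different leaf   */
    printf("LPN 5  -> %u\n", lookup_page(5));
    printf("LPN 70 -> %u\n", lookup_page(70));
    printf("LPN 9  -> %s\n", lookup_page(9) == UNMAPPED ? "unmapped" : "?");
    return 0;
}
```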
20120198205 | TRANSACTIONAL MEMORY - Subject matter disclosed herein relates to techniques to perform transactions using a memory device. | 08-02-2012 |
20120198206 | APPARATUS AND METHOD FOR PROTECTING MEMORY IN MULTI-PROCESSOR SYSTEM - Memory mapping in small units using a segment and subsegments is described, and thus it is possible to control a memory access even using a small amount of hardware, and it is possible to reduce costs incurred by hardware. Additionally, it is possible to prevent a memory from being destroyed due to a task error in the multi-processor system. | 08-02-2012 |
20120210094 | Data Communications In A Parallel Active Messaging Interface Of A Parallel Computer - Eager send data communications in a parallel active messaging interface (PAMI) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address. | 08-16-2012 |
20120210095 | APPARATUS, SYSTEM, AND METHOD FOR APPLICATION DIRECT VIRTUAL MEMORY MANAGEMENT - An apparatus, system, and method for application direct virtual memory management. The method includes detecting a system memory access to a virtual memory address within a monitored page of data not loaded in main memory of a computing system. The method includes determining a first swap address for a loaded page of data in the main memory. The first swap address is defined in a sparse virtual address space exposed by a persistent storage device. The first swap address is associated in an index with a first deterministic storage location. The index is managed by the persistent storage device. The method includes storing the loaded page on a persistent storage device at the first deterministic storage location. The method includes moving the monitored page from a second deterministic storage location to the main memory. The second deterministic storage location is associated with a second swap address in the index. | 08-16-2012 |
20120216009 | SOURCE-TARGET RELATIONS MAPPING - A data preservation function is provided which, in one embodiment, includes mapping, in a plurality of maps for a target storage device, map extent ranges of each map to corresponding target extent ranges of storage locations on the target storage device. Usage of a particular map extent range by a relationship between a source extent range of storage locations on a source storage device containing data to be preserved in the source extent range, and the target extent range mapped to the particular map extent range, may be indicated by the map. In another aspect, in response to receipt of a data preservation command, a data preservation operation is performed including determining whether a map indicates availability of a map extent range mapped to the identified target extent range. Upon determining that a particular map indicates availability of a map extent range mapped to the identified target extent range, a relationship between the identified source extent range and the identified target extent range is established. In yet another aspect, upon determining that no map indicates availability of a map extent range mapped to the identified target extent range, establishing of a relationship between the identified source extent range and the identified target extent range may be delayed until it is determined that a particular map indicates availability of a map extent range mapped to the identified target extent range. Other features and aspects may be realized, depending upon the particular application. | 08-23-2012 |
20120221828 | RETRIEVING DATA IN A STORAGE SYSTEM USING THIN PROVISIONING - The invention relates to retrieving data from a storage system. One embodiment of the invention comprises receiving a write operation, establishing a correspondence relationship between a logic block address and a physical block address of the write operation, and determining whether a valid data percentage in a mapping table is greater than a predetermined threshold after the correspondence relationship is added in stored metadata. In response to the valid data percentage being less than the predetermined threshold, the embodiment adds the correspondence relationship to a B-tree data structure of stored metadata. | 08-30-2012 |
20120233438 | PAGEFILE RESERVATIONS - A system and method for maintaining a pagefile of a computer system using a technique of reserving portions of the pagefile for related memory pages. Pages near one another in a virtual memory space often store related information and it is therefore beneficial to ensure that they are stored near each other in the pagefile. This increases the speed of reading data out of the pagefile because total seek time of a disk drive that stores the pagefile may decrease when adjacent pages in a virtual memory address space are read back from the disk drive. By implementing a reservation system that allows related pages to be stored adjacent to one another, the efficiency of memory management of the computer system is increased. | 09-13-2012 |
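A minimal sketch of the reservation idea: a run of adjacent virtual pages is given a run of contiguous pagefile slots, so the pages can later be read back as one sequential block instead of several seeks. The slot bitmap and reserve_run are assumptions, not the described pagefile manager.

```c
#include <stdio.h>

#define PAGEFILE_SLOTS 64

static int slot_used[PAGEFILE_SLOTS];

/* Reserve 'count' contiguous slots so adjacent virtual pages stay adjacent
 * in the pagefile; returns the first slot or -1 if no run is available.   */
static int reserve_run(int count) {
    for (int start = 0; start + count <= PAGEFILE_SLOTS; start++) {
        int ok = 1;
        for (int i = 0; i < count; i++)
            if (slot_used[start + i]) { ok = 0; break; }
        if (ok) {
            for (int i = 0; i < count; i++)
                slot_used[start + i] = 1;
            return start;
        }
    }
    return -1;
}

int main(void) {
    /* Eight neighbouring virtual pages get eight neighbouring slots, so a
     * later read-back is one sequential I/O instead of eight seeks.       */
    int first = reserve_run(8);
    printf("reserved pagefile slots %d..%d\n", first, first + 7);
    return 0;
}
```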
20120233439 | Implementing TLB Synchronization for Systems with Shared Virtual Memory Between Processing Devices - Page faults arising in a graphics processing unit may be handled by an operating system running on the central processing unit. In some embodiments, this means that unpinned memory can be used for the graphics processing unit. Using unpinned memory in the graphics processing unit may expand the capabilities of the graphics processing unit in some cases. | 09-13-2012 |
20120254582 | TECHNIQUES AND MECHANISMS FOR LIVE MIGRATION OF PAGES PINNED FOR DMA - Techniques for migrating data from a first range of physical memory locations to a second range of physical memory locations. The second range of physical memory locations is allocated for migration of data from the first range of physical memory locations. Pending transactions for the first range of physical memory locations are flushed. One or more address translation entries are reprogrammed. Data is migrated from the first range of physical memory locations to the second range of physical memory locations. Subsequent memory transactions are processed to cause the transactions to be directed to the second range of physical memory locations. | 10-04-2012 |
20120254583 | STORAGE CONTROL SYSTEM PROVIDING VIRTUAL LOGICAL VOLUMES COMPLYING WITH THIN PROVISIONING - A storage control system comprises a storage resource and a controller. If the controller receives a write command which specifies an address belonging to a virtual volume, unassigned pages are assigned to the area belonging to the address from a pool, which is a storage area that is based on a plurality of media types of physical storage media and that is divided into a plurality of pages. The storage control system comprises a plurality of management entries which are a plurality of entries for page management information which is the information related to the pages. The plurality of management entries include two or more page entries and two or more statistical entries. A statistical entry about a certain media type corresponds to M pages based on the storage media of the certain media type (M is an integral number which is 2 or larger). The statistical entry corresponding to the M pages comprises the information related to the I/O performance value for those M pages. | 10-04-2012 |
20120260059 | STATE TRANSITION MANAGEMENT DEVICE AND STATE TRANSITION MANAGEMENT METHOD THEREOF - A state transition management device includes a first terminal receiving a first signal based on a current state-number, a memory which stores a state transition rule and from which a plurality of subsequent state-number candidates are read out in accordance with the first signal, a plurality of first nodes revealing the plurality of subsequent state-number candidates, a second terminal receiving a second signal based on the current state-number, a selection method specifying unit which outputs a selection method specifying signal in accordance with the second signal, a second node revealing the selection method specifying signal, an event terminal receiving an event-signal based on an event, a third terminal receiving a third signal based on the current state-number, and a selection circuit which selects a subsequent state-number from the plurality of subsequent state-number candidates in accordance with the event-signal and the third signal. | 10-11-2012 |
20120272038 | LOGICAL BLOCK ADDRESS MAPPING - A mapping table is modified to match one or more specified storage conditions of data stored in or expected to be stored in one or more logical block address ranges to physical addresses within a storage drive having performance characteristics that satisfy the specified storage conditions. For example, the performance characteristics may be a reliability of the physical location within the storage drive or a data throughput range of read/write operations. Existing data is moved and/or new data is written to physical addresses on the storage media possessing the performance characteristic(s), according to the mapping table. Further, a standard seeding or a seeding override for the re-mapped logical block addresses can prevent read operations from inadvertently reading incorrect physical addresses corresponding to the re-mapped logical block addresses. | 10-25-2012 |
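A hedged sketch of matching a logical block address range to a physical zone whose performance characteristic satisfies the specified storage condition (throughput versus reliability here). The zone table, trait names, and map_range are assumed for illustration and are not the patent's mapping-table format.

```c
#include <stdint.h>
#include <stdio.h>

enum trait { FAST_THROUGHPUT, HIGH_RELIABILITY };

struct zone { const char *name; enum trait trait; uint64_t base; uint64_t len; };

/* Assumed physical zones with different performance characteristics. */
static struct zone zones[] = {
    { "outer tracks", FAST_THROUGHPUT,        0, 1u << 20 },
    { "inner tracks", HIGH_RELIABILITY, 1u << 20, 1u << 20 },
};

struct mapping { uint64_t lba_start, lba_len, phys_start; };

/* Place an LBA range into the first zone whose trait matches the request. */
static int map_range(uint64_t lba, uint64_t len, enum trait want, struct mapping *m) {
    for (size_t i = 0; i < sizeof zones / sizeof zones[0]; i++)
        if (zones[i].trait == want && len <= zones[i].len) {
            m->lba_start = lba; m->lba_len = len;
            m->phys_start = zones[i].base;
            printf("LBA %llu..%llu -> %s\n",
                   (unsigned long long)lba, (unsigned long long)(lba + len - 1),
                   zones[i].name);
            return 1;
        }
    return 0;
}

int main(void) {
    struct mapping m;
    map_range(4096, 1024, FAST_THROUGHPUT, &m);   /* streaming data    */
    map_range(9000,  128, HIGH_RELIABILITY, &m);  /* critical metadata */
    return 0;
}
```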
20120278588 | HARDWARE ASSISTANCE FOR PAGE TABLE COHERENCE WITH GUEST PAGE MAPPINGS - Some embodiments of the present invention include a memory management unit (MMU) configured to, in response to a write access targeting a guest page mapping of a guest virtual page number (GVPN) to a guest physical page number (GPPN) within a guest page table, identify a first page mapping that associates the GVPN with a physical page number (PPN). The MMU is also configured to determine whether a traced write indication is associated with the first page mapping and, if so, record update information identifying the targeted guest page mapping. The update information is used to reestablish coherence between the guest page mapping and the first page mapping. The MMU is further configured to perform the write access. | 11-01-2012 |
20120284486 | CONTROL OF ON-DIE SYSTEM FABRIC BLOCKS - Methods and apparatus for control of On-Die System Fabric (OSF) blocks are described. In one embodiment, a shadow address corresponding to a physical address may be stored in response to a user-level request and a logic circuitry (e.g., present in an OSF) may determine the physical address from the shadow address. Other embodiments are also disclosed. | 11-08-2012 |
20120290813 | MEMORY SYSTEM AND OPERATING METHOD THEREOF - A memory system includes a memory including a page table, and an input/output memory management unit (I/O MMU) connected to the memory, and configured to receive a virtual address from an I/O Device and to search within the I/O MMU for a plurality of entries matching the virtual address. If no entries matching the virtual address are found within the I/O MMU as a result of searching for the entries, the I/O MMU accesses the memory, searches the page table for the entries matching the virtual address, and stores the entries within the I/O MMU. | 11-15-2012 |
20120331260 | IMPLEMENTING DMA MIGRATION OF LARGE SYSTEM MEMORY AREAS - A method, system and computer program product are provided for implementing memory migration of large system memory pages in a computer system. A large page to be migrated from a current location to a target location is converted into a plurality of smaller subpages for a processor or system page table. The migrated page is divided into first, second and third segments, each segment composed of the smaller subpages and each respective segment changes as each individual subpage is migrated. CPU and I/O accesses to respective subpages of the first segment are directed to corresponding subpages of the target page or new page. I/O accesses to respective subpages of the second segment use a dual write mode targeting corresponding subpages of both the current page and the target page. CPU and I/O accesses to the subpages of the third segment access the corresponding subpages of the current page. | 12-27-2012 |
20120331261 | Point-in-Time Copying of Virtual Storage - A method includes making in a real storage, a copy of a first page content stored in a first page data structure by creating a second page content in a second data structure, the second page content pointing to actual data pointed to by the first page content, storing the second page content in the second data structure, marking the first page content in the first page data structure with a page protection bit, wherein the page protection bit prevents a modification of the virtual page, in response to an attempt to modify the virtual page, copying the virtual page in the event the first page content in the first page data structure is marked with the page protection bit, storing the copied virtual page in a second virtual storage, and altering the second page content in the second data structure to point to the stored virtual page. | 12-27-2012 |
20120331262 | PERFORMING MEMORY ACCESSES WHILE OMITTING UNNECESSARY ADDRESS TRANSLATIONS - In computing environments that use virtual addresses (or other indirectly usable addresses) to access memory, the virtual addresses are translated to absolute addresses (or other directly usable addresses) prior to accessing memory. To facilitate memory access, however, address translation is omitted in certain circumstances, including when the data to be accessed is within the same unit of memory as the instruction accessing the data. In this case, the absolute address of the data is derived from the absolute address of the instruction, thus avoiding address translation for the data. Further, in some circumstances, access checking for the data is also omitted. | 12-27-2012 |
20120331263 | METHOD FOR MANAGING A MEMORY APPARATUS - A method for managing a memory apparatus including at least one non-volatile (NV) memory element includes: building at least one local page address linking table containing a page address linking relationship between a plurality of physical page addresses and at least a logical page address, wherein the local page address linking table includes a first local page address linking table containing a first page address linking relationship of a plurality of first physical pages, and a second local page address linking table containing a second page address linking relationship of a plurality of second physical pages that are different from the first physical pages; building a global page address linking table according to the local page address linking table; and accessing the memory apparatus according to the global page address linking table. | 12-27-2012 |
20130007405 | TRANSLATION CACHE PREDICTION - Techniques for client-side translation cache prediction are provided. The techniques include obtaining metadata associated with a request, applying a cache prediction model to the metadata to automatically predict one or more translations associated with the request, and storing the one or more translations in a client translation cache. | 01-03-2013 |
20130013888 | Method and Apparatus For Index-Based Virtual Addressing - An apparatus comprising a memory configured to store a routing table and a processor coupled to the memory, the processor configured to generate a request to access at least a section of an instance, assign an index to the request based on the instance, lookup an entry in the routing table based on the index, wherein the entry comprises a resource bit vector, and identify a resource comprising at least part of the section of the instance based on the resource bit vector. | 01-10-2013 |
20130019080 | DYNAMIC SIZING OF TRANSLATION LOOKASIDE BUFFER FOR POWER REDUCTION - Methods and mechanisms for operating a translation lookaside buffer (TLB). A translation lookaside buffer (TLB) includes a plurality of segments, each segment including one or more entries. A control unit is coupled to the TLB. The control unit is configured to determine utilization of segments, and dynamically disable segments in response to determining that segments are under-utilized. The control unit is also configured to dynamically enable segments responsive to determining a given number of segments are over-utilized. | 01-17-2013 |
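A sketch, with assumed thresholds and a four-segment TLB, of the utilization-driven policy described above: per-segment hit counters are sampled each interval, under-utilized segments are disabled to save power, and segments are re-enabled when an active segment becomes over-utilized.

```c
#include <stdio.h>

#define SEGMENTS        4
#define LOW_THRESHOLD   10     /* hits per interval below which a segment is idle     */
#define HIGH_THRESHOLD  90     /* hits per interval above which more capacity is needed */

static int seg_enabled[SEGMENTS] = { 1, 1, 1, 1 };
static int seg_hits[SEGMENTS];

/* Called at the end of each sampling interval. */
static void resize_tlb(void) {
    int pressure = 0;
    for (int s = 0; s < SEGMENTS; s++)
        if (seg_enabled[s] && seg_hits[s] > HIGH_THRESHOLD)
            pressure = 1;

    for (int s = 0; s < SEGMENTS; s++) {
        if (seg_enabled[s] && seg_hits[s] < LOW_THRESHOLD && !pressure) {
            seg_enabled[s] = 0;                /* power down an idle segment */
            printf("segment %d disabled\n", s);
        } else if (!seg_enabled[s] && pressure) {
            seg_enabled[s] = 1;                /* bring capacity back online */
            printf("segment %d re-enabled\n", s);
        }
        seg_hits[s] = 0;                       /* start the next interval    */
    }
}

int main(void) {
    /* Interval 1: light load, segments 1 and 3 are nearly idle. */
    seg_hits[0] = 40; seg_hits[1] = 3; seg_hits[2] = 35; seg_hits[3] = 2;
    resize_tlb();                       /* disables segments 1 and 3 */

    /* Interval 2: segment 0 is saturated, so capacity is restored. */
    seg_hits[0] = 150; seg_hits[2] = 60;
    resize_tlb();                       /* re-enables segments 1 and 3 */
    return 0;
}
```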
20130024645 | STRUCTURED MEMORY COPROCESSOR - Intercepting a requested memory operation corresponding to a conventional memory is disclosed. The requested memory operation is translated to be applied to a structured memory. | 01-24-2013 |
20130024646 | Method and Simulator for Simulating Multiprocessor Architecture Remote Memory Access - A method for simulating remote memory access in a target machine on a host machine is disclosed. Multiple virtual memory spaces in the host machine are divided and a virtual address space of each target application process is set to one virtual memory space that corresponds to a target application process and is in the multiple virtual memory spaces. Access by the target application process to any virtual memory space, among the multiple virtual memory spaces, other than the virtual memory space corresponding to that target application process is captured. | 01-24-2013 |
20130031329 | INTEGRATED CIRCUIT AND SEMICONDUCTOR MEMORY DEVICE USING THE SAME - An integrated circuit includes a random address generation unit configured to generate a first random address for a data randomizing operation, an address conversion unit configured to convert the first random address and generate a second random address, and a synchronization output unit configured to sequentially output the first and second random addresses in synchronization with a clock signal. | 01-31-2013 |
20130031330 | ARRANGEMENT AND METHOD - A first arrangement including a first interface configured to receive a memory transaction having an address from a second arrangement; a second interface; an address translator configured to determine based on said address if said transaction is for said first arrangement and if so to translate said address or if said transaction is for a third arrangement to forward said transaction without modification to said address to said second interface, said second interface being configured to transmit said transaction, without modification to said address, to said third arrangement. | 01-31-2013 |
20130031331 | HIERARCHICAL IMMUTABLE CONTENT-ADDRESSABLE MEMORY COPROCESSOR - Intercepting a requested memory operation corresponding to a conventional memory is disclosed. The requested memory operation is translated to be applied to a structured memory. | 01-31-2013 |
20130046953 | System And Method For Storing Data In A Virtualized High Speed Memory System With An Integrated Memory Mapping Table - A system and method for providing high-speed memory operations is disclosed. The technique uses virtualization of memory space to map a virtual address space to a larger physical address space wherein no memory bank conflicts will occur. The larger physical address space is used to prevent memory bank conflicts from occurring by moving the virtualized memory addresses of data being written to memory to a different location in physical memory that will eliminate a memory bank conflict. A changeable mapping table that maps the virtualized memory addresses to physical memory addresses is stored in the same memory system. | 02-21-2013 |
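A simplified sketch of conflict avoidance through remapping in the spirit of the entry above: a write aimed at a bank that is busy with a concurrent access is redirected to a spare bank, and the virtual-to-physical mapping table is updated so later reads find the moved data. Handling of the data previously occupying the spare slot, and the multi-ported table hardware, are omitted; sizes and names are assumptions.

```c
#include <stdio.h>

#define BANKS        4            /* banks visible to the virtual address space */
#define SPARE_BANK   4            /* one extra physical bank absorbs conflicts  */
#define ROWS         8

/* mapping[v] = physical (bank, row) currently holding virtual address v. */
struct loc { int bank, row; };
static struct loc mapping[BANKS * ROWS];

static void init_mapping(void) {
    for (int v = 0; v < BANKS * ROWS; v++)
        mapping[v] = (struct loc){ v % BANKS, v / BANKS };
}

/* Write 'v' while a concurrent read occupies 'busy_bank': if the write would
 * collide, redirect it to the spare bank and update the mapping table.      */
static void write_avoiding_conflict(int v, int busy_bank) {
    if (mapping[v].bank == busy_bank) {
        mapping[v].bank = SPARE_BANK;          /* data physically moves        */
        printf("virtual %d redirected to spare bank (row %d)\n", v, mapping[v].row);
    } else {
        printf("virtual %d written in place (bank %d)\n", v, mapping[v].bank);
    }
}

int main(void) {
    init_mapping();
    write_avoiding_conflict(5, 1);   /* virtual 5 maps to bank 1 -> conflict */
    write_avoiding_conflict(6, 1);   /* bank 2 -> no conflict                */
    return 0;
}
```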
20130067193 | NETWORK INTERFACE CONTROLLER WITH FLEXIBLE MEMORY HANDLING - An input/output (I/O) device includes a host interface for connection to a host device having a memory, and a network interface, which is configured to transmit and receive, over a network, data packets associated with I/O operations directed to specified virtual addresses in the memory. Processing circuitry is configured to translate the virtual addresses into physical addresses using memory keys provided in conjunction with the I/O operations and to perform the I/O operations by accessing the physical addresses in the memory. At least one of the memory keys is an indirect memory key, which points to multiple direct memory keys, corresponding to multiple respective ranges of the virtual addresses, such that an I/O operation referencing the indirect memory key can cause the processing circuitry to access the memory in at least two of the multiple respective ranges. | 03-14-2013 |
20130067194 | TRANSLATION OF INPUT/OUTPUT ADDRESSES TO MEMORY ADDRESSES - An address provided in a request issued by an adapter is converted to an address directly usable in accessing system memory. The address includes a plurality of bits, in which the plurality of bits includes a first portion of bits and a second portion of bits. The second portion of bits is used to index into one or more levels of address translation tables to perform the conversion, while the first portion of bits is ignored for the conversion. The first portion of bits is used to validate the address. | 03-14-2013 |
20130080731 | METHOD AND APPARATUS FOR PERFORMING MEMORY MANAGEMENT - A method for performing memory management is provided, where the method is applied to an electronic device. The method includes: managing a plurality of physical blocks of at least one non-volatile (NV) memory according to a block address translation rule, the block address translation rule allowing both one-to-multiple block address translation and multiple-to-one block address translation; and when it is detected that erasing a specific logical block represented by a specific block logical address is required, determining a set of block physical addresses corresponding to the specific block logical address according to the block address translation rule and erasing a set of physical blocks represented by the set of block physical addresses within the plurality of physical blocks. An associated apparatus is also provided. | 03-28-2013 |
20130080732 | APPARATUS, SYSTEM, AND METHOD FOR AN ADDRESS TRANSLATION LAYER - An apparatus, system, and method are disclosed for storage address translation. The method includes storing, in volatile memory, a plurality of logical-to-physical mapping entries for a non-volatile recording device. The method includes persisting a logical-to-physical mapping entry from the volatile memory to recording media of the non-volatile recording device. The logical-to-physical mapping entry may be selected for persisting based on a mapping policy indicated by a client. The method includes loading the logical-to-physical mapping entry from the recording media of the non-volatile recording device into the volatile memory in response to a storage request associated with the logical-to-physical mapping entry. | 03-28-2013 |
20130086353 | VARIABLE LENGTH ENCODING IN A STORAGE SYSTEM - A system and method for maintaining a mapping table in a data storage subsystem. A data storage subsystem supports multiple mapping tables including a plurality of entries. Each of the entries comprise a tuple including a key. A data storage controller is configured to encode each tuple in the mapping table using a variable length encoding. Additionally, the mapping table may be organized as a plurality of time ordered levels, with each level including one or more mapping table entries. Further, a particular encoding of a plurality of encodings for a given tuple may be selected based at least in part on a size of the given tuple as unencoded, a size of the given tuple as encoded, and a time to encode the given tuple. | 04-04-2013 |
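A sketch of one plausible variable-length encoding for the key of a mapping-table tuple, using an LEB128-style varint so small keys occupy fewer bytes. The patent selects among several encodings per tuple based on size and encoding time, which this single-encoding example does not attempt.

```c
#include <stdint.h>
#include <stdio.h>

/* LEB128-style varint: 7 payload bits per byte, high bit marks "more follows". */
static size_t encode_varint(uint64_t value, uint8_t *out) {
    size_t n = 0;
    do {
        uint8_t byte = value & 0x7F;
        value >>= 7;
        out[n++] = byte | (value ? 0x80 : 0);
    } while (value);
    return n;
}

static uint64_t decode_varint(const uint8_t *in, size_t *consumed) {
    uint64_t value = 0;
    int shift = 0;
    size_t n = 0;
    uint8_t byte;
    do {
        byte = in[n++];
        value |= (uint64_t)(byte & 0x7F) << shift;
        shift += 7;
    } while (byte & 0x80);
    *consumed = n;
    return value;
}

int main(void) {
    uint8_t buf[10];
    uint64_t keys[] = { 7, 300, 1ull << 40 };
    for (int i = 0; i < 3; i++) {
        size_t len = encode_varint(keys[i], buf), used;
        uint64_t back = decode_varint(buf, &used);
        printf("key %llu -> %zu byte(s), decodes to %llu\n",
               (unsigned long long)keys[i], len, (unsigned long long)back);
    }
    return 0;
}
```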
20130097403 | Address Mapping in Memory Systems - A memory system includes an address mapping circuit. The address mapping circuit receives an input memory address having a first set of address bits. The address mapping circuit applies a logic function to the input memory address to generate a mapped memory address. The logic function uses at least a subset of the first set of address bits in two separate operations that respectively determine two portions of the mapped memory address. | 04-18-2013 |
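A sketch of a logic-function address map in the spirit of the entry above: a subset of the upper address bits is used twice, once as the row and once XORed into the bank field, so a row-sized stride rotates through the banks instead of repeatedly hitting one. The bit positions and bank count are assumptions, not the patent's circuit.

```c
#include <stdint.h>
#include <stdio.h>

#define BANK_BITS   2            /* 4 banks                            */
#define BANK_SHIFT  13           /* assumed position of the bank field */

/* Reuse a subset of the row bits in two separate operations: they form the
 * row itself and are also XORed into the bank field of the mapped address. */
static uint64_t map_address(uint64_t addr) {
    uint64_t bank_mask = ((uint64_t)((1u << BANK_BITS) - 1)) << BANK_SHIFT;
    uint64_t bank = (addr >> BANK_SHIFT) & ((1u << BANK_BITS) - 1);
    uint64_t row  =  addr >> (BANK_SHIFT + BANK_BITS);
    uint64_t swizzled_bank = bank ^ (row & ((1u << BANK_BITS) - 1));
    return (addr & ~bank_mask) | (swizzled_bank << BANK_SHIFT);
}

int main(void) {
    /* A stride that would hit the same bank repeatedly under the identity
     * mapping now rotates through all four banks.                          */
    for (int i = 0; i < 4; i++) {
        uint64_t a = (uint64_t)i << (BANK_SHIFT + BANK_BITS);   /* stride = row size */
        uint64_t m = map_address(a);
        printf("addr 0x%06llx -> bank %llu\n",
               (unsigned long long)a,
               (unsigned long long)((m >> BANK_SHIFT) & 3));
    }
    return 0;
}
```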
20130097404 | DATA COMMUNICATIONS IN A PARALLEL ACTIVE MESSAGING INTERFACE OF A PARALLEL COMPUTER - Eager send data communications in a parallel active messaging interface (‘PAMI’) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address. | 04-18-2013 |
20130103922 | METHOD, COMPUTER PROGRAM PRODUCT AND APPARATUS FOR ACCELERATING RESPONSES TO REQUESTS FOR TRANSACTIONS INVOLVING DATA OPERATIONS - Responding to IO requests made by an application to an operating system within a computing device implements IO performance acceleration that interfaces with the logical and physical disk management components of the operating system and within that pathway provides a system memory based disk block cache. The logical disk management component of the operating system identifies logical disk addresses for IO requests sent from the application to the operating system. These addresses are translated to physical disk addresses that correspond to disk blocks available on a physical storage resource. The disk block cache stores cached disk blocks that correspond to the disk blocks available on the physical storage resource, such that IO requests may be fulfilled from the disk block cache. Provision of the disk block cache between the logical and physical disk management components accommodates tailoring of efficiency to any applications making IO requests, and flexible interaction with various different physical disks. | 04-25-2013 |
20130111183 | ADDRESS TRANSLATION APPARATUS, ADDRESS TRANSLATION METHOD, AND CALCULATION APPARATUS | 05-02-2013 |
20130117530 | APPARATUS FOR TRANSLATING VIRTUAL ADDRESS SPACE - The apparatus includes a virtual address space generation unit generating a virtual address space of a guest operating system, the guest operating system being executed in the virtual address space, and a virtual address space of a virtual machine monitor, the virtual machine monitor being executed in the virtual address space; a gateway page generation unit generating a gateway page allocated to a predetermined region of an actual memory region and mapped to the virtual address space of the guest operating system and the virtual address space of the virtual machine monitor; and a memory management unit executing the gateway page to map a kernel region of the guest operating system to the predetermined region of the virtual address space of the virtual machine monitor to perform translation between the virtual address space of the guest operating system and the virtual address space of the virtual machine monitor. | 05-09-2013 |
20130124821 | METHOD OF MANAGING COMPUTER MEMORY, CORRESPONDING COMPUTER PROGRAM PRODUCT, AND DATA STORAGE DEVICE THEREFOR - The invention concerns a method of managing computer memory, the method comprising the steps of maintaining ( | 05-16-2013 |
20130132703 | APPARATUSES AND METHODS FOR STORING VALIDITY MASKS AND OPERATING APPARATUSES - Apparatuses and methods for storing a validity mask and operating apparatuses are described. A number of methods for operating an apparatus include storing a validity mask that is associated with a number of pages of memory cells in a group of pages and that provides validity information for the number of pages of memory cells in the group of pages. | 05-23-2013 |
20130132704 | MEMORY CONTROLLER AND METHOD FOR TUNED ADDRESS MAPPING - A memory system maps physical addresses to device addresses in a way that reduces power consumption. The system includes circuitry for deriving efficiency measures for memory usage and selects from among various address-mapping schemes to improve efficiency. The address-mapping schemes can be tailored for a given memory configuration or a specific mixture of active applications or application threads. Schemes tailored for a given mixture of applications or application threads can be applied each time the given mixture is executing, and can be updated for further optimization. Some embodiments mimic the presence of an interfering thread to spread memory addresses across available banks, and thereby reduce the likelihood of interference by later-introduced threads. | 05-23-2013 |
20130159662 | Working Set Swapping Using a Sequentially Ordered Swap File - Techniques described enable efficient swapping of memory pages to and from a working set of pages for a process through the use of large writes and reads of pages to and from sequentially ordered locations in secondary storage. When writing pages from a working set of a process into secondary storage, the pages may be written into reserved, contiguous locations in a dedicated swap file according to a virtual address order or other order. Such writing into sequentially ordered locations enables reading in of clusters of pages in large, sequential blocks of memory, providing for more efficient read operations to return pages to physical memory. | 06-20-2013 |
20130159663 | MEMORY MANAGEMENT UNIT FOR A MICROPROCESSOR SYSTEM, MICROPROCESSOR SYSTEM AND METHOD FOR MANAGING MEMORY - The invention pertains to a memory management unit for a microprocessor system, the memory management unit being connected or connectable to at least one processor core of the microprocessor system and being connected or connectable to a physical memory of the microprocessor system. The memory management unit is adapted to selectively operate in a hypervisor mode or in a supervisor mode, the hypervisor mode and the supervisor mode having different privilege levels of access to hardware. The memory management unit comprises a first register table indicating physical address information for mapping at least one logical physical address and at least one actual physical address onto each other; a second register table indicating an allowed address range of physical addresses accessible to a process running in or under supervisor mode; wherein the memory management unit is adapted to prevent write access to the second register table by a process not in hypervisor mode. The memory management unit is further adapted to allow write access to the first register table of a process running in or under supervisor mode to reconfigure the physical address information indicated in the first register table with memory mapping information relating to at least one physical address, if the at least one physical address is in the allowed address range, and to prevent write access to the first register table of the process running in or under supervisor mode if the at least one physical address is not in the allowed address range. The invention also pertains to a microprocessor system and a method for managing memory. | 06-20-2013 |
20130191610 | DATA STAGING AREA - An illustrative embodiment of a computer-implemented process for managing a staging area creates the staging area for identified candidate cold objects, moves the identified candidate objects into the staging area, tracks application access to memory comprising the staging area and determines whether frequency of use information for a specific object exceeds a predetermined threshold. Responsive to a determination that the frequency of use information for the specific object exceeds a predetermined threshold, the process moves the specific object into a regular area and determines whether a current time exceeds a predetermined threshold. Responsive to a determination that the current time exceeds a predetermined threshold, the computer-implemented process moves remaining objects from the staging area to a cold area. | 07-25-2013 |
20130191611 | SUBSTITUTE VIRTUALIZED-MEMORY PAGE TABLES - Embodiments of techniques and systems for using substitute virtualized-memory page tables are described. In embodiments, a virtual machine monitor (VMM) may determine that a virtualized memory access to be performed by an instruction executing on a guest software virtual machine is not allowed in accordance with a current virtualized-memory page table (VMPT). The VMM may select a substitute VMPT that permits the virtualized memory access. In scenarios where a data access length for the instruction is known, the substitute VMPT may include full execute, read, and write permissions for the entire guest software address space. In scenarios where a data access length for the instruction is not known, the substitute VMPT may include less than full execute, read, and write permissions for the entire guest software address space, and may be modified to allow the requested virtualized memory access. Other embodiments may be described and claimed. | 07-25-2013 |
20130198486 | SEMICONDUCTOR DEVICE - A semiconductor device according to the present invention includes a first address generation unit that includes a first register group and generates a table address by a cyclically repeating first pattern using a value stored to the first register group, a second address generation unit that includes a second register group and generates an access address by a cyclically repeating second pattern using a value stored to the second register group and parameter information determined by the table address, and a control unit that outputs setting information to be supplied to the first register group and the second register group. Further, the semiconductor device performs at least one of a read process and a write process of data from and to a data memory using the access address. | 08-01-2013 |
20130212351 | TRANSLATION MAP SIMPLIFICATION - A method for translation map simplification may include determining a translation map based on a predetermined criterion in response to receiving input data. The method may also include determining if the translation map extends another map or a referenced map and determining if the translation map includes at least one map fragment. The referenced map is loaded in response to a determination that the translation map includes an extension of the referenced map. The map fragment is loaded in response to a determination that the translation map comprises the map fragment. A new map is compiled based on at least the translation map, the referenced map and the at least one map fragment, in response to the translation map not including a new map reference or a modification to the translation map. The input data is processed based on the new map to produce translated data specific to the new map. | 08-15-2013 |
20130227246 | MANAGEMENT INFORMATION GENERATING METHOD, LOGICAL BLOCK CONSTRUCTING METHOD, AND SEMICONDUCTOR MEMORY DEVICE - A management information generating method wherein logical and physical block addresses (BAs) of continuous addresses are associated with each other in the BA translation table. When a logical block is constructed, an allowable value is set for the number of defective physical blocks. A logical block having fewer defects than the set number is set usable, and a logical block having more defects than the set number is set unusable. System logical block construction is performed to preferentially select physical blocks from a plane list including a large number of usable blocks to equalize the number of usable blocks in each plane list. It is determined whether the number of free blocks is insufficient on the basis of a first management unit and whether the storage area for the indicated capacity can be reserved on the basis of the management unit different from the first unit. | 08-29-2013 |
20130227247 | MEMORY ADDRESS TRANSLATION - The present disclosure includes devices, systems, and methods for memory address translation. One or more embodiments include a memory array and a controller coupled to the array. The array includes a first table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a data segment stored in the array and a logical address. The controller includes a second table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a record in the first table and a logical address. The controller also includes a third table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a record in the second table and a logical address. | 08-29-2013 |
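A minimal sketch of the chained lookup suggested by the 20130227247 abstract; the record layout, table sizes, and lookup order below are assumptions. A record of the controller's third table locates a record of its second table, which locates a record of the first table kept in the array, which finally yields the physical address of the data segment.

    # Hypothetical contents: each entry pairs a logical address with a physical
    # pointer to the next table (or, in the first table, to the data segment).
    first_table  = {7: {"logical": 0x44, "physical": 0xBEEF00}}   # kept in the memory array
    second_table = {3: {"logical": 0x44, "physical": 7}}          # points into first_table
    third_table  = [{"logical": 0x44, "physical": 3}]             # points into second_table

    def translate(logical):
        t3 = next(e for e in third_table if e["logical"] == logical)
        t2 = second_table[t3["physical"]]
        t1 = first_table[t2["physical"]]
        return t1["physical"]

    print(hex(translate(0x44)))   # 0xbeef00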
20130246734 | Adaptive Address Mapping with Dynamic Runtime Memory Mapping Selection - A system monitors and dynamically changes memory mapping in a runtime of a computing system. The computing system has various memory resources, and multiple possible mappings that indicate how data is to be stored in and subsequently accessed from the memory resources. The performance of each memory mapping may be different under different runtime or load conditions of the computing device. A memory controller can monitor runtime performance of the current memory mapping and dynamically change memory mappings at runtime based on monitored or observed performance of the memory mappings. The performance monitoring can be modified for any of a number of different granularities possible within the system, from the byte level to memory channel. | 09-19-2013 |
20130262814 | Mapping Memory Instructions into a Shared Memory Address Space - Embodiments of the present invention provide a method of a first processor using a memory resource associated with a second processor. The method includes receiving a memory instruction from a first processor process, wherein the memory instruction refers to a shared memory address (SMA) that maps to a second processor memory. The method also includes mapping the SMA to the second processor memory, wherein the mapping produces a mapping result and providing the mapping result to the first processor. | 10-03-2013 |
20130283004 | Virtualization with Multiple Shadow Page Tables - A computing system includes virtualization software including a guest operating system (OS). A method maintains, by the virtualization software layer, a first shadow page table for use in a kernel mode and a second shadow page table for use in a user mode. The virtualization software switches between using the first shadow page table and the second shadow page table when the guest OS switches between operating in the kernel mode and the user mode. | 10-24-2013 |
20130290669 | PHYSICAL MEMORY USAGE PREDICTION - In general, in one aspect, the invention relates to a system that includes memory and a prediction subsystem. The memory includes a first memgroup and a second memgroup, wherein the first memgroup comprises a first physical page and a second physical page, wherein the first physical page is a first subtype, and wherein the second physical page is a second subtype. The prediction subsystem is configured to obtain a status value indicating an amount of freed physical pages on the memory, store the status value in a sample buffer comprising a plurality of previous status values, determine, using the status value and the plurality of previous status values, a deficiency subtype state for the first subtype based on an anticipated need for the first subtype on the memory, and instruct, based on the determination, an allocation subsystem to coalesce the second physical page to the first subtype. | 10-31-2013 |
20130290670 | MEMORY RANGE PREFERRED SIZES AND OUT-OF-BOUNDS COUNTS - A system that includes a memory, a tilelet data structure entry, a first tile freelist, and an allocation subsystem. The memory includes a first tilelet on a first tile. The tilelet data structure entry includes a first tilelet preferred pagesize assigned to a first value. The first tile freelist for the first tile includes a first tile in-bounds page freelist, and a first tile out-of-bounds page freelist. The allocation subsystem is configured to detect that a first physical page is freed, store, in the first tile in-bounds page freelist, a first page data structure, detect that a second physical page is freed, store, in the first tile out-of-bounds page freelist, a second page data structure, and coalesce the memory using the second page and at least one of the physical pages associated with the plurality of out-of-bounds page data structures into a third physical page. | 10-31-2013 |
20130290671 | Emulating Execution of a Perform Frame Management Instruction - What is disclosed is a frame management function defined for a machine architecture of a computer system. In one embodiment, a frame management instruction is obtained which identifies a first and second general register. The first general register contains a frame management field having a key field with access-protection bits and a block-size indication. If the block-size indication indicates a large block then an operand address of a large block of data is obtained from the second general register. The large block of data has a plurality of small blocks each of which is associated with a corresponding storage key having a plurality of storage key access-protection bits. If the block size indication indicates a large block, the storage key access-protection bits of each corresponding storage key of each small block within the large block is set with the access-protection bits of the key field. | 10-31-2013 |
20130311746 | SHARED MEMORY ACCESS USING INDEPENDENT MEMORY MAPS - A method includes defining a first mapping, which translates between logical addresses and physical storage locations in a memory with a first mapping unit size, for accessing the memory by a first processing unit. A second mapping is defined, which translates between the logical addresses and the physical storage locations with a second mapping unit size that is different from the first mapping unit size, for accessing the memory by a second processing unit. Data is exchanged between the first and second processing units via the memory, while accessing the memory by the first processing unit using the first mapping and by the second processing unit using the second mapping. | 11-21-2013 |
20130311747 | Memory Mapping and Translation for Arbitrary Number of Memory Units - A method for address translation in a memory comprising a plurality of memory streaming units (MSUs), wherein n represents the number of MSUs and n is not a power of two, and wherein the memory further comprises a striped region, the method comprising determining an MSU from among the plurality of MSUs having a physical address (PA) in the striped region corresponding to a logical address (LA) comprising performing a modulo n operation on less than all the bits representing the LA; and transmitting the LA to the MSU. | 11-21-2013 |
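The 20130311747 abstract selects a memory streaming unit by taking modulo n over less than all of the bits of the logical address. A hedged sketch of one way that could look, with the intra-line offset width and n = 3 assumed for illustration:

    def select_msu(la, n_msus=3, line_bits=6):
        """Pick an MSU for a logical address in the striped region by applying
        modulo n only to the bits above the line offset (field widths assumed)."""
        line_index = la >> line_bits       # drop the low offset bits before the modulo
        return line_index % n_msus

    if __name__ == "__main__":
        for la in range(0, 512, 64):
            print(hex(la), "-> MSU", select_msu(la))

Restricting the modulo to the upper bits keeps consecutive bytes of a line on one unit while still striping successive lines across all n units.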
20130311748 | System and Method for Storing Data in a Virtualized Memory System With Destructive Reads - A system and method for providing high-speed memory operations is disclosed. The technique uses virtualization of memory space to map a virtual address space to a larger physical address space wherein no memory bank conflicts will occur. The larger physical address space is used to prevent memory bank conflicts from occurring by moving the virtualized memory addresses of data being written to memory to a different location in physical memory that will eliminate a memory bank conflict. To improve memory performance destructive read operations are used when reading data but the data is written back into the physical memory in a later cycle. | 11-21-2013 |
20130311749 | METHOD FOR DISTRIBUTING DATA IN A TIERED STORAGE SYSTEM - A method for assigning data in a plurality of physical storage resources for an information handling system is disclosed. The plurality of physical storage resources includes a first tier and a second tier with a lower performance and cost relative to capacity than the first tier. A tier manager hosted on the information handling system and in electronic communication with the plurality of physical storage resources is configured to: determine a seek distance value, operation rate, operation size value, and elapsed time value for each page; and calculate a relative randomness value for each page using the seek distance value, operation rate, operation size value, and elapsed time value determined for each page. A classification module may assign a physical location for each page such that the relative randomness value for each page in the first tier is greater than the relative randomness value for each page in the second tier. | 11-21-2013 |
20130311750 | TRANSACTION LOG RECOVERY - The present disclosure includes methods for transaction log recovery in memory. One such method includes examining a number of entries saved in a transaction log to determine a write pattern, reading the memory based on the write pattern, updating the transaction log with information associated with data read from the memory based on the write pattern, and updating a logical address (LA) table using the transaction log. | 11-21-2013 |
20130326188 | INTER-CHIP MEMORY INTERFACE STRUCTURE - In an embodiment, a stacked package-on-package system has a memory die and a logic die. The memory die comprises a first memory and a second memory, each operated independently of the other, and each having an inter-chip interface electrically connected to the logic die. The logic die has two independent clock sources, one to provide a first clock signal to the first memory, and the other clock source to provide a second clock signal to the second memory. | 12-05-2013 |
20130339651 | MANAGING PAGE TABLE ENTRIES - Embodiments relate to managing page table entries in a processing system. A first page table entry (PTE) of a page table for translating virtual addresses to main storage addresses is identified. The page table includes a second page table entry contiguous with the first page table entry. It is determined whether the first PTE may be joined with the second PTE, based on the respective pages of main storage being contiguous. A marker is set in the page table for indicating that the main storage pages identified by the first and second PTEs are contiguous. | 12-19-2013 |
20130339652 | Radix Table Translation of Memory - Embodiments relate to managing memory page tables in a processing system. A request to access a desired block of memory is received. The request includes an effective address that includes an effective segment identifier (ESID) and a linear address, the linear address including a most significant portion and a byte index. An entry in a buffer that includes the ESID of the effective address is located. Based on the entry including a radix page table pointer (RPTP), performing: using the RPTP to locate a translation table of a hierarchy of translation tables, using the located translation table to translate the most significant portion of the linear address to obtain an address of a block of memory, and based on the obtained address, performing the requested access to the desired block of memory. | 12-19-2013 |
20130339653 | Managing Accessing Page Table Entries - A system for accessing memory locations includes translating, by a processor, a virtual address to locate a first page table entry (PTE) in a page table. The first PTE includes a marker and an address of a page of main storage. It is determined whether a marker is set in the first PTE. The system identifies a large page size of a large page associated with the first PTE based on determining that the marker is set in the first PTE. The large page consists of contiguous pages of main storage. An origin address of the large page is determined based on determining that the marker is set in the first PTE. The virtual address is used to index into the large page at the origin address to access main storage. | 12-19-2013 |
20130339654 | Radix Table Translation of Memory - A method includes receiving a request to access a desired block of memory. The request includes an effective address that includes an effective segment identifier (ESID) and a linear address, the linear address comprising a most significant portion and a byte index. Locating an entry, in a buffer, the entry including the ESID of the effective address. Based on the entry including a radix page table pointer (RPTP), performing, using the RPTP to locate a translation table of a hierarchy of translation tables, using the located translation table to translate the most significant portion of the linear address to obtain an address of a block of memory, and based on the obtained address, performing the requested access to the desired block of memory. | 12-19-2013 |
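The radix-table entries above (20130339652 and 20130339654) both translate by splitting the effective address into an ESID, a most significant portion, and a byte index. The sketch below walks an assumed two-level radix hierarchy to show that flow; the buffer contents, level widths, and table names are invented for illustration.

    PAGE_SHIFT, LEVEL_BITS = 12, 9

    segment_buffer = {0x5: "rptp0"}   # ESID -> radix page table pointer (assumed)
    radix_tables = {"rptp0": {0: "pd0"},          # upper radix level
                    "pd0":   {3: 0x0040000}}      # lower radix level -> block address

    def translate(esid, linear):
        byte_index = linear & ((1 << PAGE_SHIFT) - 1)
        msp = linear >> PAGE_SHIFT                       # most significant portion
        mask = (1 << LEVEL_BITS) - 1
        upper, lower = (msp >> LEVEL_BITS) & mask, msp & mask
        table = radix_tables[segment_buffer[esid]]       # RPTP found via the ESID
        block = radix_tables[table[upper]][lower]
        return block + byte_index

    print(hex(translate(0x5, (3 << PAGE_SHIFT) | 0x2A)))   # 0x4002a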
20130346725 | STRUCTURING STORAGE BASED ON LATCH-FREE B-TREES - A request to modify an object in storage that is associated with one or more computing devices may be obtained, the storage organized based on a latch-free B-tree structure. A storage address of the object may be determined, based on accessing a mapping table that includes map indicators mapping logical object identifiers to physical storage addresses. A prepending of a first delta record to a prior object state of the object may be initiated, the first delta record indicating an object modification associated with the obtained request. Installation of a first state change associated with the object modification may be initiated via a first atomic operation on a mapping table entry that indicates the prior object state of the object. For example, the latch-free B-tree structure may include a B-tree like index structure over records as the objects, and logical page identifiers as the logical object identifiers. | 12-26-2013 |
20140006746 | VIRTUAL MEMORY ADDRESS RANGE REGISTER | 01-02-2014 |
20140013073 | USING LARGE FRAME PAGES WITH VARIABLE GRANULARITY - The page tables in existing art are modified to allow virtual address resolution by mapping to multiple overlapping entries, and resolving a physical address from the most specific entry. This enables more efficient use of system resources by allowing smaller frames to shadow larger frames. A page table is selected. When a virtual address in a request corresponds to an entry in the page table, which identifies a next page table associated with the large frame, a determination is made that the virtual address corresponds to an entry in the next page table, the entry in the next page table referencing a small frame overlay for the large frame. The virtual address is mapped to a physical address in the small frame overlay using data of the entry in the next page table. The physical address in a process-specific view of the large frame is returned. | 01-09-2014 |
20140025918 | Transactional Memory that Performs a Direct 32-bit Lookup Operation - A transactional memory (TM) receives a lookup command across a bus from a processor. The command includes a base address, a starting bit position, and a mask size. In response to the lookup command, the TM pulls an input value (IV). The TM uses the starting bit position and the mask size to select a portion of the IV. A first sub-portion of the portion of the IV and the base address are summed to generate a memory address. The memory address is used to read a word containing multiple result values (RVs) from memory. One RV from the word is selected using a multiplexing circuit and a second sub-portion of the portion of the IV. If the selected RV is a final value, then lookup operation is complete and the TM sends the RV to the processor, otherwise the TM performs another lookup operation based upon the selected RV. | 01-23-2014 |
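The direct-lookup entry above (20140025918) forms a memory address by adding part of a selected slice of the input value to a base address, then multiplexes one result value out of the fetched word. A simplified sketch of that flow, with the word width and field sizes assumed:

    def direct_lookup(memory, base, iv, start_bit, mask_size, results_per_word=8):
        """Select a slice of the input value, split it into a word offset and a
        result selector, and return one result value from the addressed word."""
        portion = (iv >> start_bit) & ((1 << mask_size) - 1)
        word_offset, selector = divmod(portion, results_per_word)
        word = memory[base + word_offset]     # each word holds several result values
        return word[selector]

    if __name__ == "__main__":
        mem = {100: list(range(0, 8)), 101: list(range(8, 16))}
        print(direct_lookup(mem, base=100, iv=0b10110000, start_bit=4, mask_size=4))  # 11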
20140025919 | Recursive Use of Multiple Hardware Lookup Structures in a Transactional Memory - A lookup engine of a transactional memory (TM) has multiple hardware lookup structures, each usable to perform a different type of lookup. In response to a lookup command, the lookup engine reads a first block of first information from a memory unit. The first information configures the lookup engine to perform a first type of lookup, thereby identifying a first result value. If the first result value is not a final result value, then the lookup engine uses address information in the first result value to read a second block of second information. The second information configures the lookup engine to perform a second type of lookup, thereby identifying a second result value. This process repeats until a final result value is obtained. The type of lookup performed is determined by the result value of the preceding lookup and/or type information of the block of information for the next lookup. | 01-23-2014 |
20140025920 | Transactional Memory that Performs a Direct 24-BIT Lookup Operation - A transactional memory (TM) receives a lookup command across a bus from a processor. Only final result values are stored in memory. The command includes a base address, a starting bit position, and mask size. In response to the lookup command, the TM pulls an input value (IV). A selecting circuit within the TM uses the starting bit position and mask size to select a portion of the IV. The portion of the IV and the base address are used to generate a memory address. The memory address is used to read a word containing multiple result values (RVs) from memory. One RV from the word is selected using a multiplexing circuit and a result location value (RLV) generated from the portion of the IV. A word selector circuit and arithmetic circuits are used to generate the memory address and RLV. The TM sends the selected RV to the processor. | 01-23-2014 |
20140025921 | MEMORY CONTROL METHOD UTILIZING MAIN MEMORY FOR ADDRESS MAPPING AND RELATED MEMORY CONTROL CIRCUIT - A memory control method, including: writing a write-in data which has a logical address into a write-in cache buffer; generating a write-in address mapping table which maps the logical address of the data to a physical address of a main memory, and writing the write-in address mapping table into a cached data mapping table write buffer; writing the write-in data into the main memory according to the write-in address mapping table; and when an available storage space of the cached data mapping table write buffer is reduced to reach a predetermined threshold, writing the address mapping table in the cached data mapping table write buffer into the main memory, and storing a corresponding main memory write-in address mapping table into a global mapping table buffer. | 01-23-2014 |
20140025922 | PROVIDING MULTIPLE QUIESCE STATE MACHINES IN A COMPUTING ENVIRONMENT - An aspect includes a method for operating on translation look-aside buffers (TLBs) in a multiprocessor environment including a plurality of logical partitions as zones. The method includes concurrently receiving a first quiesce request from a first processor of a first zone to quiesce processors of a first set of zones including the first zone and receiving a second quiesce request from a second processor of a second zone to quiesce processors of a second set of zones including the second zone. The second set of zones consists of separate zones from the first set of zones. Based on receiving the first quiesce request, only processors of the first set of zones are quiesced. Based on the processors of the first set of zones being quiesced, a first operation is performed on the TLBs. Based on the first operation being performed, the processors of the first set of zones are un-quiesced. | 01-23-2014 |
20140032874 | COMPUTING DEVICE AND VIRTUAL DEVICE CONTROL METHOD FOR CONTROLLING VIRTUAL DEVICE BY COMPUTING SYSTEM - A virtual device control method of a computing device which includes a nonvolatile memory is provided. The virtual device control method includes receiving a virtualization request; assigning a first part of the nonvolatile memory to a virtual memory; assigning a second part of the nonvolatile memory to a virtual storage; and generating a virtual device including the assigned virtual memory and virtual storage. | 01-30-2014 |
20140040593 | MULTIPLE SETS OF ATTRIBUTE FIELDS WITHIN A SINGLE PAGE TABLE ENTRY - A first processing unit and a second processing unit can access a system memory that stores a common page table that is common to the first processing unit and the second processing unit. The common page table can store virtual memory addresses to physical memory addresses mapping for memory chunks accessed by a job of an application. A page entry, within the common page table, can include a first set of attribute bits that defines accessibility of the memory chunk by the first processing unit, a second set of attribute bits that defines accessibility of the same memory chunk by the second processing unit, and physical address bits that define a physical address of the memory chunk. | 02-06-2014 |
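One way to picture the single page table entry with two attribute sets described in 20140040593 is as a packed integer carrying separate CPU and GPU attribute fields beside the physical address bits. The bit positions and field widths below are invented purely for the sketch.

    def make_pte(phys_page, cpu_attrs, gpu_attrs):
        # assumed layout: [ physical page | 4 GPU attribute bits | 4 CPU attribute bits ]
        return (phys_page << 8) | (gpu_attrs << 4) | cpu_attrs

    def unpack_pte(pte):
        return {"cpu": pte & 0xF, "gpu": (pte >> 4) & 0xF, "page": pte >> 8}

    pte = make_pte(phys_page=0x1A2B3, cpu_attrs=0b0111, gpu_attrs=0b0011)
    print(unpack_pte(pte))   # {'cpu': 7, 'gpu': 3, 'page': 107187}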
20140052957 | TRANSLATION TABLE AND METHOD FOR COMPRESSED DATA - A translation table has entries that each include a share bit and a delta bit, with pointers that point to a memory block that includes reuse bits. When two translation table entries reference identical fragments in a memory block, one of the translation table entries is changed to refer to the same memory block referenced in the other translation table entry, which frees up a memory block. The share bit is set to indicate a translation table entry is sharing its memory block with another translation table entry. In addition, a translation table entry may include a private delta in the form of a pointer that references a memory fragment in the memory block that is not shared with other translation table entries. When a translation table has a private delta, its delta bit is set. | 02-20-2014 |
20140052958 | TRANSLATION TABLE AND METHOD FOR COMPRESSED DATA - A translation table has entries that each include a share bit and a delta bit, with pointers that point to a memory block that includes reuse bits. When two translation table entries reference identical fragments in a memory block, one of the translation table entries is changed to refer to the same memory block referenced in the other translation table entry, which frees up a memory block. The share bit is set to indicate a translation table entry is sharing its memory block with another translation table entry. In addition, a translation table entry may include a private delta in the form of a pointer that references a memory fragment in the memory block that is not shared with other translation table entries. When a translation table has a private delta, its delta bit is set. | 02-20-2014 |
20140059320 | SYNCHRONIZING A TRANSLATION LOOKASIDE BUFFER WITH AN EXTENDED PAGING TABLE - A processor including logic to execute an instruction to synchronize a mapping from a physical address of a guest of a virtualization based system (guest physical address) to a physical address of the host of the virtualization based system (host physical address), and stored in a translation lookaside buffer (TLB), with a corresponding mapping stored in an extended paging table (EPT) of the virtualization based system. | 02-27-2014 |
20140068224 | Block-level Access to Parallel Storage - The subject disclosure is directed towards one or more parallel storage components for parallelizing block-level input/output associated with remote file data. Based upon a mapping scheme, the file data is partitioned into a plurality of blocks in which each may be equal in size. A translator component of the parallel storage may determine a mapping between the plurality of blocks and a plurality of storage nodes such that at least a portion of the plurality of blocks is accessible in parallel. Such a mapping, for example, may place each block in a different storage node allowing the plurality of blocks to be retrieved simultaneously and in its entirety. | 03-06-2014 |
20140075148 | MEMORY UTILIZATION OF SPARSE PAGES - A method, system, and computer program product for improving memory utilization of sparse pages are provided in the illustrative embodiments. A set of virtual pages is identified. Each virtual page in the set of virtual pages is a sparse virtual page. The set of virtual pages includes a first sparse virtual page and a second sparse virtual page. At least a portion of data of the first sparse virtual page in the set of virtual pages is stored in a first physical page. The first physical page belongs to a set of consolidation physical pages, and the first physical page also stores at least a portion of the data of the second sparse virtual page. The first and the second sparse pages are mapped to the first physical page. | 03-13-2014 |
20140075149 | Storage Mechanism with Variable Block Size - A file system may access a logical unit by addressing storage space using a constant block size, but the underlying logical unit may physically store information using different block sizes for different types of files. Certain file types may be stored using large block sizes for performance, while other file types may be stored using smaller block sizes for storage efficiency. A storage management system may create the logical unit from different block extents on various storage devices, where each block extent may be created with different block sizes. The system may place a file in a block extent that may be appropriate for the file type, and may perform a translation between the file system's request for a specific block and the manner in which the block is stored on the media. | 03-13-2014 |
20140075150 | METHOD FOR GENERATING A DELTA FOR COMPRESSED DATA - A translation table has entries that each include a share bit and a delta bit, with pointers that point to a memory block that includes reuse bits. The share bit is set to indicate a translation table entry is sharing its memory block with another translation table entry. In addition, a translation table entry may include a private delta in the form of a pointer that references a memory fragment in the memory block that is not shared with other translation table entries, wherein the private delta references previously-stored content. When a translation table has a private delta, its delta bit is set. The private delta is generated by analyzing a data buffer for content that is similar to previously-stored content. | 03-13-2014 |
20140089630 | VIRTUAL ADDRESSING - A method of relating the user logical block address (LBA) of a page of user data to the physical block address (PBA) where the data is stored in a RAIDed architecture reduces the size of the tables by constraining the location to which data of a plurality of LBAs may be written. Chunks of data from a plurality of LBAs may be stored in a common page of memory and the common memory page is described by a virtual block address (VBA) referencing the PBA, and each of the LBAs uses the same VBA to read the data. | 03-27-2014 |
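The table-size reduction in 20140089630 comes from the extra level of indirection: every LBA whose chunk shares a common flash page carries the same virtual block address, and only the VBA table records the physical block address. A toy illustration, with the addresses and table shapes assumed:

    lba_to_vba = {10: 0, 11: 0, 12: 0, 13: 1}   # several LBAs share one VBA
    vba_to_pba = {0: 0x8000, 1: 0x8040}         # one PBA entry per shared page

    def read_pba(lba):
        return vba_to_pba[lba_to_vba[lba]]

    print([hex(read_pba(l)) for l in (10, 11, 13)])   # ['0x8000', '0x8000', '0x8040']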
20140101403 | Application-Managed Translation Cache - Mechanisms are provided, in a data processing system, for accessing a memory location in a physical memory of the data processing system. With these mechanisms, a request is received from an application to access a memory location specified by an effective address in an application address space. A translation is performed, at a user level of execution, of the effective address to a real address table index (RATI) value corresponding to the effective address. At a hardware level of execution, a lookup operation is performed that looks-up the RATI value in a real address table data structure maintained by trusted system level hardware of the data processing system, to identify a real address for accessing physical memory. A memory location in physical memory is thereafter accessed based on the identified real address. | 04-10-2014 |
20140101404 | SELECTABLE ADDRESS TRANSLATION MECHANISMS - An address translation capability is provided in which translation structures of different types are used to translate memory addresses from one format to another format. Multiple translation structure formats (e.g., multiple page table formats, such as hash page tables and hierarchical page tables) are concurrently supported in a system configuration, and the use of a particular translation structure format in translating an address is selectable. | 04-10-2014 |
20140108767 | METHOD AND SYSTEM FOR EXTENDING VIRTUAL ADDRESS SPACE OF PROCESS PERFORMED IN OPERATING SYSTEM - A method of extending a virtual address space of a process executed in an operating system includes selecting a virtual address range included in a virtual address space corresponding to the process and the number of a plurality of extended virtual address ranges, extending and thereby setting the virtual address space to a multi-virtual address space based on the selected virtual address range and the selected number of the plurality of extended virtual address ranges, and providing the multi-virtual address space to the process. | 04-17-2014 |
20140115296 | Remapping Memory Cells Based on Future Endurance Measurements - A method of operating a memory device that includes groups of memory cells is presented. The groups include a first group of memory cells. Each one of the groups has a respective physical address and is initially associated with a respective logical address. The device also includes an additional group of memory cells that has a physical address but is not initially associated with a logical address. In the method, a difference in the future endurance between the first group of memory cells and the additional group of memory cells is identified. When the difference in the future endurance between the first group and the additional group exceeds a predetermined threshold difference, the association between the first group and the logical address initially associated with the first group is ended and the additional group is associated with the logical address that was initially associated with the first group. | 04-24-2014 |
20140122827 | MANAGEMENT OF MEMORY USAGE USING USAGE ANALYTICS - An approach for managing memory usage in cloud and traditional environments using usage analytics is disclosed. The approach may be implemented in a computer infrastructure including a combination of hardware and software. The approach includes determining that space is available within one or more tables which have schema definitions with string fields having a predefined length. The approach further includes creating a virtual table and mapping the available space to the virtual table for population by one or more records. | 05-01-2014 |
20140122828 | Sharing address translation between CPU and peripheral devices - A method for memory access includes maintaining in a host memory, under control of a host operating system running on a central processing unit (CPU), respective address translation tables for multiple processes executed by the CPU. Upon receiving, in a peripheral device, a work item that is associated with a given process, having a respective address translation table in the host memory, and specifies a virtual memory address, the peripheral device translates the virtual memory address into a physical memory address by accessing the respective address translation table of the given process in the host memory. The work item is executed in the peripheral device by accessing data at the physical memory address in the host memory. | 05-01-2014 |
20140129795 | CONFIGURABLE I/O ADDRESS TRANSLATION DATA STRUCTURE - In response to a determination to allocate additional storage, within a real address space employed by a system memory of a data processing system, for translation control entries (TCEs) that translate addresses from an input/output (I/O) address space to the real address space, a determination is made whether or not a first real address range contiguous with an existing TCE data structure is available for allocation. In response to determining that the first real address range is available for allocation, the first real address range is allocated for storage of TCEs, and a number of levels in the TCE data structure is retained. In response to determining that the first real address range is not available for allocation, a second real address range discontiguous with the existing TCE data structure is allocated for storage of the TCEs, and a number of levels in the TCE data structure is increased. | 05-08-2014 |
20140129796 | TRANSLATION OF INPUT/OUTPUT ADDRESSES TO MEMORY ADDRESSES - An address provided in a request issued by an adapter is converted to an address directly usable in accessing system memory. The address includes a plurality of bits, in which the plurality of bits includes a first portion of bits and a second portion of bits. The second portion of bits is used to index into one or more levels of address translation tables to perform the conversion, while the first portion of bits is ignored for the conversion. The first portion of bits is used to validate the address. | 05-08-2014 |
20140129797 | CONFIGURABLE I/O ADDRESS TRANSLATION DATA STRUCTURE - In response to a determination to allocate additional storage, within a real address space employed by a system memory of a data processing system, for translation control entries (TCEs) that translate addresses from an input/output (I/O) address space to the real address space, a determination is made whether or not a first real address range contiguous with an existing TCE data structure is available for allocation. In response to determining that the first real address range is available for allocation, the first real address range is allocated for storage of TCEs, and a number of levels in the TCE data structure is retained. In response to determining that the first real address range is not available for allocation, a second real address range discontiguous with the existing TCE data structure is allocated for storage of the TCEs, and a number of levels in the TCE data structure is increased. | 05-08-2014 |
20140136810 | LOGICAL SECTOR MAPPING IN A FLASH STORAGE ARRAY - A system and method for efficiently performing user storage virtualization for data stored in a storage system including a plurality of solid-state storage devices. A data storage subsystem supports multiple mapping tables. Records within a mapping table are arranged in multiple levels. Each level stores pairs of a key value and a pointer value. The levels are sorted by time. New records are inserted in a created newest (youngest) level. No edits are performed in-place. All levels other than the youngest may be read only. The system may further include an overlay table which identifies those keys within the mapping table that are invalid. | 05-15-2014 |
20140149712 | Rule-Based Virtual Address Translation For Accessing Data - In one embodiment, rule-based virtual address translation is performed for accessing data (e.g., reading and/or writing data) typically stored in different manners and/or locations among one or more memories, such as, but not limited to, in packet switching devices. A virtual address is matched against a set of predetermined rules to identify one or more storing description parameters. These storing description parameters determine in which particular memory unit(s) and/or how the data is stored. Thus, different portions of a data structure (e.g., table) can be stored in different memories and/or using different storage techniques. The virtual address is converted to a lookup address based on the identified storing description parameter(s). One or more read or write operations in one or more particular memory units are performed based on the lookup address converted from the virtual address. | 05-29-2014 |
20140164731 | TRANSLATION MANAGEMENT INSTRUCTIONS FOR UPDATING ADDRESS TRANSLATION DATA STRUCTURES IN REMOTE PROCESSING NODES - Translation management instructions are used in a multi-node data processing system to facilitate remote management of address translation data structures distributed throughout such a system. Thus, in multi-node data processing systems where multiple processing nodes collectively handle a workload, the address translation data structures for such nodes may be collectively managed to minimize translation misses and the performance penalties typically associated therewith. | 06-12-2014 |
20140173243 | EFFICIENT MANAGEMENT OF COMPUTER MEMORY USING MEMORY PAGE ASSOCIATIONS AND MEMORY COMPRESSION - A method for managing memory operations includes reading a first memory page from a storage device responsive to a request for the first memory page. The first memory page is stored to a system memory. Based on a pre-established set of association rules, one or more associated memory pages are identified that are related to the first memory page. The associated memory pages are read from the storage device and compressed to generate corresponding compressed associated memory pages. The compressed associated memory pages are also stored to the system memory to enable faster access to the associated memory pages during processing of the first memory page. The compressed associated memory pages are individually decompressed in response to the particular page being required for use during processing. | 06-19-2014 |
20140181457 | Write Endurance Management Techniques in the Logic Layer of a Stacked Memory - A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses and internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the memory layer. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller coupled to the I/O port, memory map, and memory layers, configured to store data received from external devices to internal memory locations. | 06-26-2014 |
20140181458 | DIE-STACKED MEMORY DEVICE PROVIDING DATA TRANSLATION - A die-stacked memory device incorporates a data translation controller at one or more logic dies of the device to provide data translation services for data to be stored at, or retrieved from, the die-stacked memory device. The data translation operations implemented by the data translation controller can include compression/decompression operations, encryption/decryption operations, format translations, wear-leveling translations, data ordering operations, and the like. Due to the tight integration of the logic dies and the memory dies, the data translation controller can perform data translation operations with higher bandwidth and lower latency and power consumption compared to operations performed by devices external to the die-stacked memory device. | 06-26-2014 |
20140189284 | SUB-BLOCK BASED WEAR LEVELING - Embodiments of the invention describe an apparatus, system and method for sub-block based wear leveling for memory devices. Embodiments of the invention may receive a write request to a physical memory address including a physical block address and a physical sub-block address. An address remapping table is accessed to translate the physical block address to a memory device block address to locate a plurality of memory device sub-blocks. A plurality of sub-block activity counters are accessed, each sub-block activity counter associated with one of the memory device sub-blocks. One of the plurality of memory device sub-blocks is selected to store write data of the write request based, at least in part, on values of the plurality of sub-block activity counters, and the value of the sub-block activity counter associated with the selected memory device sub-block is updated. | 07-03-2014 |
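The sub-block wear leveling in 20140189284 steers each write by consulting per-sub-block activity counters. A minimal stand-in for that selection step, with the tie-breaking policy and counter update assumed:

    def pick_sub_block(counters):
        """Choose the least-active sub-block for the next write and bump its counter."""
        idx = min(range(len(counters)), key=counters.__getitem__)
        counters[idx] += 1
        return idx

    if __name__ == "__main__":
        activity = [7, 2, 2, 9]
        print([pick_sub_block(activity) for _ in range(4)], activity)   # [1, 2, 1, 2] [7, 4, 4, 9]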
20140201493 | OPTIMIZING LARGE PAGE PROCESSING - Embodiments of the disclosure include a method for optimizing large page processing. The method includes receiving an indication that a real memory includes a first page. The first page includes a plurality of smaller pages. The method also includes determining a page frame table entry associated with a first smaller page of the first page and storing data associated with the first page in the page frame table entry associated with the first smaller page. The page frame table entry associated with the first smaller page of the first page is a data repository for the plurality of smaller pages of the first page. | 07-17-2014 |
20140208060 | SYSTEMS AND METHODS FOR ACCESSING MEMORY - Methods of mapping memory cells to applications, methods of accessing memory cells, systems, and memory controllers are described. In some embodiments, a memory system including multiple physical channels is mapped into regions, such that any region spans each physical channel of the memory system. Applications are allocated memory in the regions, and performance and power requirements of the applications are associated with the regions. Additional methods and systems are also described. | 07-24-2014 |
20140208061 | LOCATING DATA IN NON-VOLATILE MEMORY - Systems and methods presented herein provide for locating data in non-volatile memory by decoupling a mapping unit size from restrictions such as the maximum size of a reducible unit to provide efficient mapping of larger mapping units. In one embodiment, a method comprises mapping a logical page address in a logical block address space to a read unit address and a number of read units in the non-volatile memory. The method also comprises mapping data of the logical page address to a plurality of variable-sized pieces of data spread across the number of read units starting at the read unit address in the non-volatile memory. | 07-24-2014 |
20140208062 | STORAGE ADDRESS SPACE TO NVM ADDRESS, SPAN, AND LENGTH MAPPING/CONVERTING - Storage address space to NVM address, span, and length mapping/converting is performed by a controller for a solid-state storage system that includes a mapping function to convert a logical block address from a host to an address of a smallest read unit of the NVM. The mapping function provides span and length information corresponding to the logical block address. The span information specifies a number of contiguous smallest read units to read to provide data (corresponding to the logical block address) to the host. The length information specifies how much of the contiguous smallest read units relate to the data provided to the host. The converted address and the length information are usable to improve recycling of no longer needed (e.g. released) portions of the NVM, and usable to facilitate recovery from outages and/or unintended interruptions of service. | 07-24-2014 |
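The mapping function in 20140208063 returns three things for a host logical block address: the address of the first smallest read unit, the span of contiguous read units to fetch, and the length of data that actually belongs to the host block. A hedged sketch, assuming a 512-byte read unit and a simple in-memory map table:

    READ_UNIT = 512   # assumed smallest read unit, in bytes

    def lba_to_nvm(map_table, lba):
        """Return (read-unit address, span, length) for a host LBA."""
        addr, length = map_table[lba]          # first read unit and byte length
        span = -(-length // READ_UNIT)         # ceiling: contiguous read units to read
        return addr, span, length

    table = {0: (1000, 1700), 1: (1004, 4096)}
    print(lba_to_nvm(table, 0))                # (1000, 4, 1700)
    print(lba_to_nvm(table, 1))                # (1004, 8, 4096)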
20140208063 | POLYMORPH TABLE WITH SHARED COLUMNS - For managing a database in a data-processing system, a polymorph table and a mapping structure are provided. The polymorph table includes a discrimination column and a total number of columns of each type equal to a maximum of the virtual columns of the type. The mapping structure stores information mapping each virtual column to a polymorph column of the same type. A virtual access request is received based on one of the virtual columns of one of the virtual tables. Selected mapping information is retrieved that maps each selected virtual column to one of the polymorph columns. The virtual access request is converted into a polymorph access request according to an identifier of the selected virtual table and the selected mapping information. The polymorph table is accessed according to the polymorph access request. | 07-24-2014 |
20140223136 | Lookup Tables Utilizing Read Only Memory and Combinational Logic - The disclosure is directed to a system and method for accessing one or more values of a lookup table. In some embodiments, one or more read only memory devices are configured for storing a first plurality of values of the lookup table, and one or more combinational logic circuits are configured for accessing a second plurality of values of the lookup table. At least one of hardware area and timing pressures are mitigated through various storage and access schemes. | 08-07-2014 |
20140244965 | METHOD AND SYSTEM FOR SIMPLIFIED ADDRESS TRANSLATION SUPPORT FOR STATIC INFINIBAND HOST CHANNEL ADAPTOR STRUCTURES - A method for optimized address pre-translation for a host channel adapter (HCA) static memory structure is disclosed. The method involves determining whether the HCA static memory structure spans a contiguous block of physical address space, when the HCA static memory structure spans the contiguous block of physical address space, requesting a translation from a guest physical address (GPA) to a machine physical address (MPA) of the HCA static memory structure, storing a received MPA corresponding to the HCA static memory structure in an address control and status register (CSR) associated with the HCA static memory structure, marking the received MPA stored in the address CSR as a pre-translated address, and using the pre-translated MPA stored in the address CSR when a request to access the static memory structure is received. | 08-28-2014 |
20140244966 | PACKET PROCESSING MATCH AND ACTION UNIT WITH STATEFUL ACTIONS - A packet processing block. The block comprises an input for receiving data in a packet header vector, where the vector comprises data values representing information for a packet. The block also comprises circuitry for performing packet match operations in response to at least a portion of the packet header vector and data stored in a match table and circuitry for performing one or more actions in response to a match detected by the circuitry for performing packet match operations. The one or more actions comprise modifying the data values representing information for a packet. The block also comprises at least one stateful memory comprising stateful memory data values. The one or more actions include stateful actions for reading the stateful memory, modifying data values representing information for a packet as a function of the stateful memory data values, and storing modified stateful memory data values back into the stateful memory. | 08-28-2014 |
20140258675 | MEMORY CONTROLLER AND MEMORY SYSTEM - A memory controller according to the embodiment includes a front-end unit that issues an invalidation command in response to a command from outside of the memory controller, the command including a logical address, an address translation unit that stores a correspondence relationship between the logical address and a physical address, an invalidation command processing unit that, when the invalidation command is received, registers the logical address associated with the invalidation command as an invalidation registration region in an invalidation registration unit and issues a notification to the front-end unit, and an internal processing unit that dissolves a correspondence relationship between the logical address registered in the invalidation registration unit and the physical address in the address translation unit in a predetermined order by referencing the logical address registered in the invalidation registration unit. The front-end unit transmits a completion command indicating completion of the command in response to the notification. | 09-11-2014 |
20140281353 | HARDWARE-BASED PRE-PAGE WALK VIRTUAL ADDRESS TRANSFORMATION - An apparatus includes a processor and a virtual address transformation unit coupled with the processor. The virtual address transformation unit includes a register. The virtual address transformation unit is configured to receive an indication of a virtual address and read, from the register, a current page size of a plurality of available page sizes. The virtual address transformation unit is also configured to determine a shift amount based, at least in part, on the current page size and perform a bit shift of the virtual address, wherein the virtual address is bit shifted by, at least, the determined shift amount. | 09-18-2014 |
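The pre-page-walk transformation in 20140281353 reads the current page size from a register and bit-shifts the virtual address by a shift amount derived from that size. A small sketch of the derivation, with the set of supported page sizes assumed:

    PAGE_SHIFTS = {"4K": 12, "64K": 16, "2M": 21}   # assumed available page sizes

    def pre_walk_transform(vaddr, current_page_size):
        """Split a virtual address into (page number, offset) using the shift
        amount implied by the currently configured page size."""
        shift = PAGE_SHIFTS[current_page_size]
        return vaddr >> shift, vaddr & ((1 << shift) - 1)

    print(pre_walk_transform(0x12345678, "64K"))   # (4660, 22136)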
20140281354 | CONTINUOUS RUN-TIME INTEGRITY CHECKING FOR VIRTUAL MEMORY - A run-time integrity checking (RTIC) method is provided that is compatible with memory having at least portions that store data that changes over time, or at least portions configured as virtual memory. For example, the method may comprise storing a table of page entries and accessing the table of page entries by an operating system or a hypervisor to perform RTIC on memory in which an operating system, a hypervisor, or application software is stored. The table may, for example, be stored in secure memory or in external memory. The page entry comprises a hash value for the page and a hash valid indicator indicating the validity status of the hash value. The page entry may further comprise a residency indicator indicating a residency status of the memory page. | 09-18-2014 |
20140281355 | VIRTUAL STORAGE POOL - Virtual storage pool creation is simplified by allowing a user to specify, by physical location, which devices to include in a virtual storage pool. The virtual storage pool may be automatically generated based on the simplified user specifications. The user may specify the virtual pool configuration in a configuration file. A configuration application generates the virtual storage pool based on the configuration file. The configuration application utilizes the physical locations of block devices contained in the configuration file to generate the pool. As a result, virtual pool configuration and creation are automated, more efficient, and less error prone than previous methods that involve manually linking physical device locations to computer-generated names. | 09-18-2014 |
20140281356 | MICROCONTROLLER FOR MEMORY MANAGEMENT UNIT - One embodiment of the present invention includes a microcontroller coupled to a memory management unit (MMU). The MMU is coupled to a page table included in a physical memory, and the microcontroller is configured to perform one or more virtual memory operations associated with the physical memory and the page table. In operation, the microcontroller receives a page fault generated by the MMU in response to an invalid memory access via a virtual memory address. To remedy such a page fault, the microcontroller performs actions to map the virtual memory address to an appropriate location in the physical memory. By contrast, in prior-art systems, a fault handler would typically remedy the page fault. Advantageously, because the microcontroller executes these tasks locally with respect to the MMU and the physical memory, latency associated with remedying page faults may be decreased. Consequently, overall system performance may be increased. | 09-18-2014 |
20140281357 | COMMON POINTERS IN UNIFIED VIRTUAL MEMORY SYSTEM - A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping. | 09-18-2014 |
20140281358 | MIGRATION SCHEME FOR UNIFIED VIRTUAL MEMORY SYSTEM - A system for managing virtual memory. The system includes a first processing unit configured to execute a first operation that references a first virtual memory address. The system also includes a first memory management unit (MMU) associated with the first processing unit and configured to generate a first page fault upon determining that a first page table that is stored in a first memory unit associated with the first processing unit does not include a mapping corresponding to the first virtual memory address. The system further includes a first copy engine associated with the first processing unit. The first copy engine is configured to read a first command queue to determine a first mapping that corresponds to the first virtual memory address and is included in a first page state directory. The first copy engine is also configured to update the first page table to include the first mapping. | 09-18-2014 |
20140281359 | APPARATUS AND METHOD FOR REFERENCING DENSE AND SPARSE INFORMATION IN MULTI-DIMENSIONAL TO LINEAR ADDRESS SPACE TRANSLATION - A translation system can translate a storage request having multiple fields to a physical address using the fields as keys to traverse a map. The map can be made of nodes that include one or more node entries. The node entries can be stored in a hashed storage area or sorted storage area of a node. A hashed storage area can enable a quick lookup of densely addressed information by using a portion of the key to determine a location of a node entry. A sorted storage area can enable compact storage of sparse information by storing node entries that currently exist and allowing the entries to be searched. By offering both types of storage in a node, a node can be optimized for both dense and sparse information. A node entry can include a link to a next node or the physical address for the storage request. | 09-18-2014 |
20140281360 | APPARATUS AND METHOD FOR INSERTION AND DELETION IN MULTI-DIMENSIONAL TO LINEAR ADDRESS SPACE TRANSLATION - A translation system can translate a storage request to a physical address using fields as keys to traverse a map of nodes with node entries. A node entry can include a link to a next node or a physical address. Using a portion of the key as noted in node metadata, a node entry can be determined. When adding node entries to a node, the node's utilization can exceed a threshold value. A new node can be created such that node entries are split between the original and new node. Node metadata of the parent node, new node and original node can be revised to identify which parts of the key are used to identify a node entry. When removing node entries from a node, node utilization can fall below a minimum threshold value. Node entries from the node can be merged with a sibling, or the map can be rebalanced. | 09-18-2014 |
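The split step described in 20140281360 above can be illustrated with a toy fixed-capacity node: when utilization reaches a threshold, the upper half of the entries moves into a newly allocated sibling. This is only a sketch under assumed structures (NODE_CAPACITY, split_if_full are invented names); the patent's actual node layout, parent-metadata updates, and rebalancing are not shown.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NODE_CAPACITY 8            /* illustrative per-node capacity */

struct entry { uint64_t key_part; uint64_t link_or_pa; };

struct node {
    struct entry entries[NODE_CAPACITY];
    int count;
};

/* If the node has reached capacity, move the upper half of its entries
 * into a newly allocated sibling.  The caller must then update parent
 * metadata to record which key bits select the original vs. new node. */
static struct node *split_if_full(struct node *n)
{
    if (n->count < NODE_CAPACITY)
        return NULL;                          /* utilization still below threshold */

    struct node *sibling = calloc(1, sizeof(*sibling));
    int half = n->count / 2;

    memcpy(sibling->entries, &n->entries[half],
           (size_t)(n->count - half) * sizeof(struct entry));
    sibling->count = n->count - half;
    n->count = half;
    return sibling;                           /* new node holding the upper half */
}

int main(void)
{
    struct node n = { .count = 0 };
    for (int i = 0; i < NODE_CAPACITY; i++)
        n.entries[n.count++] = (struct entry){ .key_part = (uint64_t)i, .link_or_pa = 0 };

    struct node *sib = split_if_full(&n);
    printf("original holds %d entries, sibling holds %d\n",
           n.count, sib ? sib->count : 0);
    free(sib);
    return 0;
}
```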
20140281361 | NONVOLATILE MEMORY DEVICE AND RELATED DEDUPLICATION METHOD - A nonvolatile memory device comprises an interface configured to receive write data and a logical address of the write data, a data storage device comprising multiple physical blocks and configured to store an address mapping table array, and a controller configured to selectively load at least one address mapping table from the address mapping table array based on the logical address. The controller performs a deduplication operation for the write data by comparing the write data with data stored in a physical block having a physical address in the loaded address mapping table, to the exclusion of data stored in other physical blocks. | 09-18-2014 |
20140281362 | MEMORY ALLOCATION IN A SYSTEM USING MEMORY STRIPING - A system and associated methods are disclosed for allocating memory in a system providing translation of virtual memory addresses to physical memory addresses in a parallel computing system using memory striping. One method comprises: receiving a request for memory allocation; identifying an available virtually-contiguous physically-non-contiguous memory region (VCPNCMR) of at least the requested size, where the VCPNCMR is arranged such that physical memory addresses for the VCPNCMR may be derived from corresponding virtual memory addresses by shifting a contiguous set of bits of the virtual memory address in accordance with information in a matching row of a virtual memory address matching table, and combining the shifted bits with high-order physical memory address bits also associated with the determined matching row and with low-order bits of the virtual memory address; and providing to the requesting process a starting address of the identified VCPNCMR. | 09-18-2014 |
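The address derivation in 20140281362 above (shift a contiguous field of VA bits as directed by a matching-table row, then combine it with the row's high-order PA bits and the low-order VA bits) reduces to a few mask-and-shift operations. The sketch below assumes a hypothetical match_row layout and field widths; it illustrates the arithmetic rather than the patented mechanism.

```c
#include <stdint.h>
#include <stdio.h>

/* One row of a hypothetical VA-matching table; field widths are
 * illustrative only. */
struct match_row {
    uint64_t va_base;       /* start of the VA range this row covers   */
    uint64_t va_limit;      /* end of the range (exclusive)            */
    unsigned low_bits;      /* number of low-order VA bits kept as-is  */
    unsigned field_bits;    /* width of the contiguous VA bit field    */
    int      shift;         /* signed shift applied to that field      */
    uint64_t pa_high;       /* high-order PA bits for this row         */
};

/* Derive a PA: keep the low-order VA bits, shift the selected middle
 * field, and OR in the row's high-order PA bits. */
static uint64_t derive_pa(const struct match_row *r, uint64_t va)
{
    uint64_t low   = va & ((1ULL << r->low_bits) - 1);
    uint64_t field = (va >> r->low_bits) & ((1ULL << r->field_bits) - 1);
    uint64_t moved = (r->shift >= 0) ? (field << r->shift) : (field >> -r->shift);
    return r->pa_high | (moved << r->low_bits) | low;
}

int main(void)
{
    struct match_row row = { 0x0, 0x100000000ULL, 12, 20, 3, 0x40000000000ULL };
    uint64_t va = 0x12345678ULL;
    if (va >= row.va_base && va < row.va_limit)
        printf("VA %#llx -> PA %#llx\n",
               (unsigned long long)va,
               (unsigned long long)derive_pa(&row, va));
    return 0;
}
```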
20140297990 | MEMORY ADDRESS TRANSLATION - The present disclosure includes devices, systems, and methods for memory address translation. One or more embodiments include a memory array and a controller coupled to the array. The array includes a first table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a data segment stored in the array and a logical address. The controller includes a second table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a record in the first table and a logical address. The controller also includes a third table having a number of records, wherein each record includes a number of entries, wherein each entry includes a physical address corresponding to a record in the second table and a logical address. | 10-02-2014 |
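The three tables in 20140297990 above chain together: the controller's third table locates a record in the second table, which locates a record in the first (in-array) table, which finally yields the data segment's physical address. A minimal sketch follows; indexing by 4-bit slices of the logical address and treating "physical addresses" as plain array indices are assumptions made only to keep the example self-contained.

```c
#include <stdint.h>
#include <stdio.h>

#define ENTRIES_PER_RECORD 16   /* illustrative record size */

struct entry  { uint64_t logical; uint64_t physical; };
struct record { struct entry e[ENTRIES_PER_RECORD]; };

/* Walk third -> second -> first table.  Each level resolves one slice
 * of the logical address to the "physical address" of a record (or, at
 * the last level, of the data segment itself).  In this sketch the
 * physical addresses are simply indices into the next array. */
static uint64_t translate(const struct record *t3, const struct record *t2,
                          const struct record *t1, uint64_t la)
{
    unsigned i3 = (la >> 8) & (ENTRIES_PER_RECORD - 1);
    unsigned i2 = (la >> 4) & (ENTRIES_PER_RECORD - 1);
    unsigned i1 =  la       & (ENTRIES_PER_RECORD - 1);

    uint64_t rec2 = t3[0].e[i3].physical;   /* record of the second table  */
    uint64_t rec1 = t2[rec2].e[i2].physical;/* record of the first table   */
    return t1[rec1].e[i1].physical;         /* physical address of the data */
}

int main(void)
{
    static struct record t3[1], t2[2], t1[3];
    uint64_t la = 0x0123;                   /* slices: i3=1, i2=2, i1=3    */
    t3[0].e[1].physical = 1;                /* -> record 1 of second table */
    t2[1].e[2].physical = 2;                /* -> record 2 of first table  */
    t1[2].e[3].physical = 0xABCD000;        /* -> data segment in the array */
    printf("LA %#llx -> PA %#llx\n", (unsigned long long)la,
           (unsigned long long)translate(t3, t2, t1, la));
    return 0;
}
```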
20140310500 | PAGE CROSS MISALIGN BUFFER - The present application describes embodiments of a method and apparatus including a page cross misalign buffer. Some embodiments of the apparatus include a store queue having a plurality of entries configured to store information associated with store instructions. A respective entry in the store queue can store a first portion of information associated with a page crossing store instruction. Some embodiments of the apparatus also include one or more buffers configured to store a second portion of information associated with the page crossing store instruction. | 10-16-2014 |
20140310501 | APPARATUS AND METHOD FOR CALCULATING PHYSICAL ADDRESS OF A PROCESSOR REGISTER - An apparatus and method for calculating a physical address of a register in a processor are provided. The apparatus includes an offset calculator configured to calculate an offset between the physical address and a logical address of the register, based on a current iteration number and a size of a rotating register; an address calculator configured to calculate the physical address of the register by adding the calculated offset to the logical address of the register; and an address corrector configured to output a final physical address of the register based on the calculated physical address and the size of the rotating register. | 10-16-2014 |
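For 20140310501 above, the offset/address/corrector split can be shown with simple modular arithmetic: the offset follows the iteration number modulo the rotating-register size, and the corrector wraps any result that runs past the rotating region. The exact correction rule used here is an assumption for illustration.

```c
#include <stdio.h>

/* Compute the physical register index for a logical rotating-register
 * index.  The offset advances by one slot per loop iteration and wraps
 * modulo the rotating-register region size. */
static unsigned rotate_register(unsigned logical, unsigned iteration,
                                unsigned rot_size)
{
    unsigned offset   = iteration % rot_size;   /* offset calculator  */
    unsigned physical = logical + offset;       /* address calculator */
    if (physical >= rot_size)                   /* address corrector  */
        physical -= rot_size;
    return physical;
}

int main(void)
{
    for (unsigned it = 0; it < 4; it++)
        printf("iteration %u: logical r2 -> physical r%u\n",
               it, rotate_register(2, it, 8));
    return 0;
}
```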
20140317374 | LOGICAL ADDRESS TRANSLATION - The present disclosure includes methods for logical address translation, methods for operating memory systems, and memory systems. One such method includes receiving a command associated with a logical address (LA), wherein the LA is in a particular range of LAs, and translating the LA to a physical location in memory using an offset corresponding to a number of physical locations skipped when writing data associated with a range of LAs other than the particular range. | 10-23-2014 |
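The offset-based translation in 20140317374 above amounts to adding a skip count when the logical address falls in the particular range. The sketch below assumes a flat linear layout and hypothetical names (translate_la, la_range); it only illustrates the arithmetic.

```c
#include <stdint.h>
#include <stdio.h>

struct la_range { uint64_t first_la; uint64_t last_la; };

/* Translate an LA in the "particular" range to a physical location,
 * adding the count of physical locations that were skipped while data
 * for another LA range was being written.  Other LAs map linearly. */
static uint64_t translate_la(uint64_t la, const struct la_range *special,
                             uint64_t region_base, uint64_t skipped)
{
    if (la >= special->first_la && la <= special->last_la)
        return region_base + (la - special->first_la) + skipped;
    return region_base + la;      /* plain linear mapping for other ranges */
}

int main(void)
{
    struct la_range r = { 0x1000, 0x1FFF };
    /* assume 0x40 physical locations were skipped for the other range */
    printf("LA 0x1004 -> physical %#llx\n",
           (unsigned long long)translate_la(0x1004, &r, 0x0, 0x40));
    return 0;
}
```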
20140317375 | SYSTEM AND METHOD TO PRIORITIZE LARGE MEMORY PAGE ALLOCATION IN VIRTUALIZED SYSTEMS - The prioritization of large memory page mapping is a function of the access bits in the L1 page table. In a first phase of operation, the number of set access bits in each of the L1 page tables is counted periodically and a current count value is calculated therefrom. During the first phase, no pages are mapped large even if identified as such. After the first phase, the current count value is used to prioritize among potential large memory pages to determine which pages to map large. The system continues to calculate the current count value even after the first phase ends. When using hardware assist, the access bits in the nested page tables are used and when using software MMU, the access bits in the shadow page tables are used for large page prioritization. | 10-23-2014 |
20140325179 | SYSTEM AND METHOD FOR WRITING PILOT DATA INTERSPERSED WITH USER DATA FOR ESTIMATING DISTURBANCE EXPERIENCED BY USER DATA - A system including a write module to write pilot data at predetermined locations in a page of memory cells that are interspersed with user data in the page. The pilot data has a first predetermined pattern and provides an indication of a disturbance experienced by the user data due to noise and a read, write, or erase operation performed on the page. A read module reads data from the predetermined locations subsequent to writing the pilot data. A signal processing module compares the data read from the predetermined locations with the pilot data and estimates, based on the comparison of the data read from the predetermined locations in the page with the pilot data, and the first predetermined pattern of the pilot data, the disturbance experienced by the user data due to the noise and the read, write, or erase operation performed on the page. | 10-30-2014 |
20140331023 | MULTI-CORE PAGE TABLE SETS OF ATTRIBUTE FIELDS - A device includes a memory that stores a first page table that includes a first page table entry, wherein the first page table entry further includes a physical address, an alternative location associated with the page table entry, and a physical page of memory associated with the physical address. A first processing unit is configured to: read the first page table entry, and determine the physical address from the first page table entry. A second processing unit is configured to: read the physical address from the first page table entry, determine second page attribute data from the alternative location, wherein the second page attribute data define one or more accessibility attributes of the physical page of memory for the second processing unit, and access the physical page of memory associated with the physical address according to the one or more accessibility attributes. | 11-06-2014 |
20140344548 | Stored Data Analysis - A system comprises a hashing logic, which executes instructions to convert raw data into a first logical address and payload data, where the first logical address describes metadata about the payload data. A hardware translation unit executes instructions to translate the first logical address into a first physical address on a storage device. A hardware load/storage unit stores the first logical address and the payload data at the first physical address on the storage device. A hardware exclusive OR (XOR) unit compares two logical address vectors to derive a Hamming distance between the two logical address vectors. A hardware retrieval unit retrieves other payload data that is stored at a second physical address whose second logical address is within a predefined Hamming distance from the first logical address, thus allowing payload data from the two logical addresses to be grouped/associated with one another. | 11-20-2014 |
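The XOR/Hamming-distance grouping in 20140344548 above is easy to make concrete: XOR the two logical-address vectors and count the set bits. The sketch uses the GCC/Clang __builtin_popcountll builtin; the 64-bit address width and the distance threshold of 4 are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Hamming distance between two logical-address vectors: XOR them and
 * count the set bits (__builtin_popcountll is a GCC/Clang builtin). */
static unsigned hamming_distance(uint64_t a, uint64_t b)
{
    return (unsigned)__builtin_popcountll(a ^ b);
}

int main(void)
{
    uint64_t la1 = 0xDEADBEEF00ULL;   /* metadata-derived logical address 1 */
    uint64_t la2 = 0xDEADBEEB00ULL;   /* metadata-derived logical address 2 */
    unsigned d = hamming_distance(la1, la2);
    printf("Hamming distance = %u -> %s\n", d,
           d <= 4 ? "group the payloads together" : "treat as unrelated");
    return 0;
}
```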
20140351552 | WORKING SET SWAPPING USING A SEQUENTIALLY ORDERED SWAP FILE - Techniques described enable efficient swapping of memory pages to and from a working set of pages for a process through the use of large writes and reads of pages to and from sequentially ordered locations in secondary storage. When writing pages from a working set of a process into secondary storage, the pages may be written into reserved, contiguous locations in a dedicated swap file according to a virtual address order or other order. Such writing into sequentially ordered locations enables reading in of clusters of pages in large, sequential blocks of memory, providing for more efficient read operations to return pages to physical memory. | 11-27-2014 |
20140372726 | MEMORY MANAGEMENT METHOD AND APPARATUS - A method for managing memory using a virtual memory manager includes receiving a memory allocation request, allocating memory of a physical address space in response to the memory allocation request, mapping an address value of the memory allocated in the physical address space to a consecutive primary virtual address space, and mapping the address value of the primary virtual address space to one of a first and a second secondary virtual address space to process a new memory allocation request in a situation where memory fragmentation occurs. Other embodiments are also disclosed. The methods and apparatuses of the present disclosure are capable of moving active memory blocks of the fragmented virtual memory space to another virtual memory space to resolve the memory fragmentation when it occurs. | 12-18-2014 |
20140380018 | Power Logic For Memory Address Conversion - In an embodiment, a processor includes a plurality of cores. Each core includes conversion power logic to receive an instruction including an untranslated memory address, determine whether a code segment (CS) base address is equal to zero, and in response to a determination that the CS base address is equal to zero, execute the instruction using the untranslated memory address. Other embodiments are described and claimed. | 12-25-2014 |
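The check in 20140380018 above is a single comparison: if the code-segment base is zero, the untranslated (effective) address already equals the linear address, so the base addition can be bypassed. A minimal sketch of that decision, with assumed names, follows.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* If the code-segment base is zero, the effective address equals the
 * linear address and the segment-base addition can be skipped; this
 * sketch models only that decision, not the power gating itself. */
static uint64_t effective_to_linear(uint64_t cs_base, uint64_t ea, bool *adder_used)
{
    if (cs_base == 0) {          /* flat segmentation: use the address as-is */
        *adder_used = false;
        return ea;
    }
    *adder_used = true;          /* otherwise pay for the base addition */
    return cs_base + ea;
}

int main(void)
{
    bool used;
    uint64_t la = effective_to_linear(0, 0x400000, &used);
    printf("linear %#llx, adder %s\n", (unsigned long long)la,
           used ? "used" : "bypassed");
    return 0;
}
```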
20150012722 | IDENTIFICATION OF PAGE SHARING OPPORTUNITIES WITHIN LARGE PAGES - Memory performance in a computer system that implements large page mapping is improved even when memory is scarce by identifying page sharing opportunities within the large pages at the granularity of small pages and breaking up the large pages so that small pages within the large page can be freed up through page sharing. In addition, the number of small page sharing opportunities within the large pages can be used to estimate the total amount of memory that could be reclaimed through page sharing. | 01-08-2015 |
20150032988 | REGULAR EXPRESSION MEMORY REGION WITH INTEGRATED REGULAR EXPRESSION ENGINE - A method and circuit arrangement selectively perform regular expression matching in connection with accessing data with a processing unit based upon one or more regular expression matching-related attributes stored in a memory address translation data structure such as an Effective To Real Translation (ERAT) or Translation Lookaside Buffer (TLB). A regular expression matching-related attribute in such a data structure may be used to control whether data being communicated between the processing unit and a communications bus is routed through an expression engine integrated with the processing unit such that regular expression matching may be performed in association with the data communication. | 01-29-2015 |
20150058593 | MERGING DIRECT MEMORY ACCESS WINDOWS - A computing device may merge two translation tables used when performing a DMA operation into a single, combined translation table. To merge the translation tables, the computing device may update a register in the IOMMU to include a pointer to the combined translation table. In addition, the IOMMU may clear, from one of the registers, the pointer to one of the merged translation tables. Doing so means the entries in this translation table are no longer assigned. The IOMMU may update the register with the pointer to the combined translation table to include the unassigned entries in the combined translation table. In this manner, the entries from the two translation tables are merged into the single, combined table. The combined translation table may be owned or assigned to a service provider that originally owned one of the merged translation tables or to a completely different service provider. | 02-26-2015 |
20150058594 | SPLITTING DIRECT MEMORY ACCESS WINDOWS - A computing device may split a translation table used when performing a DMA operation into two different translation tables. To split the translation table, the computing device may update the registers in the IOMMU to include pointers to the two different translation tables. For example, the IOMMU may update one register to point to the same starting address as the original translation table but assign a shorter length (i.e., fewer entries) to that table. The extra entries may then be used to form the other translation table by adding a new pointer to one of the IOMMU registers. The two translation tables may be owned by the same service provider or two different service providers. Alternatively, the computing device may assign the two tables to the same service provider which in turn assigns the tables to respective client devices executed by the service provider. | 02-26-2015 |
20150058595 | Systems and Methods for Implementing Dynamically Configurable Perfect Hash Tables - Hardware circuitry may evaluate minimal perfect hash functions mapping keys to addresses in lookup tables. The circuitry may include primary hash function sub-circuits that apply linear hash functions to input key values (using carry-free arithmetic) to produce primary hash values. Each sub-circuit may multiply bit vectors representing key values by a bit matrix and add a constant bit vector to the result. The circuitry may include a secondary hash function sub-circuit that generates secondary hash values by aggregating values associated with multiple primary hash values using signed, unsigned, or modular integer addition, or bit-wise XOR operations. Secondary hash values may be usable to access data values in the lookup table that are associated with particular input key values. The circuitry may determine the validity of input keys and may alter the configuration or contents of the lookup tables. The hash function sub-circuits may include programmable hash tables. | 02-26-2015 |
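The carry-free primary hash in 20150058595 above is a GF(2) matrix-vector product: AND each matrix row with the key, XOR-reduce the result (parity), then XOR in a constant vector; a secondary stage then aggregates table values, for which XOR is one of the options named in the abstract. The sketch below assumes an 8-bit output width and placeholder matrix and table contents, and it uses the GCC/Clang __builtin_parity builtin.

```c
#include <stdint.h>
#include <stdio.h>

#define HASH_BITS 8   /* illustrative output width */

/* Primary hash over GF(2): h = M*x ^ c, where each row of M is ANDed
 * with the key and reduced by XOR (parity), i.e. carry-free arithmetic. */
static uint8_t primary_hash(uint32_t key, const uint32_t matrix[HASH_BITS],
                            uint8_t constant)
{
    uint8_t h = 0;
    for (int row = 0; row < HASH_BITS; row++) {
        uint32_t masked = matrix[row] & key;
        h |= (uint8_t)(__builtin_parity(masked) << row); /* parity = XOR-reduce */
    }
    return h ^ constant;
}

/* Secondary hash: aggregate table values selected by two primary hashes
 * using bit-wise XOR. */
static uint8_t secondary_hash(uint8_t h1, uint8_t h2,
                              const uint8_t g[1 << HASH_BITS])
{
    return g[h1] ^ g[h2];
}

int main(void)
{
    static const uint32_t M[HASH_BITS] = {
        0x9908B0DF, 0x12345678, 0xDEADBEEF, 0x0F0F0F0F,
        0xA5A5A5A5, 0x33CC33CC, 0x55AA55AA, 0xFFFF0000
    };
    static uint8_t g[1 << HASH_BITS];   /* programmable table; zeros here */
    uint8_t h1 = primary_hash(0xCAFEBABE, M, 0x5A);
    uint8_t h2 = primary_hash(0xCAFEBABE, M, 0xA5);
    printf("lookup-table slot = %u\n", (unsigned)secondary_hash(h1, h2, g));
    return 0;
}
```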
20150058596 | MERGING DIRECT MEMORY ACCESS WINDOWS - A computing device may merge two translation tables used when performing a DMA operation into a single, combined translation table. To merge the translation tables, the computing device may update a register in the IOMMU to include a pointer to the combined translation table. In addition, the IOMMU may clear, from one of the registers, the pointer to one of the merged translation tables. Doing so means the entries in this translation table are no longer assigned. The IOMMU may update the register with the pointer to the combined translation table to include the unassigned entries in the combined translation table. In this manner, the entries from the two translation tables are merged into the single, combined table. The combined translation table may be owned or assigned to a service provider that originally owned one of the merged translation tables or to a completely different service provider. | 02-26-2015 |
20150058597 | SPLITTING DIRECT MEMORY ACCESS WINDOWS - A computing device may split a translation table used when performing a DMA operation into two different translation tables. To split the translation table, the computing device may update the registers in the IOMMU to include pointers to the two different translation tables. For example, the IOMMU may update one register to point to the same starting address as the original translation table but assign a shorter length (i.e., fewer entries) to that table. The extra entries may then be used to form the other translation table by adding a new pointer to one of the IOMMU registers. The two translation tables may be owned by the same service provider or two different service providers. Alternatively, the computing device may assign the two tables to the same service provider which in turn assigns the tables to respective client devices executed by the service provider. | 02-26-2015 |
20150067296 | I/O MEMORY MANAGEMENT UNIT PROVIDING SELF INVALIDATED MAPPING - A memory management unit for I/O devices uses page table entries to translate virtual addresses to physical addresses. The page table entries include removal rules allowing the I/O memory management unit to delete page table entries without CPU involvement, significantly reducing the CPU overhead involved in virtualized I/O data transactions. | 03-05-2015 |
20150067297 | DIRECT MEMORY ACCESS (DMA) ADDRESS TRANSLATION WITH A CONSECUTIVE COUNT FIELD - DMA translation table entries include a consecutive count (CC) field that indicates how many subsequent translation table entries point to successive real page numbers. A DMA address translation mechanism stores a value in the CC field when a translation table entry is stored, and updates the CC field in other affected translation table entries as well. When a translation table entry is read, and the CC field is non-zero, the DMA controller can use multiple RPNs from the access to the single translation table entry. Thus, if a translation table entry has a value of 2 in the CC field, the DMA address translation mechanism knows it can access the real page number (RPN) corresponding to the translation table entry, and also knows it can access the two subsequent RPNs without the need of reading the next two subsequent translation table entries. | 03-05-2015 |
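The consecutive-count optimization in 20150067297 above lets one table read cover a run of pages: an entry's CC field says how many following entries hold successive RPNs, so those RPNs can be derived by incrementing the one RPN already read. A small sketch under an assumed entry layout:

```c
#include <stdint.h>
#include <stdio.h>

/* One DMA translation table entry: a real page number plus a count of
 * how many *following* entries hold successive RPNs. */
struct tte { uint64_t rpn; unsigned cc; };

/* Translate `npages` consecutive I/O pages starting at index `idx`,
 * reading as few table entries as the CC fields allow. */
static void translate_run(const struct tte *table, unsigned idx, unsigned npages)
{
    while (npages) {
        struct tte e = table[idx];              /* one table read            */
        unsigned usable = e.cc + 1;             /* this RPN + cc successors  */
        if (usable > npages) usable = npages;
        for (unsigned i = 0; i < usable; i++)   /* no extra table reads here */
            printf("I/O page %u -> RPN %#llx\n", idx + i,
                   (unsigned long long)(e.rpn + i));
        idx    += usable;
        npages -= usable;
    }
}

int main(void)
{
    /* Entries 0..2 map to successive RPNs, so entry 0 carries CC = 2. */
    struct tte table[4] = { {0x100, 2}, {0x101, 1}, {0x102, 0}, {0x800, 0} };
    translate_run(table, 0, 4);
    return 0;
}
```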
20150082001 | TECHNIQUES FOR SUPPORTING FOR DEMAND PAGING - One embodiment of the present invention includes techniques to support demand paging across a processing unit. Before a host unit transmits a command to an engine that does not tolerate page faults, the host unit ensures that the virtual memory addresses associated with the command are appropriately mapped to physical memory addresses. In particular, if the virtual memory addresses are not appropriately mapped, then the processing unit performs actions to map the virtual memory address to appropriate locations in physical memory. Further, the processing unit ensures that the access permissions required for successful execution of the command are established. Because the virtual memory address mappings associated with the command are valid when the engine receives the command, the engine does not encounter page faults upon executing the command. Consequently, in contrast to prior-art techniques, the engine supports demand paging regardless of whether the engine is involved in remedying page faults. | 03-19-2015 |
20150089184 | Collapsed Address Translation With Multiple Page Sizes - A computer system that supports virtualization may maintain multiple address spaces. Each guest operating system employs guest virtual addresses (GVAs), which are translated to guest physical addresses (GPAs). A hypervisor, which manages one or more guest operating systems, translates GPAs to root physical addresses (RPAs). A merged translation lookaside buffer (MTLB) caches translations between the multiple addressing domains, enabling faster address translation and memory access. The MTLB can be logically addressable as multiple different caches, and can be reconfigured to allot different spaces to each logical cache. Further, a collapsed TLB provides an additional cache storing collapsed translations derived from the MTLB. | 03-26-2015 |
20150095609 | APPARATUS AND METHOD FOR COMPRESSING A MEMORY ADDRESS - An apparatus and method for converting between a full memory address and a compressed memory address. For example, one embodiment comprises one or more translation tables having a plurality of translation entries, each translation entry identifiable with a pointer value and storing a portion of a full memory address usable within the processor to address data and instructions; and address translation logic to use the translation tables to convert between the full address and a compressed version of the full address, the compressed version of the full address having the pointer value substituted for the portion of the full memory address, wherein a first portion of the processor performs operations using the compressed version of the full address and a second portion of the processor performs operations using the full address. | 04-02-2015 |
20150127922 | PHYSICAL ADDRESS MANAGEMENT IN SOLID STATE MEMORY - A storage system includes a memory controller connected to a solid state memory device and a read status table that tracks a pending read from the solid state memory device and a physical address of the solid state memory device that is associated with the pending read. The memory controller releases the physical address for reassignment when the read status table indicates that no pending reads are associated with the physical address. In certain embodiments, the read status table may be included within the memory controller. In certain embodiments, subsequent to the release of the physical address, erase operations may erase data at the physical address and the physical address may be reassigned to a new logical address by ensuing host write operations. | 05-07-2015 |
20150127923 | THIN PROVISIONING IN A STORAGE DEVICE - An apparatus, method, and computer-readable storage medium for allowing a block-addressable storage device to provide a sparse address space to a host computer. The storage device exports, to a host computing device, an address space which is larger than the storage capacity of the storage device. The storage device translates received file system object addresses in the larger address space to physical locations in the smaller address space of the storage device. This allows the host computing device more flexibility in selecting addresses for file system objects which are stored on the storage device. | 05-07-2015 |
20150134930 | Using Shared Virtual Memory Resources for Performing Memory-Mapping - Functionality is described herein for memory-mapping an information unit (such as a file) into virtual memory by associating shared virtual memory resources with the information unit. The functionality then allows processes (or other entities) to interact with the information unit via the shared virtual memory resources, as opposed to duplicating separate private instances of the virtual memory resources for each process that requests access to the information unit. The functionality also uses a single level of address translation to convert virtual addresses to corresponding physical addresses. In one implementation, the information unit is stored on a bulk-erase type block storage device, such as a flash storage device; here, the single level of address translation incorporates any address mappings identified by wear-leveling and/or garbage collection processing, eliminating the need for the storage device to perform separate and independent address mappings. | 05-14-2015 |
20150143072 | METHOD IN A MEMORY MANAGEMENT UNIT FOR MANAGING ADDRESS TRANSLATIONS IN TWO STAGES - A memory management unit (MMU) may manage address translations. The MMU may obtain a first intermediate physical address (IPA) based on a first virtual address (VA) relating to a first memory access request. The MMU may identify, based on the first IPA, a first memory page entry in a second address translation table. The MMU may store, in a second cache memory, a first IPA-to-PA translation based on the identified first memory page entry. The MMU may store, in the second cache memory and in response to the identification of the first memory page entry, one or more additional IPA-to-PA translations that are based on corresponding one or more additional memory page entries in the second address translation table. The one or more additional memory page entries may be contiguous to the first memory page entry. | 05-21-2015 |
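The two-stage walk in 20150143072 above (VA to IPA via the first table, IPA to PA via the second, then caching the hit plus a few contiguous second-stage entries) can be sketched with flat arrays standing in for the translation tables and the second cache memory. The flat 256-page tables, the PREFETCH count, and the 4 KB page size are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGES      256          /* toy address space of 256 pages            */
#define PREFETCH   4            /* extra contiguous stage-2 entries to cache */

static uint64_t stage1[PAGES];  /* IPA page for each VA page  (VA -> IPA) */
static uint64_t stage2[PAGES];  /* PA page for each IPA page  (IPA -> PA) */
static uint64_t s2_cache[PAGES];/* stand-in for the second cache memory   */

static uint64_t translate(uint64_t va)
{
    uint64_t vpn    = (va >> PAGE_SHIFT) & (PAGES - 1);
    uint64_t ipa_pn = stage1[vpn];                 /* first stage  */
    uint64_t pa_pn  = stage2[ipa_pn];              /* second stage */

    /* Cache the hit and a few entries contiguous to it, as the abstract
     * describes, so nearby IPAs translate without another table walk. */
    for (unsigned i = 0; i < PREFETCH && ipa_pn + i < PAGES; i++)
        s2_cache[ipa_pn + i] = stage2[ipa_pn + i];

    return (pa_pn << PAGE_SHIFT) | (va & ((1ULL << PAGE_SHIFT) - 1));
}

int main(void)
{
    stage1[2] = 7;                                  /* VA page 2 -> IPA page 7 */
    for (int i = 0; i < 8; i++) stage2[7 + i] = 0x40 + i;
    printf("VA 0x2ABC -> PA %#llx\n", (unsigned long long)translate(0x2ABC));
    return 0;
}
```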
20150149742 | MEMORY UNIT AND METHOD - A memory unit and method are disclosed. The memory unit comprises: at least one controller interfaced with at least one corresponding persistent memory device operable to store files in accordance with a file system; and a file mapping unit operable, in response to a virtual file access request from a memory management unit of a processor, the virtual file access request having a virtual address within a virtual address space associated with one of the files identifying data to be accessed, to map the virtual address to a physical address of the data within the one of the files using pre-stored mapping information and to issue a physical access request having the physical address to access the data within the one of the files. | 05-28-2015 |
20150301930 | FILE STORAGE VIA PHYSICAL BLOCK ADDRESSES - Examples disclosed herein provide systems, methods, and software for storing objects, such as files, via physical block addresses and media characteristics. In one example, a method for operating a processing system on a storage device includes identifying media characteristics for a storage media. The method further includes, for a given object received over a network and in association with a network storage request, identifying a plurality of physical block addresses for the given object based on the media characteristics. The method also includes initiating a transfer of the object from memory to the storage media. | 10-22-2015 |
20150309941 | OUT-OF-PLACE PRESETTING BASED ON INDIRECTION TABLE - An aspect of this invention is a method for providing a PreSET region in a memory device wherein the PreSET region includes one or more lines of the memory device which have been PreSET; performing a write operation on one or more out-of-place lines of the memory device by writing to the PreSET region instead of writing to an in-place line of the memory device; and storing in an indirection table a mapping of each of a respective plurality of logical pages of the memory device to a corresponding physical page of a plurality of physical pages of the memory device, wherein the indirection table keeps track of the one or more out-of-place lines. | 10-29-2015 |
20150324299 | TEMPORAL STANDBY LIST - In one embodiment, a memory management system temporarily maintains a memory page at an artificially high priority level. The memory management system may assign an initial priority level to a memory page in a page priority list. The memory management system may change the memory page to a target priority level in the page priority list after a protection period has expired. | 11-12-2015 |
20150378933 | STORAGE MANAGEMENT APPARATUS, COMPUTER-READABLE RECORDING MEDIUM HAVING STORED THEREIN STORAGE MANAGEMENT PROGRAM, AND CONTROL METHOD - A storage management apparatus configured to allocate physical addresses in a physical storage area to virtual addresses in a virtual storage area for storing data is provided. The storage management apparatus includes a processor that executes a process to define, in the physical storage area, a continuous area having a plurality of continuous physical addresses, and define, based on a virtual address to which a physical address in the continuous area has initially been allocated, an allocation range of virtual addresses for allocating the defined continuous area; and allocate a physical address in the defined continuous area to a virtual address in the defined allocation range. | 12-31-2015 |
20160011985 | TRANSLATING BETWEEN MEMORY TRANSACTIONS OF A FIRST TYPE AND MEMORY TRANSACTIONS OF A SECOND TYPE | 01-14-2016 |
20160019161 | PROGRAMMABLE ADDRESS MAPPING AND MEMORY ACCESS OPERATIONS - Programmable address mapping and memory access operations are disclosed. An example apparatus includes an address translator to translate a first host physical address to a first intermediate address. The example apparatus also includes a programmable address decoder to decode the first intermediate address to a first hardware memory address of a first addressable memory location in a memory, the programmable address decoder to receive a first command to associate the first host physical address with a second addressable memory location in the memory by changing a mapping between the first intermediate address and a second hardware memory address of the second addressable memory location. | 01-21-2016 |
20160041919 | SYSTEM AND METHOD FOR SELECTIVE SUB-PAGE DECOMPRESSION - Various embodiments of methods and systems for Selective Sub-Page Decompression (“SSPD”) seek to reduce unwanted latency in making requested data available to a processing component. To do so, SSPD embodiments may decompress a memory page in sub-page segments. Certain SSPD embodiments may decompress the sub-pages in parallel, using a plurality of available decompression engines. Certain other SSPD embodiments may decompress the sub-pages in a serial manner, using one or more available decompression engines and starting with a target sub-page that contains a requested chunk of data. In these ways, SSPD embodiments may make a requested chunk of data available to a processing component more quickly than other systems and methods known in the art. | 02-11-2016 |
20160041920 | SYSTEMS AND METHODS FOR MANAGING READ-ONLY MEMORY - Embodiments for managing read-only memory. A system includes a memory device including a real memory and a tracking mechanism configured to track relationships between multiple virtual memory addresses and real memory. The system further includes a processor configured to perform the below method and/or execute the below computer program product. One method includes mapping a first virtual memory address to a real memory in a memory device and mapping a second virtual memory address to the real memory. | 02-11-2016 |
20160062886 | METHOD, DEVICE AND SYSTEM FOR DATA PROCESSING - An example relates to a method for data processing comprising: mapping between a logical address and a physical address of a memory, wherein the memory comprises several pages, wherein a group of pages comprises at least one page that comprises at least two portions, and wherein the at least two portions of each page of the group are not part of a single-page logical address space. | 03-03-2016 |
20160070653 | Methods for Scheduling Read Commands and Apparatuses using the Same - A method for scheduling read commands, performed by a processing unit, including at least the following steps. Logical read commands are received from a master device via a first access interface, where each logical read command requests to read data of a logical address. First physical storage locations of mapping segments associated with the logical addresses are obtained from a high-level mapping table, and a second access interface is directed to read the mapping segments from the first physical storage locations of a storage unit. Second physical storage locations associated with the logical addresses are obtained from the mapping segments, and the second access interface is directed to read data from the second physical storage locations of the storage unit. The first access interface is directed to clock the data of the logical addresses out to the master device. | 03-10-2016 |
20160147668 | MEMORY ADDRESS RE-MAPPING OF GRAPHICS DATA - A method and apparatus for creating, updating, and using guest physical address (GPA) to host physical address (HPA) shadow translation tables for translating GPAs of graphics data direct memory access (DMA) requests of a computing environment implementing a virtual machine monitor to support virtual machines. The requests may be sent through a render or display path of the computing environment from one or more virtual machines, transparently with respect to the virtual machine monitor. The creating, updating, and using may be performed by a memory controller detecting entries sent to existing global and page directory tables, forking off shadow table entries from the detected entries, and translating GPAs to HPAs for the shadow table entries. | 05-26-2016 |
20160147670 | PAGE CACHE DEVICE AND METHOD FOR EFFICIENT MAPPING - Embodiments of the inventive concept can include a multi-stage mapping technique for a page cache controller. For example, a gigantic virtual page address space can be mapped to a physical page address efficiently, both in terms of time and space. An internal mapping module can implement a mapping technique for kernel virtual page address caching. In some embodiments, the mapping module can include integrated balanced skip lists and page tables for mapping sparsely populated kernel virtual page address space or spaces to physical block (i.e., page) address space or spaces. The mapping module can automatically and dynamically convert one or more sections from a skip list to a page table, or from a page table to a skip list. Thus, the kernel page cache can be extended to have larger secondary memory using volatile or non-volatile page cache storage media. | 05-26-2016 |
20160170899 | EMBEDDED DEVICE AND MEMORY MANAGEMENT METHOD THEREOF | 06-16-2016 |
20160170900 | VIRTUAL MEMORY ADDRESS RANGE REGISTER | 06-16-2016 |
20160179697 | MEMORY SYSTEM AND OPERATING METHOD THEREOF | 06-23-2016 |
20160188483 | PROCESSING PAGE FAULT EXCEPTIONS IN SUPERVISORY SOFTWARE WHEN ACCESSING STRINGS AND SIMILAR DATA STRUCTURES USING NORMAL LOAD INSTRUCTIONS - Embodiments are directed to a method of accessing a data frame, wherein a first portion of the data frame is in a first memory block, and wherein a second portion of the data frame is in a second memory block. The method includes determining that an access of the data frame crosses a boundary between the first and second memory blocks, determining that an attempted translation of an address of the first portion of the data frame in the first memory block did not result in a translation fault, and accessing the first portion of the data frame. The method further includes, based at least in part on a determination that an attempted translation of an address of the second portion of the data frame in the second memory block resulted in a translation fault, accessing at least one default character as a replacement for accessing the second portion of the data frame. | 06-30-2016 |
20160188485 | PROCESSING PAGE FAULT EXCEPTIONS IN SUPERVISORY SOFTWARE WHEN ACCESSING STRINGS AND SIMILAR DATA STRUCTURES USING NORMAL LOAD INSTRUCTIONS - Embodiments are directed to a method of accessing a data frame, wherein a first portion of the data frame is in a first memory block, and wherein a second portion of the data frame is in a second memory block. The method includes determining that an access of the data frame crosses a boundary between the first and second memory blocks, determining that an attempted translation of an address of the first portion of the data frame in the first memory block did not result in a translation fault, and accessing the first portion of the data frame. The method further includes, based at least in part on a determination that an attempted translation of an address of the second portion of the data frame in the second memory block resulted in a translation fault, accessing at least one default character as a replacement for accessing the second portion of the data frame. | 06-30-2016 |
20160202918 | MULTI-LEVEL PAGING AND ADDRESS TRANSLATION IN A NETWORK ENVIRONMENT | 07-14-2016 |
20160378674 | SHARED VIRTUAL ADDRESS SPACE FOR HETEROGENEOUS PROCESSORS - A processor uses the same virtual address space for heterogeneous processing units of the processor. The processor employs different sets of page tables for different types of processing units, such as a CPU and a GPU, wherein a memory management unit uses each set of page tables to translate virtual addresses of the virtual address space to corresponding physical addresses of memory modules associated with the processor. As data is migrated between memory modules, the physical addresses in the page tables can be updated to reflect the physical location of the data for each processing unit. | 12-29-2016 |
20160378675 | GENERATING DATA TABLES - The method includes identifying a first data table that includes a set of rows and a structure. The method further includes creating a second data table and a third data table having the same structure as the first data table. The method further includes distributing the set of rows of the first data table, wherein the set of rows is distributed between one or more of the second data table and the third data table based upon preset parameters. The method further includes generating one or more operations for the set of rows. The method further includes executing one of the one or more generated operations on the second data table and the third data table. | 12-29-2016 |
20160378679 | TECHNOLOGIES FOR POSITION-INDEPENDENT PERSISTENT MEMORY POINTERS - Technologies for persistent memory pointer access include a computing device having a persistent memory including one or more nonvolatile regions. The computing device may load a persistent memory pointer having a static region identifier, a segment identifier, and an offset from the persistent memory. The computing device may map the static region identifier to a dynamic region identifier and determine a virtual memory address of the persistent memory pointer target based on the dynamic region identifier, the segment identifier, and the offset. The computing device may load an in-storage representation of a persistent-export pointer from the persistent memory, map the in-storage representation to a runtime representation, and determine a target address of a persistent external data object based on the runtime representation. The computing device may include a compiler to generate output code including persistent memory pointer and/or persistent-export pointer accesses. Other embodiments are described and claimed. | 12-29-2016 |
20180024939 | METHOD FOR EXECUTING A REQUEST TO EXCHANGE DATA BETWEEN FIRST AND SECOND DISJOINT PHYSICAL ADDRESSING SPACES OF CHIP OR CARD CIRCUIT | 01-25-2018 |
20190146929 | ADDRESS TRANSLATION PRIOR TO RECEIVING A STORAGE REFERENCE USING THE ADDRESS TO BE TRANSLATED | 05-16-2019 |
20220138112 | MEMORY EFFICIENT VIRTUAL ADDRESS MANAGEMENT FOR SYSTEM CALLS - Systems and methods for managing host virtual addresses in a system call are disclosed. In one implementation, a processing device may receive, by a supervisor managing a first application, a system call initiated by the first application, wherein a first parameter of the system call specifies a memory buffer virtual address of the first application and a second parameter of the system call specifies the memory buffer virtual address of a second application. The processing device may also translate the memory buffer virtual address of the first application to a first physical address and may translate the memory buffer virtual address of the second application to a second physical address. The processing device may further compare the first physical address to the second physical address and, responsive to determining that the first physical address matches the second physical address, the processing device may execute the system call using the memory buffer virtual address of the second application. | 05-05-2022 |
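The check in 20220138112 above reduces to translating both buffer virtual addresses and executing the call only when the resulting physical addresses match. The sketch below stubs out the per-application translations and the syscall body, so every name and mapping in it is a placeholder rather than the patent's implementation.

```c
#include <stdint.h>
#include <stdio.h>

/* Stub translations standing in for per-application page-table walks;
 * here both map their buffer VA onto the same shared physical page. */
static uint64_t translate_app1(uint64_t va) { return 0x7000 + (va & 0xFFF); }
static uint64_t translate_app2(uint64_t va) { return 0x7000 + (va & 0xFFF); }

static int do_syscall_with(uint64_t va)
{
    printf("executing syscall against VA %#llx\n", (unsigned long long)va);
    return 0;
}

/* Run the call with the second application's buffer address only if both
 * VAs resolve to the same physical address, i.e. the same buffer. */
static int handle_syscall(uint64_t buf_va_app1, uint64_t buf_va_app2)
{
    if (translate_app1(buf_va_app1) != translate_app2(buf_va_app2))
        return -1;                        /* not the same buffer: reject  */
    return do_syscall_with(buf_va_app2);  /* match: use app2's mapping    */
}

int main(void)
{
    return handle_syscall(0x1000123, 0x2000123);
}
```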