Class / Patent application number | Description | Number of patent applications / Date published |
711123000 | User data cache and instruction data cache | 40 |
20090089506 | STRUCTURE FOR CACHE FUNCTION OVERLOADING - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design is provided. The design structure generally includes a system that includes a cache that stores information in a cache line for processing, wherein the cache line includes at least a first field configured to store an instruction or data and at least a second field configured to store parity information, a parity register that includes a parameter indicative of whether parity generation and checking is disabled for the information in the cache line, and a processor that sets the second field in the cache line to include a value, which indicates a corresponding action to be performed, when the parameter in the parity register indicates that parity generation and checking is disabled for the cache line. (See sketch 1 following this table.) | 04-02-2009 |
20100138608 | Method and apparatus for pipeline inclusion and instruction restarts in a micro-op cache of a processor - Methods and apparatus for instruction restarts and inclusion in processor micro-op caches are disclosed. Embodiments of micro-op caches have way storage fields to record the instruction-cache ways storing corresponding macroinstructions. Instruction-cache in-use indications associated with the instruction-cache lines storing the instructions are updated upon micro-op cache hits. In-use indications can be located using the recorded instruction-cache ways in micro-op cache lines. Victim-cache deallocation micro-ops are enqueued in a micro-op queue after micro-op cache miss synchronizations, responsive to evictions from the instruction-cache into a victim-cache. Inclusion logic also locates and evicts micro-op cache lines corresponding to the recorded instruction-cache ways, responsive to evictions from the instruction-cache. | 06-03-2010 |
20100293333 | MULTIPLE CACHE DIRECTORIES - A first portion of an identifier can be used to assign the identifier to a slot in a first directory. The identifier can identify a cache unit in a cache. It can be determined whether assignment of the identifier to the slot in the first directory will result in the identifier and one or more other identifiers being assigned to the same slot in the first directory. If so, then the technique can include (1) using a second portion of the identifier to assign the identifier to a slot in a second directory; and (2) assigning the one or more other identifiers to one or more slots in the second directory. In addition, it can be determined whether a directory in a cache lookup data structure includes more than one pointer. If not, then a parent pointer that points to the subject directory can be removed. (See sketch 2 following this table.) | 11-18-2010 |
20100318742 | Partitioned Replacement For Cache Memory - In a particular embodiment, a circuit device includes a translation look-aside buffer (TLB) configured to receive a virtual address and to translate the virtual address to a physical address of a cache having at least two partitions. The circuit device also includes a control logic circuit adapted to identify a partition replacement policy associated with the identified one of the at least two partitions based on a partition indicator. The control logic circuit controls replacement of data within the cache according to the identified partition replacement policy in response to a cache miss event. (See sketch 3 following this table.) | 12-16-2010 |
20110125968 | ARCHITECTURE AND METHOD FOR CACHE-BASED CHECKPOINTING AND ROLLBACK - A cache system to compare memory transactions while facilitating checkpointing and rollback is provided. The system includes at least one processor core including at least one cache operating in write-through mode, at least two checkpoint caches operating in write-back mode, a comparison/checkpoint logic, and a main memory. The at least two checkpoint caches are communicatively coupled to the at least one cache operating in write-through mode. The comparison/checkpoint logic is communicatively coupled to the at least two checkpoint caches. The comparison/checkpoint logic compares memory transactions stored in the at least two checkpoint caches responsive to an initiation of a checkpointing. The main memory is communicatively coupled to at least one of the at least two checkpoint caches. | 05-26-2011 |
20110125969 | Cache memory control device, semiconductor integrated circuit, and cache memory control method - A cache memory control device includes cache memories shared by arithmetic processing units, buses shared by the arithmetic processing units to transfer data, an instruction execution unit that accesses the cache memories to execute an access instruction from the arithmetic processing unit, and transfers data from the cache memory to the bus, an instruction feeding unit that feeds the access instruction to the instruction execution unit while inhibiting feeding of a subsequent access instruction for the cache memory accessed in the preceding access instruction in an execution period of the preceding access instruction and inhibiting feeding of a subsequent access instruction using the same bus as the preceding access instruction in a predetermined period, and a timing control unit that, depending on the type of the subsequent access instruction, controls the instruction execution unit to delay the transfer of the data from the cache memory to the bus. | 05-26-2011 |
20110138125 | EVENT TRACKING HARDWARE - An event tracking hardware engine having N (≧2) caches is invoked when an event of interest occurs, using a corresponding key. The engine stores, for each of the different kinds of events, a corresponding cumulative number of occurrences, by carrying out additional steps. In some instances, the additional steps include searching in the N caches for an entry for the key; if an entry for the key is found, and no overflow of the corresponding cumulative number of occurrences for the entry for the key would occur by incrementing the corresponding cumulative number of occurrences, incrementing; if the entry for the key is found, and overflow would occur, promoting the entry to a next highest cache; and if the entry for the key is not found, entering the entry for the key in a zeroth one of the caches with the corresponding cumulative number of occurrences being initialized. In other instances, the additional steps include searching in a zeroth one of the caches for an entry for the key; if an entry for the key is found in the zeroth one of the caches, and no overflow of the corresponding cumulative number of occurrences for the entry for the key would occur by incrementing the corresponding cumulative number of occurrences, incrementing; if the entry for the key is found in the zeroth one of the caches, and overflow would occur, promoting the entry from the zeroth one of the caches in which the entry exists to a next highest cache; and if the entry for the key is not found, entering the entry for the key in the zeroth one of the caches with the corresponding cumulative number of occurrences being initialized. The engine includes a plurality of caches and a corresponding plurality of control circuits. (See sketch 4 following this table.) | 06-09-2011 |
20120272005 | SAVING LOG DATA USING A DISK SYSTEM AS PRIMARY CACHE AND A TAPE LIBRARY AS SECONDARY CACHE - Various embodiments save a plurality of log data in a hierarchical storage management system using a disk system as a primary cache with a tape library as a secondary cache. The user data is stored in the primary cache and written into the secondary cache at a subsequent period of time. The plurality of blank tapes in the secondary cache is prepared for storing the user data and the plurality of log data based on priorities. At least one of the plurality of blank tapes is selected for copying the plurality of log data and the user data from the primary cache to the secondary cache based on priorities. The plurality of log data is stored in the primary cache. The selection of at least one of the plurality of blank tapes completely filled with the plurality of log data is delayed for writing additional amounts of user data. | 10-25-2012 |
20130024619 | MULTILEVEL CONVERSION TABLE CACHE FOR TRANSLATING GUEST INSTRUCTIONS TO NATIVE INSTRUCTIONS - A method for translating instructions for a processor. The method includes accessing a guest instruction and performing a first level translation of the guest instruction using a first level conversion table. The method further includes outputting a resulting native instruction when the first level translation proceeds to completion. A second level translation of the guest instruction is performed using a second level conversion table when the first level translation does not proceed to completion, wherein the second level translation further processes the guest instruction based upon a partial translation from the first level conversion table. The resulting native instruction is output when the second level translation proceeds to completion. (See sketch 5 following this table.) | 01-24-2013 |
20130046937 | TRANSACTIONAL MEMORY SYSTEM WITH EFFICIENT CACHE SUPPORT - A computer implemented method for use by a transaction program for managing memory access to a shared memory location for transaction data of a first thread, the shared memory location being accessible by the first thread and a second thread. A string of instructions to complete a transaction of the first thread is executed, beginning with one instruction of the string of instructions. It is determined whether the one instruction is part of an active atomic instruction group (AIG) of instructions associated with the transaction of the first thread. A cache structure and a transaction table which together provide for entries in an active mode for the AIG are located if the one instruction is part of an active AIG. The next instruction is executed under a normal execution mode in response to determining that the one instruction is not part of an active AIG. | 02-21-2013 |
20130191597 | CACHE DEVICE, COMMUNICATION APPARATUS, AND COMPUTER PROGRAM PRODUCT - A cache device may include a first storage unit configured to store first cache data, a second storage unit configured to store a cache file that stores a copy of the first cache data as second cache data, a reading unit configured to select and read out one of the first cache data, which has been stored in the first storage unit, and the second cache data, which has been stored in the cache file stored in the second storage unit, in response to a reference request from outside, and an instructing unit configured to determine the probability of a future reference request based on the frequency of past reference requests, the instructing unit being configured to instruct that either the first cache data or the second cache data is to be selected and read out based on the probability. | 07-25-2013 |
20130198458 | TRANSITIONING FROM SOURCE INSTRUCTION SET ARCHITECTURE (ISA) CODE TO TRANSLATED CODE IN A PARTIAL EMULATION ENVIRONMENT - In one embodiment, a processor can operate in multiple modes, including a direct execution mode and an emulation execution mode. More specifically, the processor may operate in a partial emulation model in which source instruction set architecture (ISA) instructions are directly handled in the direct execution mode and translated code generated by an emulation engine is handled in the emulation execution mode. Embodiments may also provide for efficient transitions between the modes using information that can be stored in one or more storages of the processor and elsewhere in a system. Other embodiments are described and claimed. | 08-01-2013 |
20130219123 | MULTI-CORE PROCESSOR SHARING L1 CACHE - A multi-core processor comprises a level 1 (L1) cache and two independent processor cores each sharing the L1 cache. | 08-22-2013 |
20130254487 | METHOD FOR ACCESSING MIRRORED SHARED MEMORIES AND STORAGE SUBSYSTEM USING METHOD FOR ACCESSING MIRRORED SHARED MEMORIES - In a prior art storage subsystem, shared memories are mirrored in the main memories of two processors to provide redundancy. When the consistency of the data writing order is not ensured among the mirrored shared memories, the processors must read from only one of the mirrored shared memories so that the write order of the read data is consistent between the two processors. As a result, upon reading data from the shared memories, one processor must read data from the main memory of the other processor, which increases overhead compared to the case where each processor reads its own main memory. In the storage subsystem of the present invention, a packet redirector employing a non-transparent bridge enables PCI Express multicast to be applied to the writing of data from the processor to the main memory, so that the order of writing data into the shared memories is kept consistent among the mirrored memories. As a result, data can be read from the shared memories speedily, with each processor accessing its own main memory. | 09-26-2013 |
20130339610 | CACHE LINE HISTORY TRACKING USING AN INSTRUCTION ADDRESS REGISTER FILE - Embodiments relate to tracking cache lines. An aspect of embodiments includes performing an operation by a processor. Another aspect of embodiments includes fetching a cache line based on the operation. Yet another aspect of embodiments includes storing in an instruction address register file at least one of (i) an operation identifier identifying the operation and (ii) a memory location identifier identifying a level of memory from which the cache line is populated. | 12-19-2013 |
20140047185 | System and Method for Data Redundancy Within a Cache - In one embodiment, a computing system includes a cache and a cache manager. The cache manager is able to receive data, write the data to a first portion of the cache, write the data to a second portion of the cache, and delete the data from the second portion of the cache when the data in the first portion of the cache is flushed. (See sketch 6 following this table.) | 02-13-2014 |
20140082288 | SYSTEM AND METHOD FOR OPERATING A SYSTEM TO CACHE A NETWORKED FILE SYSTEM - A network attached storage (NAS) caching appliance, system, and associated method of operation for caching a networked file system. Still further, some embodiments provide for a cache system that implements a multi-tiered, policy-influenced block replacement algorithm. | 03-20-2014 |
20140095793 | STORAGE SYSTEM - An object of the present invention is to provide a storage system which is shared by a plurality of application programs, wherein optimum performance tuning for a cache memory can be performed for each of the individual application programs. The storage system of the present invention comprises a storage device which provides a plurality of logical volumes which can be accessed from a plurality of application programs, a controller for controlling input and output of data to and from the logical volumes in response to input/output requests from the plurality of application programs, and a cache memory for temporarily storing data input to and output from the logical volume, wherein the cache memory is logically divided into a plurality of partitions which are exclusively assigned to the plurality of logical volumes respectively. | 04-03-2014 |
20140156933 | System, Method, and Apparatus for Improving Throughput of Consecutive Transactional Memory Regions - Systems, apparatuses, and methods for improving TM throughput using a TM region indicator (or color) are described. Through the use of TM region indicators, younger TM regions can have their instructions retired while waiting for older TM regions to commit. | 06-05-2014 |
20140156934 | STORAGE APPARATUS AND MODULE-TO-MODULE DATA TRANSFER METHOD - A storage apparatus includes controller modules, each configured to have a cache memory and to control a storage device, and communication channels that connect the controller modules in a mesh topology, where one controller module provides an instruction to perform data transfer in which that controller module is specified as the transfer source and another controller module is specified as the transfer destination. The instruction is provided to a controller module directly connected to the other controller module using a corresponding one of the communication channels, which is configured to perform data transfer from the cache memory of the one controller module to the cache memory of the other controller module in accordance with the instruction. | 06-05-2014 |
20140281244 | STORAGE APPARATUS AND CONTROL METHOD FOR STORAGE APPARATUS - An exemplary storage apparatus of the invention includes storage devices for storing data of block I/O commands and file I/O commands and a controller including a block cache area and a file cache area. The controller creates block I/O commands from file I/O commands and accesses the storage devices in accordance with the created block I/O commands. In a case where the file cache area is lacking an area to cache first data of a received first file I/O command, the controller chooses one of a first cache method that newly creates a free area in the file cache area to cache the first data in the file cache area and a second cache method that caches the first data in the block cache area without caching the first data in the file cache area. | 09-18-2014 |
20140281245 | VIRTUAL UNIFIED INSTRUCTION AND DATA CACHES - Execution of a store instruction to modify an instruction at a memory location identified by a memory address is requested. A cache controller stores the memory address and the modified data in an associative memory coupled to a data cache and an instruction cache. In addition, the modified data is stored in a second level cache without invalidating the memory location associated with the instruction cache. | 09-18-2014 |
20140289471 | COHERENCE DE-COUPLING BUFFER - A coherence decoupling buffer. In accordance with a first embodiment, a coherence decoupling buffer is for storing tag information of cache lines evicted from a plurality of cache memories. A coherence decoupling buffer may be free of value information of the plurality of cache memories. A coherence decoupling buffer may also be combined with a coherence memory. | 09-25-2014 |
20140304474 | Conditional Notification Mechanism - The described embodiments comprise a computing device with a first processor core and a second processor core. In some embodiments, during operations, the first processor core receives, from the second processor core, an indication of a memory location and a flag. The first processor core then stores the flag in a first cache line in a cache in the first processor core and stores the indication of the memory location separately in a second cache line in the cache. Upon encountering a predetermined result when evaluating a condition for the indicated memory location, the first processor core updates the flag in the first cache line. Based on the update of the flag, the first processor core causes the second processor core to perform an operation. | 10-09-2014 |
20140310469 | COHERENCE PROCESSING WITH PRE-KILL MECHANISM TO AVOID DUPLICATED TRANSACTION IDENTIFIERS - An apparatus for processing coherency transactions in a computing system is disclosed. The apparatus may include a request queue circuit, a duplicate tag circuit, and a memory interface unit. The request queue circuit may be configured to generate a speculative read request dependent upon a received read transaction. The duplicate tag circuit may be configured to store copies of tags from one or more cache memories, and to generate a kill message in response to a determination that data requested in the received read transaction is stored in a cache memory. The memory interface unit may be configured to store the generated speculative read request dependent upon a stall condition. The stored speculative read request may be sent to a memory controller dependent upon the stall condition. The memory interface unit may be further configured to delete the speculative read request in response to the kill message. | 10-16-2014 |
20140337581 | POINTER CHASING PREDICTION - A system and method for efficient scheduling of dependent load instructions. A processor includes both an execution core and a scheduler that issues instructions to the execution core. The execution core includes a load-store unit (LSU). The scheduler determines a first condition is satisfied, wherein the first condition comprises result data for a first load instruction is predicted eligible for LSU-internal forwarding. The scheduler determines a second condition is satisfied, wherein the second condition comprises a second load instruction younger in program order than the first load instruction is dependent on the first load instruction. In response to each of the first condition and the second condition being satisfied, the scheduler can issue the second load instruction earlier than it otherwise would. The LSU internally forwards the received result data from the first load instruction to address generation logic for the second load instruction. | 11-13-2014 |
20150081975 | Split-Word Memory - Method, process, and apparatus to efficiently store, read, and/or process syllables of word data. A portion of a data word, which includes multiple syllables, may be read by a computer processor. The processor may read a first syllable of the data word from a first memory. The processor may read a second syllable of the data word from a second memory. The second syllable may include bits which are less critical than the bits of the first syllable. The second memory may be distinct from the first memory based on one or more physical attributes. | 03-19-2015 |
20150089140 | Movement Offload To Storage Systems - In a write by-peer-reference, a storage device client writes a data block to a target storage device in the storage system by sending a write request to the target storage device, the write request specifying information used to obtain the data block from a source storage device in the storage system. The target storage device sends a read request to the source storage device for the data block. The source storage device sends the data block to the target storage device, which then writes the data block to the target storage device. The data block is thus written to the target storage device without the storage device client transmitting the data block itself to the target storage device. | 03-26-2015 |
20150100733 | Efficient Memory Organization - A computer system and method are disclosed for efficient cache memory organization. One embodiment of the disclosed system includes dividing the tag memory into physically separated memory arrays, with the entries of each array referencing cache lines in such a way that no two cache lines which are consecutively aligned in data cache memory reside in the same array. In another embodiment, the entries of the two memory arrays reference consecutively aligned cache lines in an alternating manner. (See sketch 7 following this table.) | 04-09-2015 |
20150127911 | BOUNDED CACHE SEARCHES - Cache lines of a data cache may be assigned to a specific page type or color. In addition, the computing system may monitor when a cache line assigned to the specific page color is allocated in the cache. As each cache line assigned to a particular page color is allocated, the computing system may compare a respective index associated with each of the cache lines to determine maximum and minimum indices for that page color. These indices define a block of the cache that stores the data assigned to the page color. Thus, when the data of a page color is evicted from the cache, instead of searching the entire cache to locate the cache lines, the computing system uses the maximum and minimum indices as upper and lower bounds to reduce the portion of the cache that is searched. (See sketch 8 following this table.) | 05-07-2015 |
20150149723 | HIGH-PERFORMANCE INSTRUCTION CACHE SYSTEM AND METHOD - A method is provided for facilitating operation of a processor core coupled to a first memory containing executable instructions, a second memory faster than the first memory, and a third memory faster than the second memory. The method includes examining instructions being filled from the second memory to the third memory and extracting instruction information containing at least branch information; creating a plurality of tracks based on the extracted instruction information; filling, from the first memory to the second memory, one or more instructions that may be executed by the processor core, based on one or more tracks of the plurality of tracks; and filling, from the second memory to the third memory, one or more instructions based on one or more tracks of the plurality of tracks before the processor core executes the instructions, such that the processor core fetches the instructions from the third memory. | 05-28-2015 |
20150149724 | ARITHMETIC PROCESSING DEVICE, ARITHMETIC PROCESSING SYSTEM, AND METHOD FOR CONTROLLING ARITHMETIC PROCESSING DEVICE - An arithmetic processing device includes arithmetic cores, wherein each arithmetic core comprises: an instruction controller configured to request processing corresponding to an instruction; a memory configured to store lock information indicating that a locking target address is locked, the locking target address, and priority information of the instruction; and a cache controller configured to, when storing data of a first address in a cache memory to execute a first instruction including locking of the first address from the instruction controller, suppress updating of the memory if the lock information is stored in the memory and a priority of the priority information is higher than a first priority of the first instruction. | 05-28-2015 |
20150355978 | SYSTEMS AND METHODS FOR BACKING UP STORAGE VOLUMES IN A STORAGE SYSTEM - Systems and methods for backing up storage volumes are provided. One system includes a primary side, a secondary side, and a network coupling the primary and secondary sides. The secondary side includes first and second virtual tape servers (VTS), each including a cache and storage tape. The first VTS is configured to store a first portion of a group of storage volumes in its cache and migrate the remaining portion to its storage tape. The second VTS is configured to store the remaining portion of the storage volumes in its cache and migrate the first portion to its storage tape. One method includes receiving multiple storage volumes from a primary side, storing the storage volumes in the cache of the first and second VTS, and migrating a portion of the storage volumes from the cache to storage tape in the first VTS. | 12-10-2015 |
20160004558 | ALERTING HARDWARE TRANSACTIONS THAT ARE ABOUT TO RUN OUT OF SPACE - A transactional memory system determines whether to pass control of a transaction to an about-to-run-out-of-resource handler. A processor of the transactional memory system determines information about an about-to-run-out-of-resource handler for transaction execution of a code region of a hardware transaction. The processor dynamically monitors an amount of available resource for the currently running code region of the hardware transaction. The processor detects that the amount of available resource for transactional execution of the hardware transaction is below a predetermined threshold level. The processor, based on the detecting, saves speculative state information of the hardware transaction, and executes the about-to-run-out-of-resource handler, the about-to-run-out-of-resource handler determining whether the hardware transaction is to be aborted or salvaged. | 01-07-2016 |
20160018995 | RAID SYSTEM FOR PROCESSING I/O REQUESTS UTILIZING XOR COMMANDS - A controller for maintaining data consistency without utilizing region lock is disclosed. The controller is connected to multiple physical disk drives, and the physical disk drives include a data portion and a parity data portion that corresponds to the data portion. The controller can receive a first input/output (I/O) command from a first computing device for writing write data to the data portion and a second I/O command from a second computing device for accessing data from the data portion. The controller allocates a first buffer for storing data associated with the first I/O command and allocates a second buffer for storing data associated with a logical operation. The controller initiates a logical operation that comprises an exclusive OR operation directed to the write data and the read data to obtain resultant exclusive OR data and copies the write data to the data portion. (See sketch 9 following this table.) | 01-21-2016 |
20160140042 | INSTRUCTION CACHE TRANSLATION MANAGEMENT - Managing an instruction cache of a processing element, the instruction cache including a plurality of instruction cache entries, each entry including a mapping of a virtual memory address to one or more processor instructions, includes: issuing, at the processing element, a translation lookaside buffer invalidation instruction for invalidating a translation lookaside buffer entry in a translation lookaside buffer, the translation lookaside buffer entry including a mapping from a range of virtual memory addresses to a range of physical memory addresses; and causing invalidation of one or more of the instruction cache entries of the plurality of instruction cache entries in response to the translation lookaside buffer invalidation instruction. (See sketch 10 following this table.) | 05-19-2016 |
20160140043 | INSTRUCTION ORDERING FOR IN-PROGRESS OPERATIONS - Execution of memory instructions is managed using memory management circuitry including a first cache that stores a plurality of the mappings in the page table, and a second cache that stores entries based on virtual addresses. The memory management circuitry executes operations from one or more modules, including, in response to a first operation that invalidates at least a first virtual address, selectively ordering each of a plurality of in-progress operations that were in progress when the first operation was received by the memory management circuitry, wherein a position in the ordering of a particular in-progress operation depends on either or both of: (1) which of one or more modules initiated the particular in-progress operation, or (2) whether or not the particular in-progress operation provides results to the first cache or second cache. | 05-19-2016 |
20160170885 | CACHED VOLUMES AT STORAGE GATEWAYS | 06-16-2016 |
20160188479 | INSTRUCTION AND LOGIC TO TEST TRANSACTIONAL EXECUTION STATUS - Novel instructions, logic, methods and apparatus are disclosed to test transactional execution status. Embodiments include decoding a first instruction to start a transactional region. Responsive to the first instruction, a checkpoint for a set of architecture state registers is generated and memory accesses from a processing element in the transactional region associated with the first instruction are tracked. A second instruction to detect transactional execution of the transactional region is then decoded. An operation is executed, responsive to decoding the second instruction, to determine if an execution context of the second instruction is within the transactional region. Then responsive to the second instruction, a first flag is updated. In some embodiments, a register may optionally be updated and/or a second flag may optionally be updated responsive to the second instruction. | 06-30-2016 |
20160203148 | MANAGING DATA IN A CACHE MEMORY OF A QUESTION ANSWERING SYSTEM | 07-14-2016 |
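The sketches below give minimal Python readings of selected entries in the table above. They illustrate the published abstracts only, not the patented implementations; every identifier, bit width, policy, and data structure in them is an assumption made for illustration.

Sketch 1, for 20090089506: when a parameter in a hypothetical parity register disables parity generation and checking, the cache line's parity field is reused to carry an action code.

```python
from dataclasses import dataclass

@dataclass
class CacheLine:
    data: int          # first field: an instruction or data word
    second_field: int  # parity bits, or an action code when parity is off

def parity_of(value: int) -> int:
    """Even parity over the data bits."""
    return bin(value).count("1") & 1

def fill_line(data: int, action_code: int, parity_disabled: bool) -> CacheLine:
    # When the parity register says generation/checking is disabled,
    # overload the parity field with an action code instead.
    return CacheLine(data, action_code if parity_disabled else parity_of(data))

def read_line(line: CacheLine, parity_disabled: bool) -> int:
    if parity_disabled:
        print(f"performing overloaded action {line.second_field}")
    elif parity_of(line.data) != line.second_field:
        raise ValueError("parity error")
    return line.data

line = fill_line(0b1011, action_code=2, parity_disabled=True)
read_line(line, parity_disabled=True)  # prints the overloaded action
```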
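Sketch 2, for 20100293333: the first portion of an identifier picks a slot in a first directory; on a collision, the colliding identifiers are redistributed into a second directory indexed by a second portion of each identifier. Bit widths are assumed.

```python
FIRST_BITS = 4   # assumed width of the identifier's first portion
SECOND_BITS = 4  # assumed width of the second portion

def assign(identifier: int, first_dir: dict) -> None:
    slot = identifier & ((1 << FIRST_BITS) - 1)   # first portion
    entry = first_dir.get(slot)
    if entry is None:
        first_dir[slot] = identifier               # no collision
    elif isinstance(entry, dict):
        # The slot already points at a second directory: file it there.
        sub = (identifier >> FIRST_BITS) & ((1 << SECOND_BITS) - 1)
        entry[sub] = identifier
    else:
        # Collision: move both identifiers into a new second directory
        # indexed by the second portion of each identifier.
        second: dict = {}
        for ident in (entry, identifier):
            sub = (ident >> FIRST_BITS) & ((1 << SECOND_BITS) - 1)
            second[sub] = ident
        first_dir[slot] = second

directory: dict = {}
for ident in (0x12, 0x22, 0x32):  # all three share first portion 0x2
    assign(ident, directory)
print(directory)                  # slot 2 now points to a second directory
```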
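Sketch 3, for 20100318742: a partition indicator selects which replacement policy governs victim choice on a cache miss. Here partition 0 is assumed to use LRU and partition 1 random replacement; the two-policy split is an assumption.

```python
import random
from collections import OrderedDict

class PartitionedCache:
    """Two partitions, each governed by its own replacement policy."""

    def __init__(self, ways_per_partition: int = 4):
        self.partitions = [OrderedDict(), OrderedDict()]
        self.policies = ["lru", "random"]  # selected via partition indicator
        self.ways = ways_per_partition

    def access(self, partition: int, tag: int) -> bool:
        part = self.partitions[partition]
        if tag in part:
            part.move_to_end(tag)       # refresh recency on a hit
            return True
        # Miss: replace according to the identified partition's policy.
        if len(part) >= self.ways:
            if self.policies[partition] == "lru":
                part.popitem(last=False)             # evict least recent
            else:
                part.pop(random.choice(list(part)))  # evict at random
        part[tag] = True
        return False

cache = PartitionedCache()
for tag in (1, 2, 3, 4, 5):
    cache.access(0, tag)           # partition 0 evicts tag 1 via LRU
print(list(cache.partitions[0]))   # [2, 3, 4, 5]
```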
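Sketch 4, for 20110138125: counts live in the zeroth cache until a counter would overflow, at which point the entry is promoted to the next-highest cache. The saturating-counter width and the exact bookkeeping on promotion are assumptions.

```python
COUNTER_MAX = 255  # assumed saturating-counter width per cache level

class EventTracker:
    def __init__(self, levels: int = 3):
        self.caches = [dict() for _ in range(levels)]  # zeroth..highest

    def record(self, key) -> None:
        for level, cache in enumerate(self.caches):
            if key not in cache:
                continue
            if cache[key] < COUNTER_MAX:
                cache[key] += 1        # no overflow: just increment
            elif level + 1 < len(self.caches):
                # Overflow: promote the entry to the next-highest cache,
                # which then counts overflows of the level below it.
                upper = self.caches[level + 1]
                upper[key] = upper.get(key, 0) + 1
                cache[key] = 0         # restart the lower-level counter
            return
        # Not found in any cache: enter it in the zeroth cache.
        self.caches[0][key] = 1

tracker = EventTracker()
for _ in range(300):
    tracker.record("page_fault")
# Upper cache holds the overflow count; the zeroth holds the remainder.
print(tracker.caches[0]["page_fault"], tracker.caches[1]["page_fault"])
```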
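Sketch 5, for 20130024619: a first-level conversion table either translates a guest instruction to completion or yields a partial result that a second-level table finishes. All opcodes are invented.

```python
# Per guest opcode: (native pattern, did the first level complete?).
FIRST_LEVEL = {
    "g_add":  ("n_add", True),     # fully translated at the first level
    "g_load": ("n_load_", False),  # partial: second level must finish it
}
SECOND_LEVEL = {
    "n_load_": "n_load_word",      # completes the partial translation
}

def translate(guest: str) -> str:
    native, complete = FIRST_LEVEL[guest]
    if complete:
        return native            # first-level translation ran to completion
    return SECOND_LEVEL[native]  # continue from the partial first-level result

assert translate("g_add") == "n_add"
assert translate("g_load") == "n_load_word"
```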
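Sketch 6, for 20140047185: a write lands in two portions of the cache, and the redundant copy is deleted once the first portion's copy is flushed to the storage behind the cache.

```python
class RedundantCache:
    def __init__(self):
        self.primary = {}    # first portion of the cache
        self.secondary = {}  # second portion holding the redundant copy
        self.backing = {}    # stands in for the storage behind the cache

    def write(self, key, value):
        self.primary[key] = value
        self.secondary[key] = value  # copy survives loss of the primary

    def flush(self, key):
        # The redundant copy is deleted once the primary copy is flushed.
        self.backing[key] = self.primary[key]
        self.secondary.pop(key, None)

cache = RedundantCache()
cache.write("blk7", b"\x00" * 512)
cache.flush("blk7")
assert "blk7" not in cache.secondary and "blk7" in cache.backing
```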
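Sketch 7, for 20150100733: tags of consecutively aligned cache lines alternate between two physically separate tag arrays, so neighboring lines never share an array. Array sizes are assumed.

```python
NUM_LINES = 64  # assumed number of cache lines covered by the tag store

# Two physically separate tag arrays; consecutive line indices alternate
# between them, so consecutively aligned lines never share an array.
tag_arrays = [[None] * (NUM_LINES // 2), [None] * (NUM_LINES // 2)]

def store_tag(line_index: int, tag: int) -> None:
    array = line_index & 1  # even lines -> array 0, odd lines -> array 1
    tag_arrays[array][line_index >> 1] = tag

def load_tag(line_index: int):
    return tag_arrays[line_index & 1][line_index >> 1]

store_tag(6, 0xABC)
store_tag(7, 0xDEF)  # neighbor of line 6, lands in the other array
assert load_tag(6) == 0xABC and load_tag(7) == 0xDEF
```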
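Sketch 8, for 20150127911: allocation tracks the minimum and maximum set index per page color, so evicting a color scans only that band of the cache rather than the whole cache.

```python
bounds: dict = {}  # page color -> (min_index, max_index) seen at allocation

def on_allocate(color: int, set_index: int) -> None:
    lo, hi = bounds.get(color, (set_index, set_index))
    bounds[color] = (min(lo, set_index), max(hi, set_index))

def evict_color(cache_sets: list, color: int) -> None:
    if color not in bounds:
        return
    lo, hi = bounds.pop(color)
    for index in range(lo, hi + 1):  # bounded scan, not the whole cache
        cache_sets[index] = [line for line in cache_sets[index]
                             if line["color"] != color]

cache_sets = [[] for _ in range(128)]
for idx in (10, 42, 17):
    cache_sets[idx].append({"color": 3, "tag": idx})
    on_allocate(3, idx)
evict_color(cache_sets, 3)  # scans only sets 10..42
assert not any(line["color"] == 3 for s in cache_sets for line in s)
```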
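Sketch 9, for 20160018995: one common way to realize the described exclusive-OR handling is the RAID parity update parity' = parity XOR old_data XOR new_data; the buffer layout here is an assumption.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def write_with_parity(data_buf: bytearray, parity_buf: bytearray,
                      write_data: bytes) -> None:
    # parity' = parity XOR old_data XOR new_data, then commit the write.
    delta = xor_bytes(bytes(data_buf), write_data)
    parity_buf[:] = xor_bytes(bytes(parity_buf), delta)
    data_buf[:] = write_data  # copy the write data to the data portion

data = bytearray(b"\x0f\x0f")
parity = bytearray(b"\x0f\x0f")  # parity of this strip alone
write_with_parity(data, parity, b"\xff\x00")
assert parity == bytearray(b"\xff\x00")  # parity now tracks the new data
```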
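Sketch 10, for 20160140042: a translation lookaside buffer invalidation for a virtual address range also invalidates instruction-cache entries whose virtual addresses fall in that range. The flat dictionaries standing in for the TLB and instruction cache are illustrative.

```python
tlb = {}     # (va_start, va_end) -> pa_start, one entry per mapped range
icache = {}  # virtual address -> cached processor instructions

def tlb_invalidate(va_start: int, va_end: int) -> None:
    tlb.pop((va_start, va_end), None)
    # Also invalidate instruction-cache entries whose virtual address
    # falls within the dropped mapping's range.
    for va in [v for v in icache if va_start <= v < va_end]:
        del icache[va]

tlb[(0x1000, 0x2000)] = 0x8000
icache[0x1234] = "decoded insn"
icache[0x3000] = "unrelated insn"
tlb_invalidate(0x1000, 0x2000)
assert 0x1234 not in icache and 0x3000 in icache
```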