Class / Patent application number | Description | Number of patent applications / Date published |
711127000 | Interleaved | 12 |
20080301370 | Memory Module - A memory module includes a module circuit board, an amplifier circuit disposed on the module circuit board for amplifying an input signal, and a memory component to store a data item, wherein the memory component is disposed on the module circuit board. The amplifier circuit includes an input to receive a data signal and an output to provide an amplified data signal. The memory component comprises an input to receive the amplified data signal, wherein the data item is stored in the memory component in dependence on a level of the received amplified data signal. | 12-04-2008 |
20090172286 | Method And System For Balancing Host Write Operations And Cache Flushing - A method and system for balancing host write operations and cache flushing is disclosed. The method may include the steps of determining an available capacity in a cache storage portion of a self-caching storage device, determining a ratio of cache flushing steps to host write commands if the available capacity is below a desired threshold, and interleaving cache flushing steps with host write commands to achieve that ratio. The cache flushing steps may be executed by maintaining a storage device busy status after executing a host write command and using this additional time to copy a portion of the data from the cache storage into the main storage. The system may include a cache storage, a main storage, and a controller configured to determine and execute a ratio of cache flushing steps to host write commands by executing cache flushing steps while maintaining a busy status after a host write command. | 07-02-2009
20100037024 | MEMORY INTERLEAVE FOR HETEROGENEOUS COMPUTING - A memory interleave system for a heterogeneous computing system is provided. The system effectively interleaves memory that is accessed by heterogeneous compute elements in different ways, such as via cache-block accesses by certain compute elements and via non-cache-block accesses by certain other compute elements. The heterogeneous computing system may comprise one or more cache-block oriented compute elements and one or more non-cache-block oriented compute elements that share access to a common main memory. The cache-block oriented compute elements access the memory via cache-block accesses (e.g., 64 bytes per access), while the non-cache-block oriented compute elements access memory via sub-cache-block accesses (e.g., 8 bytes per access). The memory interleave system optimizes the interleaving across the system's memory banks to minimize hot spots resulting from the cache-block oriented and non-cache-block oriented accesses of the heterogeneous computing system. | 02-11-2010
20110320725 | DYNAMIC MODE TRANSITIONS FOR CACHE INSTRUCTIONS - A method of providing requests to a cache pipeline includes receiving a plurality of requests from one or more state machines at an arbiter, selecting one of the plurality of requests as a selected request, the selected request having been provided by a first state machine, determining that the selected request includes a mode that requires a first step and a second step, the first step including an access to a location in a cache, determining that the location in the cache is unavailable, and replacing the mode with a modified mode that only includes the second step. | 12-29-2011 |
20120324168 | PROTECTION AGAINST ACCESS VIOLATION DURING THE EXECUTION OF AN OPERATING SEQUENCE IN A PORTABLE DATA CARRIER - A method for protecting an operation sequence executed by a portable data carrier from being spied out, wherein the data carrier has at least a processor core, a main memory and a cache memory with a plurality of cache lines. The processor core is able to access, upon executing the operation sequence, at least two data values, with the data values occupying at least one cache line in the cache memory and being respectively divided into several portions so that the occurrence of a cache miss or a cache hit is independent of which data value is accessed. A computer program product and a device have corresponding features. The invention serves to thwart attacks based on an evaluation of the cache accesses during the execution of the operation sequence. | 12-20-2012
20130031309 | SEGMENTED CACHE MEMORY - A cache memory associated with a main memory and a processor capable of executing a dataflow processing task, includes a plurality of disjoint storage segments, each associated with a distinct data category. A first segment is dedicated to input data originating from a dataflow consumed by the processing task. A second segment is dedicated to output data originating from a dataflow produced by the processing task. A third segment is dedicated to global constants, corresponding to data available in a single memory location to multiple instances of the processing task. | 01-31-2013 |
20130145096 | GENERATING AN ORDERED SEQUENCE IN A DATABASE SYSTEM USING MULTIPLE INTERLEAVED CACHES - A method, system, and computer program product are disclosed for generating an ordered sequence from a predetermined sequence of symbols using protected interleaved caches, such as semaphore-protected interleaved caches. The approach commences by dividing the predetermined sequence of symbols into two or more interleaved caches, then mapping each of the two or more interleaved caches to a particular semaphore of a group of semaphores. The group of semaphores is organized into bytes or machine words for storing the group of semaphores into a shared memory, the shared memory accessible by a plurality of session processes. Protected (serialized) access by the session processes is provided by granting access to one of the two or more interleaved caches only after one of the plurality of session processes performs a semaphore-altering read-modify-write operation (e.g., a CAS) on the particular semaphore. The interleaved caches are assigned values successively from the predetermined sequence using a round-robin assignment technique. | 06-06-2013
20130145097 | Selective Access of a Store Buffer Based on Cache State - An apparatus includes a cache memory that includes a state array configured to store state information. The state information includes a state that indicates that updated data corresponding to a particular address of the cache memory is not stored in the cache memory but is available from at least one of multiple sources external to the cache memory, where at least one of the multiple sources is a store buffer. | 06-06-2013
20130232304 | ACCELERATED INTERLEAVED MEMORY DATA TRANSFERS IN MICROPROCESSOR-BASED SYSTEMS, AND RELATED DEVICES, METHODS, AND COMPUTER-READABLE MEDIA - Accelerated interleaved memory data transfers in microprocessor-based systems, and related devices, methods, and computer-readable media, are disclosed. Embodiments disclosed include accelerated large and small memory data transfers. As a non-limiting example, a large data transfer is a data transfer size greater than the interleaved address block size provided in the interleaved memory. As another non-limiting example, a small data transfer is a data transfer size less than the interleaved address block size provided in the interleaved memory. | 09-05-2013
20130339612 | APPARATUS AND METHOD FOR TESTING A CACHE MEMORY - An apparatus generates test data for a cache memory that caches data in a cache line in accordance with a memory address. The apparatus generates a memory address to be accessed, data to be arranged in a storage area designated by the memory address, an access instruction for the memory address, and an expected value of the data that is to be cached in the cache memory when memory access is performed in accordance with the access instruction. The apparatus generates an address list including the memory address, the access instruction, and the expected value, so that the address list is stored in a cache block that is cacheable in the cache memory by one memory access. The apparatus generates test data in which the address list and the data are arranged, so that the address list is cached in a different cache line from the data. | 12-19-2013
20140129775 | CACHE PREFETCH FOR NFA INSTRUCTIONS - Disclosed is a method of pre-fetching NFA instructions into an NFA cell array. The method and system fetch instructions into an L1 cache during NFA instruction execution. Successive instructions from the current active state are fetched and loaded into the L1 cache. Also disclosed is a system comprising an external memory, a cache line fetcher, and an L1 cache, where the L1 cache is accessible and searchable by an NFA cell array, and where successive instructions from the current active state in the NFA are fetched from external memory in an atomic cache-line manner into a plurality of banks in the L1 cache. | 05-08-2014
20160154741 | Data Processing Circuit and Method for De-Interleaving Process in DVB-T2 System | 06-02-2016 |
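Several of the filings above (e.g., 20100037024 and 20130232304) rely on block interleaving: consecutive fixed-size address blocks are spread round-robin across memory banks so that streaming accesses touch all banks evenly rather than hammering one. The sketch below illustrates that address mapping in minimal form; the 64-byte block size and four-bank count are illustrative assumptions, not parameters taken from any of the applications.

```python
# Minimal sketch of block-interleaved address mapping (illustrative only).
BLOCK_SIZE = 64   # interleave block size in bytes (one cache block)
NUM_BANKS = 4     # number of memory banks

def map_address(addr):
    """Map a flat byte address to (bank, offset-within-bank)."""
    block = addr // BLOCK_SIZE                 # which interleave block
    bank = block % NUM_BANKS                   # round-robin across banks
    offset = (block // NUM_BANKS) * BLOCK_SIZE + (addr % BLOCK_SIZE)
    return bank, offset

# Consecutive 64-byte blocks land in successive banks: addresses 0, 64,
# 128, and 192 map to banks 0-3, and address 256 wraps back to bank 0.
for a in (0, 64, 128, 192, 256):
    print(a, map_address(a))
```

With this mapping, a cache-block-oriented stream of 64-byte accesses rotates through the banks, while sub-cache-block accesses (the 8-byte case mentioned in 20100037024) land in the same block and hence the same bank until they cross a block boundary, which is exactly the hot-spot tension that filing addresses.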
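Application 20130145096 above describes assigning symbols from a predetermined sequence to multiple interleaved caches round-robin. A minimal sketch of that assignment step, leaving out the semaphore protection and using an illustrative symbol sequence and cache count of my own choosing:

```python
# Illustrative sketch of round-robin assignment of a symbol sequence to
# N interleaved caches, per the scheme described in 20130145096 (the
# semaphore-based serialization is omitted for brevity).
from collections import deque

def interleave(symbols, num_caches):
    """Distribute symbols across num_caches caches in round-robin order."""
    caches = [deque() for _ in range(num_caches)]
    for i, symbol in enumerate(symbols):
        caches[i % num_caches].append(symbol)
    return caches

caches = interleave(range(10), 3)
# cache 0 holds 0, 3, 6, 9; cache 1 holds 1, 4, 7; cache 2 holds 2, 5, 8.
```

Reading the caches back in rotation reproduces the original ordered sequence, which is why the filing can serialize access per cache (one semaphore each) instead of per symbol.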