Entries |
Document | Title | Date |
20080222357 | Low power computer with main and auxiliary processors - A processing device comprises a processor, low power nonvolatile memory that communicates with the processor, and high power nonvolatile memory that communicates with the processor. The processing device manages data using a cache hierarchy comprising a high power (HP) nonvolatile memory level for data in the high power nonvolatile memory and a low power (LP) nonvolatile memory level for data in the low power nonvolatile memory. The LP nonvolatile memory level has a higher level in the cache hierarchy than the HP nonvolatile memory level. | 09-11-2008 |
20080229017 | Systems and Methods of Providing Security and Reliability to Proxy Caches - The present solution provides a variety of techniques for accelerating and optimizing network traffic, such as HTTP based network traffic. The solution described herein provides techniques in the areas of proxy caching, protocol acceleration, domain name resolution acceleration as well as compression improvements. In some cases, the present solution provides various prefetching and/or prefreshening techniques to improve intermediary or proxy caching, such as HTTP proxy caching. In other cases, the present solution provides techniques for accelerating a protocol by improving the efficiency of obtaining and servicing data from an originating server to clients. In yet other cases, the present solution accelerates domain name resolution. As every HTTP access starts with a URL that includes a hostname that must be resolved via domain name resolution into an IP address, the present solution helps accelerate HTTP access. In some cases, the present solution improves compression techniques by prefetching non-cacheable and cacheable content to use for compressing network traffic, such as HTTP. The acceleration and optimization techniques described herein may be deployed on the client as a client agent or as part of a browser, as well as on any type and form of intermediary device, such as an appliance, proxying device or any type of interception caching and/or proxying device. | 09-18-2008 |
20080229018 | Save data discrimination method, save data discrimination apparatus, and a computer-readable medium storing a save data discrimination program - A save data discrimination method saves calculation results including an element which is periodically saved when a computer executes a program repeating the same arithmetic process. The method includes analyzing a loop structure of the program from a source code of the program to detect a main loop of the arithmetic process repeated in the program and a sub-loop included in the main loop, determining a point of entrance to the main loop as a checkpoint that is a point for saving data of the calculation results, and analyzing the contents of the arithmetic process described in the main loop to identify reference-first elements, which are elements only referred to, and elements defined after being referred to, as data to be saved at the checkpoint determined at the point of entrance. | 09-18-2008 |
20080229019 | METHOD AND SYSTEM FOR EFFICIENT FRAGMENT CACHING - Methods for serving data include maintaining an incomplete version of an object at a server and at least one fragment at the server. In response to a request for the object from a client, the incomplete version of the object, an identifier for a fragment comprising a portion of the object, and a position for the fragment within the object are sent to the client. After receiving the incomplete version of the object, the identifier, and the position, the client requests the fragment from the server using the identifier. The object is constructed by including the fragment in the incomplete version of the object in a location specified by the position. | 09-18-2008 |
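The client-side assembly step in 20080229019 can be illustrated with a short sketch. This is a hypothetical Python rendering under assumed data shapes (byte strings, insertion at a byte offset, and a `fragments` dict standing in for the server); the abstract does not define a concrete wire format:

```python
# Hypothetical sketch of the fragment-assembly step in 20080229019:
# the server sends an incomplete object plus (fragment identifier, position);
# the client fetches the fragment by identifier and splices it into place.

def construct_object(incomplete: bytes, fragment: bytes, position: int) -> bytes:
    """Include `fragment` in `incomplete` at the byte offset `position`."""
    return incomplete[:position] + fragment + incomplete[position:]

# Simulated round trip: a local dict stands in for the server's fragment store.
fragments = {"frag-1": b"<b>fresh data</b>"}
incomplete, frag_id, position = b"<html></html>", "frag-1", 6
page = construct_object(incomplete, fragments[frag_id], position)
print(page)  # b'<html><b>fresh data</b></html>'
```

The split keeps the cacheable shell (`incomplete`) stable while only the fragment identified by `frag_id` changes between requests.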
20080235450 | Updating Entries Cached by a Network Processor - Machine-readable media, methods, and apparatus are described to update network processor cache entries in corresponding local memories and to update cached entries based upon information stored in corresponding buffers for the microengines. A control plane of the network processor identifies each microengine having an updated entry stored in its corresponding local memory, and stores information in the corresponding buffer for each identified microengine to indicate that the entry has been updated in the external memory. | 09-25-2008 |
20080235451 | NON-VOLATILE MEMORY DEVICE AND ASSOCIATED PROGRAMMING METHOD - A non-volatile memory device having a memory array is configured to prevent power voltage noise generation during programming, thereby improving reliability. An associated programming method of the non-volatile memory device includes storing data input from an external source to a cache register. The stored data is moved to a main register. The cache register is cleared and the data stored in the main register is programmed to the memory cell array. | 09-25-2008 |
20080256295 | Method of Increasing Boot-Up Speed - There is provided a method of increasing boot-up speed in a computer system. | 10-16-2008 |
20080256296 | INFORMATION PROCESSING APPARATUS AND METHOD FOR CACHING DATA - A processor is provided with a register and operates to: determine whether a first tag address matches a second tag address, the first tag address being derived from a target main memory address that is to be accessed for obtaining target data subjected to a computation, the second tag address being one of the tag addresses stored in the local memory; start copying data stored in at least one of the cache lines assigned with a line number that matches a target line number that is derived from the target main memory address into the register before completing the determination of a match between the first tag address and the second tag address; and access the register to obtain the data copied from the local memory when it is determined that the first tag address matches the second tag address. | 10-16-2008 |
20080263278 | CACHE RECONFIGURATION BASED ON RUN-TIME PERFORMANCE DATA OR SOFTWARE HINT - A method for reconfiguring a cache memory is provided. The method in one aspect may include analyzing one or more characteristics of an execution entity accessing a cache memory and reconfiguring the cache based on the one or more characteristics analyzed. Examples of analyzed characteristics may include but are not limited to data structure used by the execution entity, expected reference pattern of the execution entity, type of an execution entity, heat and power consumption of an execution entity, etc. Examples of cache attributes that may be reconfigured may include but are not limited to associativity of the cache memory, amount of the cache memory available to store data, coherence granularity of the cache memory, line size of the cache memory, etc. | 10-23-2008 |
20080270700 | DYNAMIC, ON-DEMAND STORAGE AREA NETWORK (SAN) CACHE - Disclosed are apparatus and methods for facilitating caching in a storage area network (SAN). In general, data transfer traffic between one or more hosts and one or more memory portions in one or more storage device(s) is redirected to one or more cache modules. One or more network devices (e.g., switches) of the SAN can be configured to redirect data transfer for a particular memory portion of one or more storage device(s) to a particular cache module. As needed, data transfer traffic for any number of memory portions and storage devices can be identified for or removed from being redirected to a particular cache module. Also, any number of cache modules can be utilized for receiving redirected traffic so that such redirected traffic is divided among such cache modules in any suitable proportion for enhanced flexibility. | 10-30-2008 |
20080294846 | DYNAMIC OPTIMIZATION OF CACHE MEMORY - The present invention includes dynamically analyzing look-up requests from a cache look-up algorithm to look-up data block tags corresponding to blocks of data previously inserted into a cache memory, to determine a cache related parameter. After analysis of a specific look-up request, a block of data corresponding to the tag looked up by the look-up request may be accessed from the cache memory or from a mass storage device. | 11-27-2008 |
20080301367 | WAVEFORM CACHING FOR DATA DEMODULATION AND INTERFERENCE CANCELLATION AT A NODE B - The present patent application discloses a method and apparatus for using external and internal memory for cancelling traffic interference comprising storing data in an external memory; and processing the data samples on an internal memory, wherein the external memory is low bandwidth memory; and the internal memory is high bandwidth on board cache. The present method and apparatus also comprises caching portions of the data on the internal memory, filling the internal memory by reading the newest data from the external memory and updating the internal memory; and writing the older data back to the external memory from the internal memory, wherein the data is incoming data samples. | 12-04-2008 |
20080307162 | PRELOAD CONTROLLER, PRELOAD CONTROL METHOD FOR CONTROLLING PRELOAD OF DATA BY PROCESSOR TO TEMPORARY MEMORY, AND PROGRAM - A preload controller for controlling a bus access device that reads out data from a main memory via a bus and transfers the readout data to a temporary memory, including a first acquiring device to acquire access hint information which represents a data access interval to the main memory, a second acquiring device to acquire system information which represents a transfer delay time in transfer of data via the bus by the bus access device, a determining device to determine a preload unit count based on the data access interval represented by the access hint information and the transfer delay time represented by the system information, and a management device to instruct the bus access device to read out data for the preload unit count from the main memory and to transfer the readout data to the temporary memory ahead of a data access of the data. | 12-11-2008 |
20080313402 | VIRTUAL PERSONAL VIDEO RECORDER - The claimed subject matter provides a system and/or method that manages media content. The disclosed system includes a component that synchronizes with a multimedia player that is in communication with the component. The component upon synchronization automatically determines an amount of storage space available on the handheld device and based at least in part on this available space, the component substitutes a first media presentation persisted on the storage space with a second media presentation retrieved from a media storage farm. | 12-18-2008 |
20080320222 | Adaptive caching in broadcast networks - Adaptive caching techniques are described. In an implementation, a head end defines a plurality of cache periods having associated criteria. Request data for content is obtained and utilized to associate the content with the defined cache periods based on a comparison of the request data with the associated criteria. Then, the content is cached at the head end for the associated cache period. | 12-25-2008 |
20080320223 | Cache controller and cache control method - A cache controller that writes data to a cache memory, includes a first buffer unit that retains data flowing in from outside to be written to the cache memory, a second buffer unit that retains a data piece to be currently written to the cache memory, among pieces of the data retained in the first buffer unit, and a write controlling unit that controls writing of the data piece retained in the second buffer unit to the cache memory. | 12-25-2008 |
20090024794 | Enhanced Access To Data Available In A Cache - Enhanced access to data available in a cache. In one embodiment, a cache maintaining copies of source data is formed as a volatile memory. On receiving a request directed to the cache for a copy of a data element, the requested copy maintained in the cache is sent as a response to the request. In another embodiment used in the context of applications accessing databases in a navigational model, a cache maintains rows of data accessed by different user applications on corresponding connections. Applications may send requests directed to the cache to retrieve copies of the rows, populated potentially by other applications, while the cache restricts access to rows populated by other applications when processing requests directed to the source database system. In another embodiment, an application may direct requests to retrieve data elements caused to be populated by activity on different connections established by the same application. | 01-22-2009 |
20090024795 | METHOD AND APPARATUS FOR CACHING DATA - A relay unit inputs data and an index. A cache management unit determines whether or not a space area to cache data exists. In the case where there is a space area, the cache management unit caches data. An identifier generating unit generates an identifier corresponding to contents of the cached data. The identifier is registered in a cache data table in association with the data. The identifier is registered in a cache index table in association with the index. In the case where there is no space area, the cache management unit secures a space area. The cache management unit unregisters an identifier associated with the data which was cached in the secured area. | 01-22-2009 |
20090043965 | EARLY DATA RETURN INDICATION MECHANISM - One embodiment of a method is disclosed. The method generates requests waiting for data to be loaded into a data cache including a first level cache (FLC). The method further receives the requests from instruction sources, schedules the requests, and then passes the requests on to an execution unit having the data cache. Further, the method checks contents of the data cache, replays the requests if the data is not located in the data cache, and stores the requests that are replay safe. The method further detects the readiness of the data a number of bus clocks prior to the data being ready to be transmitted to a processor, and transmits an early data ready indication to the processor to drain the requests from a resource scheduler. | 02-12-2009 |
20090049243 | Caching Dynamic Content - Aspects of the subject matter described herein relate to caching dynamic content. In aspects, caching components on a requesting entity and on a content server cache requested content. When a request for content similar to cached content is received, the requesting entity sends a request for the content and an identifier of similar cached content to the content server. The content server obtains the requested content and determines the differences between the requested content and the cached content. The content server then sends the differences to the requesting entity. The requesting entity uses the differences and its cached content to construct the requested content and provides the requested content. | 02-19-2009 |
20090049244 | Data Displacement Bypass System - A data displacement bypass system is disclosed, wherein the data displacement bypass system comprises a CPU (Central Processing Unit), a first memory, a plurality of address lines, a plurality of data lines, an OE (Output Enable) line, a CS (Chip Select) line and a data displacement unit. The CPU could output a plurality of address characters, an OE signal and a CS signal, and receive a plurality of data characters. The first memory and the data displacement unit could output the plurality of data characters according to the plurality of address characters, the OE signal and the CS signal received by the first memory and the data displacement unit, wherein the data displacement unit could govern the plurality of data characters input to the CPU by outputting high or low voltage. | 02-19-2009 |
20090049245 | Memory device and method with on-board cache system for facilitating interface with multiple processors, and computer system using same - A memory device includes an on-board cache system that facilitates the ability of the memory device to interface with a plurality of processors operating in a parallel processing manner. The cache system operates in a manner that can be transparent to a memory controller to which the memory device is connected. Alternatively, the memory controller can control the operation of the cache system. | 02-19-2009 |
20090049246 | APPARATUS AND METHOD OF CACHING FRAME - An apparatus and method of caching a frame is provided. The method of caching a frame includes receiving information on a frame to be cached from a main storage unit, setting an initial value of a specified mode using the received information, and caching the frame from the main storage unit using the specified mode. | 02-19-2009 |
20090049247 | MEDIA CACHE CONTROL INTERFACE - The apparent speed with which a media work is ripped to copy the work into a visible store is substantially reduced. When the media work is played, its content is cached onto a persistent, fast access storage media. If the user subsequently decides to rip the media work, the content of the cache is copied to a visible store in substantially less time than would be required to play the media work and convert it. The user thus perceives that the media work is ripped in a substantially shorter time, compared to that required for ripping the media work in a conventional manner. The ripping process may encode or transform the format of the content to a desired format for use within the visible store. Constraints may be imposed by the user to limit the cache, or the caching process may be hidden from the user. | 02-19-2009 |
20090055587 | Adaptive Caching of Input / Output Data - To improve caching techniques, so as to realize greater hit rates within available memory, the present invention utilizes an entropy signature from the compressed data blocks to supply a bias to pre-fetching operations. The method of the present invention for caching data involves detecting a data I/O request, relative to a data object, and then selecting appropriate I/O to cache, wherein said selecting can occur with or without user input, or with or without application or operating system preknowledge. Such selecting may occur dynamically or manually. The method further involves estimating an entropy of a first data block to be cached in response to the data I/O request; selecting a compressor using a value of the entropy of the data block from the estimating step, wherein each compressor corresponds to one of a plurality of ranges of entropy values relative to an entropy watermark; and storing the data block in a cache in compressed form from the selected compressor, or in uncompressed form if the value of the entropy of the data block from the estimating step falls in a first range of entropy values relative to the entropy watermark. The method can also include the step of prefetching a data block using gap prediction with an applied entropy bias, wherein the data block is the same as the first data block to be cached or is a separate second data block. The method can also involve the following additional steps: adaptively adjusting the plurality of ranges of entropy values; scheduling a flush of the data block from the cache; and suppressing operating system flushes in conjunction with the foregoing scheduling step. | 02-26-2009 |
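The entropy-driven compressor selection in 20090055587 can be sketched as follows. The `byte_entropy` helper, the three entropy ranges, and the use of zlib levels as stand-in compressors are all illustrative assumptions; the patent does not publish its thresholds or its compressor set:

```python
import math
import zlib

# Illustrative sketch of entropy-range compressor selection (20090055587).
# Thresholds (7.5, 4.0 bits/byte) are made-up assumptions for demonstration.

def byte_entropy(block: bytes) -> float:
    """Shannon entropy of the block's byte distribution, in bits per byte."""
    counts = {}
    for b in block:
        counts[b] = counts.get(b, 0) + 1
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def store_block(block: bytes):
    """Return (tag, payload): compress only when the entropy makes it worthwhile."""
    h = byte_entropy(block)
    if h > 7.5:                        # near-random data: store uncompressed
        return ("raw", block)
    elif h > 4.0:                      # moderate entropy: fast, light compression
        return ("fast", zlib.compress(block, 1))
    else:                              # low entropy: spend CPU for a better ratio
        return ("best", zlib.compress(block, 9))

tag, payload = store_block(b"a" * 4096)
print(tag, len(payload))  # a 4 KiB run of one byte compresses to a few bytes
```

The same entropy value could then bias prefetching, as the abstract describes, by treating low-entropy blocks as cheaper to speculatively cache.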
20090055588 | Performing Useful Computations While Waiting for a Line in a System with a Software Implemented Cache - Mechanisms for performing useful computations during a software cache reload operation are provided. With the illustrative embodiments, in order to perform software caching, a compiler takes original source code, and while compiling the source code, inserts explicit cache lookup instructions into appropriate portions of the source code where cacheable variables are referenced. In addition, the compiler inserts a cache miss handler routine that is used to branch execution of the code to a cache miss handler if the cache lookup instructions result in a cache miss. The cache miss handler, prior to performing a wait operation for waiting for the data to be retrieved from the backing store, branches execution to an independent subroutine identified by a compiler. The independent subroutine is executed while the data is being retrieved from the backing store such that useful work is performed. | 02-26-2009 |
20090063771 | STRUCTURE FOR REDUCING COHERENCE ENFORCEMENT BY SELECTIVE DIRECTORY UPDATE ON REPLACEMENT OF UNMODIFIED CACHE BLOCKS IN A DIRECTORY-BASED COHERENT MULTIPROCESSOR - A design structure embodied in a machine readable storage medium for designing, manufacturing, and/or testing a design to reduce the number of memory directory updates during block replacement in a system having a directory-based cache is provided. The design structure may be implemented to utilize a read/write bit to determine the accessibility of a cache line and limit memory directory updates during block replacement to regions that are determined to be readable and writable by multiple processors. | 03-05-2009 |
20090083488 | Enabling Speculative State Information in a Cache Coherency Protocol - In one embodiment, the present invention includes a method for receiving a bus message in a first cache corresponding to a speculative access to a portion of a second cache by a second thread, and dynamically determining in the first cache if an inter-thread dependency exists between the second thread and a first thread associated with the first cache with respect to the portion. Other embodiments are described and claimed. | 03-26-2009 |
20090089505 | STEERING DATA UNITS TO A CONSUMER - A computer system may comprise a second device operating as a producer that may steer data units to a first device operating as a consumer. A processing core of the first device may wake up the second device after generating a first data unit. The second device may generate steering values after retrieving a first data unit directly from the cache of the first device. The second device may populate a flow table with a plurality of entries using the steering values. The second device may receive a packet over a network and store the packet directly into the cache of the first device using a first steering value. The second device may direct an interrupt signal to the processing core of the first device using a second steering value. | 04-02-2009 |
20090094416 | SYSTEM AND METHOD FOR CACHING POSTING LISTS - A method of caching posting lists to a search engine cache calculates the ratios between the frequencies of the query terms in a past query log and the sizes of the posting lists for each term, and uses these ratios to determine which posting lists should be cached by sorting the ratios in decreasing order and storing to the cache those posting lists corresponding to the highest ratio values. Further, a method of finding an optimal allocation between two parts of a search engine cache evaluates a past query stream based on a relationship between various properties of the stream and the total size of the cache, and uses this information to determine the respective sizes of both parts of the cache. | 04-09-2009 |
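The frequency-to-size ratio policy in 20090094416 amounts to a greedy selection over posting lists: rank terms by query frequency per unit of cache space and fill the cache in that order. A minimal sketch with assumed dict-based inputs (`select_posting_lists` is a hypothetical name, not from the patent):

```python
# Sketch of the ratio-based static caching policy in 20090094416.
# Inputs are assumed shapes: term -> query-log frequency, term -> list size.

def select_posting_lists(freqs, sizes, cache_capacity):
    """Choose which terms' posting lists to cache.

    Sorts terms by freq/size ratio in decreasing order and admits lists
    greedily until the cache capacity would be exceeded.
    """
    ranked = sorted(freqs, key=lambda t: freqs[t] / sizes[t], reverse=True)
    cached, used = set(), 0
    for term in ranked:
        if used + sizes[term] <= cache_capacity:
            cached.add(term)
            used += sizes[term]
    return cached

freqs = {"cache": 90, "memory": 60, "rare": 5}   # occurrences in a past query log
sizes = {"cache": 30, "memory": 40, "rare": 50}  # posting-list sizes
print(select_posting_lists(freqs, sizes, 75))    # the two high-ratio lists fit
```

The intuition is knapsack-like: a frequently queried term with a small posting list yields more cache hits per byte than a rarely queried term with a large one.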
20090100224 | CACHE MANAGEMENT - Systems, methods and computer readable media for cache management. Cache management can operate to organize pages into files and score the respective files stored in a cache memory. The organized pages can be stored to an optical storage media based upon the organization of the files and based upon the score associated with the files. | 04-16-2009 |
20090100225 | Data processing apparatus and shared memory accessing method - The present invention provides a data processing apparatus for processing data by causing a plurality of function blocks to share a single shared memory, the data processing apparatus including: a memory controller configured to cause the plurality of function blocks to write and read data to/and from the shared memory in response to requests from any one of the function blocks; a cache memory; and a companding section configured to compress the data to be written to the cache memory while expanding the data read therefrom. | 04-16-2009 |
20090100226 | Cache memory device and microprocessor - A cache controller is connected to a processor and a main memory. The cache controller is also connected to a cache memory that can read and write at a speed higher than the main memory. The cache memory is provided with a plurality of cache lines that include a tag area storing an address on the main memory, a capacity area storing a capacity value of a cache block, and a cache block. When a read request is executed from the processor to the main memory, the cache controller checks whether the requested data is present in the cache memory or not. A cache capacity determination unit determines a capacity value for the cache block and supplies to a capacity area. | 04-16-2009 |
20090113130 | SYSTEM AND METHOD FOR UPDATING DIRTY DATA OF DESIGNATED RAW DEVICE - A system and method for updating dirty data of designated raw device is applied in Linux system. A format of a command parameter for updating the dirty data of the designated raw device is determined, to obtain the command parameter with the correct format and transmit it into the Kernel of the Linux system. Then, a data structure of the designated raw device is sought based on the command parameter, to obtain a fast search tree of the designated raw device. Finally, all dirty data pages of the designated raw device are found by the fast search tree, and then are updated into a magnetic disk in a synchronous or asynchronous manner. Therefore, the dirty data of an individual raw device can be updated and written into the magnetic disk without interrupting the normal operation of the system, hereby ensuring secure, convenient, and highly efficient update of the dirty data. | 04-30-2009 |
20090119454 | Method and Apparatus for Video Motion Process Optimization Using a Hierarchical Cache - There are provided method and apparatus for video motion process optimization using a hierarchical cache. A storage method for a video motion process includes configuring a hierarchical cache to have one or more levels, each of the levels of the hierarchical cache corresponding to a respective one of a plurality of levels of a calculation hierarchy associated with calculating sample values for the video motion process. The method also includes storing a particular value for a sample relating to the video motion process in a corresponding level of the hierarchical cache based on which of the plurality of levels of the calculation hierarchy the particular value corresponds to, when the particular value is non-existent in the hierarchical cache. | 05-07-2009 |
20090119455 | METHOD FOR CACHING CONTENT DATA PACKAGES IN CACHING NODES - A method for caching content data packages in caching nodes. | 05-07-2009 |
20090132764 | POWER CONSERVATION VIA DRAM ACCESS - Power conservation via DRAM access reduction is provided by a buffer/mini-cache selectively operable in a normal mode and a buffer mode. In the buffer mode, entered when CPUs begin operating in low-power states, non-cacheable accesses (such as generated by a DMA device) matching specified physical address ranges, or having specific characteristics of the accesses themselves, are processed by the buffer/mini-cache, instead of by a memory controller and DRAM. The buffer/mini-cache processing includes allocating lines when references miss, and returning cached data from the buffer/mini-cache when references hit. Lines are replaced in the buffer/mini-cache according to one of a plurality of replacement policies, including ceasing replacement when there are no available free lines. In the normal mode, entered when CPUs begin operating in high-power states, the buffer/mini-cache operates akin to a conventional cache and non-cacheable accesses are not processed therein. | 05-21-2009 |
20090138657 | DATA BACKUP SYSTEM FOR LOGICAL VOLUME MANAGER AND METHOD THEREOF - A data backup system for a logical volume manager (LVM) and a method thereof, capable of realizing data backup in the LVM having a battery backed cache memory (BBCM). The data backup system includes a physical storage device, a BBCM, an LVM, and a data backup function. The physical storage device is used to store data of the LVM. The BBCM is used to provide a plurality of index regions and a plurality of data regions. The LVM is used to manage data save position of the physical storage device. The data backup function is used to look up whether the BBCM saves the data to be backed up by the logical volume. If the BBCM has the data, the BBCM reads out the data to be backed up, and writes the data into a snapshot volume (SV). | 05-28-2009 |
20090138658 | Cache memory system for a data processing apparatus - A data processing apparatus is provided having a cache memory comprising a data storage array and a tag array and a cache controller coupled to the cache memory responsive to a cache access request from processing circuitry to perform cache look ups. The cache memory is arranged such that it has a first memory cell group configured to operate in a first voltage domain and a second memory cell group configured to operate in a second voltage domain that is different from the first voltage domain. A corresponding data processing method is also provided. | 05-28-2009 |
20090144499 | Preemptive write-inhibition for thin provisioning storage subsystem - Write requests from host computers are processed in relation to a thin provisioning storage subsystem. A write request is received from a host computer. The write request identifies a first virtual disk that has been previously assigned to the host computer. It is determined whether the first virtual disk has to be allocated additional physical storage locations of the thin provisioning storage subsystem for storing data associated with the write request. In response to determining that the virtual disk has to be allocated additional physical storage locations, the following is performed. First, a quantity of free space remaining unallocated within physical storage locations of the thin provisioning storage subsystem is determined. Second, where the quantity of free space remaining unallocated within the physical storage locations satisfies a policy threshold associated with a second virtual disk, the second virtual disk is write-inhibited. The first and second virtual disks can be different. | 06-04-2009 |
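The preemptive write-inhibition flow in 20090144499 can be sketched as a policy-threshold scan over virtual disks after an allocation-triggering write. Field names and the threshold convention (inhibit a disk when pool free space falls to or below its limit) are assumptions for illustration:

```python
# Hedged sketch of preemptive write-inhibition (20090144499): when a write
# to one virtual disk requires allocating more physical space, every virtual
# disk whose policy threshold is satisfied by the remaining free space is
# write-inhibited -- possibly a different disk than the one being written.

def process_write(target_vdisk, needs_allocation, pool_free_bytes, vdisks):
    """Apply write-inhibition policy after a write request.

    vdisks: list of dicts with 'name', 'threshold' (free-space level at or
    below which the disk must be inhibited), and an 'inhibited' flag.
    """
    if not needs_allocation:
        return  # no new physical allocation, so no policy check is needed
    for vd in vdisks:
        if pool_free_bytes <= vd["threshold"]:
            vd["inhibited"] = True  # preemptively block further writes

vdisks = [
    {"name": "vd1", "threshold": 10, "inhibited": False},
    {"name": "vd2", "threshold": 50, "inhibited": False},
]
# A write to vd1 forces allocation while only 40 bytes remain unallocated:
process_write("vd1", True, 40, vdisks)
print([vd["name"] for vd in vdisks if vd["inhibited"]])  # ['vd2']
```

Note that the inhibited disk (`vd2`) differs from the written one (`vd1`), matching the abstract's point that the first and second virtual disks can be different.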
20090144500 | STORE PERFORMANCE IN STRONGLY ORDERED MICROPROCESSOR ARCHITECTURE - Apparatus and methods relating to store operations are disclosed. In one embodiment, a first storage unit is to store data. A second storage unit is to store the data only after it has become detectable by a bus agent. Moreover, the second storage unit may store an index field for each data value to be stored within the second storage unit. Other embodiments are also disclosed. | 06-04-2009 |
20090150614 | NON-VOLATILE CACHE IN DISK DRIVE EMULATION - A method and apparatus for deferring media writes for emulation drives are provided. By deferring media writes using non-volatile storage, the performance penalty associated with RMW operations may be minimized. Deferring writes may allow the RMW operations to be done while the disk drive is idle. Further, deferring writes may also allow data blocks to be accumulated over time, allowing a full (4K) disk drive block size to be written with a simple write operation, thus making a RMW unnecessary. | 06-11-2009 |
20090150615 | METHOD AND STRUCTURE FOR PRODUCING HIGH PERFORMANCE LINEAR ALGEBRA ROUTINES USING STREAMING - A method (and structure) for executing a linear algebra subroutine on a computer having a cache, includes streaming data for matrices involved in processing the linear algebra subroutine such that data is processed using data for a first matrix stored in the cache as a matrix format and data from a second matrix and a third matrix is stored in a memory device at a higher level than the cache, the streaming providing data from the higher level as the streaming data is required for the processing. | 06-11-2009 |
20090157961 | TWO-SIDED, DYNAMIC CACHE INJECTION CONTROL - A method, system, and computer program product for two-sided, dynamic cache injection control are provided. An I/O adapter generates an I/O transaction in response to receiving a request for the transaction. The transaction includes an ID field and a requested address. The adapter looks up the address in a cache translation table stored thereon, which includes mappings between addresses and corresponding address space identifiers (ASIDs). The adapter enters an ASID in the ID field when the requested address is present in the cache translation table. IDs corresponding to device identifiers, address ranges, and pattern strings may also be entered. The adapter sends the transaction to one of an I/O hub and system chipset, which, in turn, looks up the ASID in a table stored thereon and injects the requested address and corresponding data in a processor complex when the ASID is present in the table, indicating that the address space corresponding to the ASID is actively running on a processor in the complex. The ASIDs are dynamically determined and set in the adapter during execution of an application in the processor complex. | 06-18-2009 |
20090157962 | CACHE INJECTION USING CLUSTERING - A method and system for cache injection using clustering are provided. The method includes receiving an input/output (I/O) transaction at an input/output device that includes a system chipset or input/output (I/O) hub. The I/O transaction includes an address. The method also includes looking up the address in a cache block indirection table. The cache block indirection table includes fields and entries for addresses and cluster identifiers (IDs). In response to a match resulting from the lookup, the method includes multicasting an injection operation to processor units identified by the cluster ID. | 06-18-2009 |
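The lookup-then-multicast flow of the clustering abstract above can be modeled in a few lines. The table contents and function names here are assumptions made for illustration; a real implementation would live in chipset or I/O hub hardware.

```python
# Toy cache block indirection table: address -> cluster ID, and a cluster
# membership map: cluster ID -> processor units.
CACHE_BLOCK_INDIRECTION = {0x1000: "clusterA", 0x2000: "clusterB"}
CLUSTERS = {"clusterA": [0, 1], "clusterB": [2, 3]}

def handle_io_transaction(address, data, inject):
    """On a table hit, multicast the injection to the cluster's processor
    units; on a miss, fall through to the ordinary memory delivery path."""
    cluster = CACHE_BLOCK_INDIRECTION.get(address)
    if cluster is None:
        return []                   # no match: no injection performed
    targets = CLUSTERS[cluster]
    for cpu in targets:
        inject(cpu, address, data)  # push the line toward each CPU's cache
    return targets
```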
20090157963 | Contiguously packed data - Data for data elements (e.g., pixels) can be stored in an addressable storage unit that can store a number of bits that is not a whole number multiple of the number of bits of data per data element. Similarly, a number of the data elements can be transferred per unit of time over a bus, where the width of the bus is not a whole number multiple of the number of bits of data per data element. Data for none of the data elements is stored in more than one of the storage units or transferred in more than one unit of time. Also, data for multiple data elements is packaged contiguously in the storage unit or across the width of the bus. | 06-18-2009 |
20090157964 | EFFICIENT DATA STORAGE IN MULTI-PLANE MEMORY DEVICES - A method for data storage includes initially storing a sequence of data pages in a memory that includes multiple memory arrays, such that successive data pages in the sequence are stored in alternation in a first number of the memory arrays. The initially-stored data pages are rearranged in the memory so as to store the successive data pages in the sequence in a second number of the memory arrays, which is less than the first number. The rearranged data pages are read from the second number of the memory arrays. | 06-18-2009 |
20090164725 | Method and Apparatus for Fast Processing Memory Array - The illustrative embodiments described herein provide a computer implemented method, apparatus, and computer program product for increasing efficiency associated with data access. In one illustrative embodiment, a memory chip is presented comprising a plurality of memory units for storing data; a plurality of processing units for processing the data; and a word line and a bit line external to the plurality of memory units, wherein the plurality of processing units directly access the word line and the bit line in accessing the data. | 06-25-2009 |
20090164726 | Programmable Address Processor for Graphics Applications - Methods and systems for processing memory lookup requests are provided. In an embodiment, an address processing unit includes an instructions module configured to store instructions to be executed to complete a primary memory lookup request and a logic unit coupled to the instructions module. The primary memory lookup request is associated with a desired address. Based on an instruction stored in the instructions module, the logic unit is configured to generate a secondary memory lookup request that requests the desired address. | 06-25-2009 |
20090164727 | Handling of hard errors in a cache of a data processing apparatus - A data processing apparatus and method are provided for handling hard errors occurring in a cache of the data processing apparatus. The cache storage comprises data storage having a plurality of cache lines for storing data values, and address storage having a plurality of entries, each entry identifying an address indication value for an associated cache line and having associated error data. In response to an access request, a lookup procedure is performed to determine, with reference to the address indication value held in at least one entry of the address storage, whether a hit condition exists in one of the cache lines. Further, error detection circuitry determines, with reference to the error data associated with the at least one entry of the address storage, whether an error condition exists for that entry. Additionally, cache location avoid storage is provided having at least one record, each record being used to store a cache line identifier identifying a specific cache line. On detection of the error condition, one of the records in the cache location avoid storage is allocated to store the cache line identifier for the specific cache line associated with the entry for which the error condition was detected. Further, the error detection circuitry causes a clean and invalidate operation to be performed in respect of the specific cache line, and the access request is then re-performed. The cache access circuitry excludes any specific cache line identified in the cache location avoid storage from the lookup procedure. This provides a simple and effective mechanism for handling hard errors that manifest themselves within a cache during use, ensuring correct operation of the cache in the presence of such hard errors. Further, the technique can be employed not only with write-through caches but also with write-back caches, providing a very flexible solution. | 06-25-2009 |
20090164728 | SEMICONDUCTOR MEMORY DEVICE AND SYSTEM USING SEMICONDUCTOR MEMORY DEVICE - A semiconductor memory device includes a data storage region which includes a plurality of unit data regions storing data, an information storage region which includes a plurality of unit information regions each storing information related to the data stored in associated one of the unit data regions, and an address generation circuit which generates an address designating one of the unit data regions and one of the unit information regions associated with each other. | 06-25-2009 |
20090172283 | Reducing minimum operating voltage through hybrid cache design - Methods and apparatus to reduce minimum operating voltage through a hybrid cache design are described. In one embodiment, a cache with different size bit cells may be used, e.g., to reduce minimum operating voltage of an integrated circuit device that includes the cache and possibly other logic (such as a processor). Other embodiments are also described. | 07-02-2009 |
20090177840 | System and Method for Servicing Inquiry Commands About Target Devices in Storage Area Network - Inquiry data received from sequential target devices is stored in a cache memory. In one embodiment, the cache memory is coupled to a router. In one embodiment, when the router receives from a host an inquiry command about a target, the router first checks to see if the inquiry command can be serviced from the cache. If so, the inquiry data about the target is retrieved from the cache and returned to the host. If not, the router checks to see if the target is busy. If not busy, the router routes the inquiry command to the target and stores the inquiry data returned by the target in the cache. If the target is busy, the router places the inquiry command in a queue. When the target becomes available, the router forwards the inquiry command to the target for processing, thereby keeping the inquiry command from timing out. | 07-09-2009 |
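The router's three-way decision in the abstract above (serve from cache, queue if busy, or forward and cache) is easy to sketch. The Router and target classes below are illustrative stand-ins for the real SCSI router and sequential target devices, not the patent's implementation.

```python
from collections import deque

class Router:
    """Caches inquiry data and queues inquiries to busy targets."""

    def __init__(self):
        self.cache = {}    # target id -> inquiry data
        self.queues = {}   # target id -> pending inquiry commands

    def inquiry(self, target):
        if target.id in self.cache:       # 1. serviceable from cache
            return self.cache[target.id]
        if target.busy:                   # 2. queue instead of timing out
            self.queues.setdefault(target.id, deque()).append("INQUIRY")
            return None
        data = target.service("INQUIRY")  # 3. forward, then cache the reply
        self.cache[target.id] = data
        return data

    def on_target_available(self, target):
        # Drain queued inquiries once the target is free again.
        while self.queues.get(target.id):
            self.queues[target.id].popleft()
            self.cache[target.id] = target.service("INQUIRY")
```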
20090182941 | Web Server Cache Pre-Fetching - A method and apparatus for a server that includes a file processor that interprets each requested data file, such as a web page, requested by a client in a process analogous to that of a browser application or other requesting application. The file processor initiates the loading of each referenced data item within the requested document in anticipation that the client will make the same requests upon receiving the requested data file. Each referenced data item is loaded into the server cache. When the client browser application requests these referenced data items they can be returned to the client browser application without accessing a slower persistent data storage. The requested data items are loaded from the server cache, which has a faster access time than the persistent data storage. | 07-16-2009 |
20090198891 | Issuing Global Shared Memory Operations Via Direct Cache Injection to a Host Fabric Interface - A data processing system enables global shared memory (GSM) operations across multiple nodes with a distributed EA-to-RA mapping of physical memory. Each node has a host fabric interface (HFI), which includes HFI windows that are assigned to at most one locally-executing task of a parallel job. The tasks perform parallel job execution, but map only a portion of the effective addresses (EAs) of the global address space to the local, real memory of the task's respective node. The HFI window tags all outgoing GSM operations (of the local task) with the job ID, and embeds the target node and HFI window IDs of the node at which the EA is memory mapped. The HFI window also enables processing of received GSM operations with valid EAs that are homed to the local real memory of the receiving node, while preventing processing of other received operations without a valid EA-to-RA local mapping. | 08-06-2009 |
20090198892 | RAPID CACHING AND DATA DELIVERY SYSTEM AND METHOD - The initial systems analysis of a new data source fully defines each data element and also designs, tests and encodes complete data integration instructions for each data element. A metadata cache stores the data element definition and data element integration instructions. The metadata cache enables a comprehensive view of data elements in an enterprise data architecture. When data is requested that includes data elements defined in a metadata cache, the metadata cache and associated software modules automatically generate database elements to fully integrate the requested data elements into existing databases. | 08-06-2009 |
20090198893 | Microprocessor systems - A memory management arrangement includes a memory management unit | 08-06-2009 |
20090198894 | Method Of Updating IC Instruction And Data Cache - A method of updating a cache in an integrated circuit is provided. The integrated circuit incorporates the cache, memory and a memory interface connected to the cache and memory. Following a cache miss, the method fetches, using the memory interface, first data associated with the cache miss and second data from the memory, where the second data is stored in the memory adjacent the first data, and updates the cache with the fetched first and second data via the memory interface. The cache includes instruction and data cache, the method performing arbitration between instruction cache misses and data cache misses such that the fetching and updating are performed for data cache misses before instruction cache misses. | 08-06-2009 |
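Two pieces of the abstract above lend themselves to a sketch: the adjacent-data fetch on a miss, and the arbitration that services data-cache misses ahead of instruction-cache misses. The priority encoding and names below are assumptions for illustration only.

```python
import heapq

DATA, INSTR = 0, 1   # lower value = higher arbitration priority
pending = []         # heap of (priority, arrival order, address)
seq = 0

def report_miss(kind, address):
    """Record a cache miss; data misses sort ahead of instruction misses."""
    global seq
    heapq.heappush(pending, (kind, seq, address))
    seq += 1

def service_next(memory, cache):
    """Service the highest-priority miss: fetch the missed word plus the
    adjacent word from memory and update the cache with both."""
    kind, _, addr = heapq.heappop(pending)
    cache[addr], cache[addr + 1] = memory[addr], memory[addr + 1]
    return kind, addr
```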
20090204761 | PSEUDO-LRU CACHE LINE REPLACEMENT FOR A HIGH-SPEED CACHE - Embodiments of the present invention provide a system that replaces an entry in a least-recently-used way in a skewed-associative cache. The system starts by receiving a cache line address. The system then generates two or more indices using the cache line address. Next, the system generates two or more intermediate indices using the two or more indices. The system then uses at least one of the two or more indices or the two or more intermediate indices to perform a lookup in one or more lookup tables, wherein the lookup returns a value which identifies a least-recently-used way. Next, the system replaces the entry in the least-recently-used way. | 08-13-2009 |
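The index-combining scheme of the pseudo-LRU abstract above can be sketched for the simplest case. A 2-way skewed-associative cache, these particular hash functions, and a one-bit-per-slot lookup table are all assumptions chosen to keep the example minimal.

```python
TABLE_SIZE = 64
lru_table = [0] * TABLE_SIZE   # 0 -> way 0 is LRU, 1 -> way 1 is LRU

def index0(addr):
    return addr % TABLE_SIZE                     # way 0's skewing hash

def index1(addr):
    return ((addr // TABLE_SIZE) ^ addr) % TABLE_SIZE  # way 1's skewing hash

def intermediate(addr):
    # Combine the per-way indices into one intermediate index.
    return (index0(addr) ^ index1(addr)) % TABLE_SIZE

def victim_way(addr):
    """Look up which way is least recently used for this line address."""
    return lru_table[intermediate(addr)]

def touch(addr, way):
    """Record that `way` was just used, making the other way the LRU."""
    lru_table[intermediate(addr)] = 1 - way
```

With more ways, the single bit per slot generalizes to a small tree of bits, which is why the scheme is "pseudo" rather than exact LRU.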
20090210622 | COMPRESSED CACHE IN A CONTROLLER PARTITION - A method of extending functionality of a data storage facility by adding to the primary storage system new functions using extension function subsystems is disclosed. One example of extending the functionality includes compressing and caching data in a data storage facility to improve storage and access performance of the data storage facility. A primary storage system queries a data storage extension system for availability of data tracks. If the primary storage system does not receive a response or the data tracks from the data storage extension system, it continues caching by fetching data tracks from a disk storage system. The storage extension system manages compression/decompression of data tracks in response to messages from the primary storage system. Data tracks transferred from the data storage extension system to the primary storage system are marked as stale at the data storage extension system and are made available for deletion. | 08-20-2009 |
20090210623 | SYSTEM AND ARTICLE OF MANUFACTURE FOR STORING DATA - Provided are a system, and article of manufacture, wherein a first storage unit is coupled to a second storage unit. The first storage unit and the second storage unit are detected. A determination is made that the first storage unit is capable of responding to a write operation faster than the second storage unit, and that the second storage unit is capable of responding to a read operation at least as fast as the first storage unit. Data is written to the first storage unit. A transfer of the data is initiated from the first storage unit to the second storage unit. The data is read from the second storage unit, in response to a read request directed at both the first and the second storage units. | 08-20-2009 |
20090216947 | SYSTEM, METHOD AND PROCESSOR FOR ACCESSING DATA AFTER A TRANSLATION LOOKASIDE BUFFER MISS - Data is accessed in a multi-level hierarchical memory system. A request for data is received, including a virtual address for accessing the data. A translation buffer is queried to obtain an absolute address corresponding to the virtual address. Responsive to the translation buffer not containing an absolute address corresponding to the virtual address, the absolute address is obtained from a translation unit. A directory look-up is performed with the absolute address to determine whether a matching absolute address exists in the directory. A fetch request for the requested data is sent to a next level in the multi-level hierarchical memory system. Processing of the fetch request by the next level occurs in parallel with the directory lookup. The requested data is received in the primary cache to make the requested data available to be written to the primary cache. | 08-27-2009 |
20090216948 | METHOD FOR SUBSTANTIALLY UNINTERRUPTED CACHE READOUT - A memory device capable of sequentially outputting multiple pages of cached data while mitigating any interruption typically caused by fetching and transferring operations. The memory device outputs cached data from a first page while data from a second page is fetched into sense amplifier circuitry. When the outputting of the first page reaches a predetermined transfer point, a portion of the fetched data from the second page is transferred into the cache at the same time the remainder of the cached first page is being output. The remainder of the second page is transferred into the cache after all of the data from the first page is output while the outputting of the first portion of the second page begins with little or no interruption. | 08-27-2009 |
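The overlapped readout described above can be modeled step by step. The 8-byte page and the transfer point after byte 6 are arbitrary assumptions; the point of the sketch is that the head of the next page is copied behind the read pointer while the tail of the current page is still streaming out.

```python
PAGE = 8
TRANSFER_POINT = 6   # assumed: head of next page transfers after 6 of 8 bytes

def read_pages(pages):
    """Stream pages out of a one-page cache with overlapped fetch/transfer."""
    out = []
    cache = list(pages[0])                # first page is fetched up front
    for n in range(len(pages)):
        # Sensing of page n+1 proceeds while page n is being output.
        sense = list(pages[n + 1]) if n + 1 < len(pages) else None
        for i in range(PAGE):
            out.append(cache[i])
            if sense and i + 1 == TRANSFER_POINT:
                # Overwrite the already-output portion of the cache with
                # the head of the next page, behind the read pointer.
                cache[:TRANSFER_POINT] = sense[:TRANSFER_POINT]
        if sense:
            cache[TRANSFER_POINT:] = sense[TRANSFER_POINT:]
    return out
```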
20090235026 | DATA TRANSFER CONTROL DEVICE AND DATA TRANSFER CONTROL METHOD - A disclosed data transfer control device includes a main memory unit; a cache memory unit; a command generation unit configured to generate a command to read out data from the main memory unit in accordance with a first address input to the command generation unit; and a storage unit configured to store an information item indicating whether the first address and data corresponding to the first address are stored in the cache memory unit. In the data transfer control device, when the information item stored in the storage unit indicates that there are no data corresponding to the first address in the cache memory unit, the command generation unit generates the command based on the first address before output of data corresponding to a second address that is input immediately before the first address is input. | 09-17-2009 |
20090240887 | INFORMATION PROCESSING UNIT, PROGRAM, AND INSTRUCTION SEQUENCE GENERATION METHOD - An information processing unit includes at least one cache memory provided between an instruction execution section and a storage section, and a control section that controls the content of address information based on a result of comparing an address requested for memory access by a hardware prefetch request issuing section with address information held in an address information holding section. When the control section causes the address information holding section to hold address information, or when address information in the address information holding section is updated, overwrite processing on that address information is inhibited for a predetermined time. | 09-24-2009 |
20090248982 | CACHE CONTROL APPARATUS, INFORMATION PROCESSING APPARATUS, AND CACHE CONTROL METHOD - A cache control apparatus determines whether to adopt data acquired by a speculative fetch, which is a memory fetch request output before it becomes clear whether the data requested by a CPU is stored in the CPU's cache. The determination is made by monitoring the status of the speculative fetch and a total time obtained by adding the time from when the speculative fetch is output to when it reaches a memory controller, and the time from completion of writing data to memory, as specified by a data write command issued for the same address before the speculative fetch, to when the response to that write command is returned. | 10-01-2009 |
20090254706 | METHOD AND SYSTEM FOR APPROXIMATING OBJECT SIZES IN AN OBJECT-ORIENTED SYSTEM - A method and system for increasing a system's performance and achieving improved memory utilization by approximating the memory sizes that will be required for data objects that can be deserialized and constructed in a memory cache. The method and system may use accurate calculations or measurements of similar objects to calibrate the approximate memory sizes. | 10-08-2009 |
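The calibration idea in the abstract above, using measurements of similar objects to tune size approximations, can be sketched as a simple linear fit. The two-sample fit and the function names are assumptions for illustration; the patent does not specify a calibration formula.

```python
calibration = {}   # type name -> (base_bytes, per_item_bytes)

def calibrate(type_name, samples):
    """Fit a base cost plus per-item cost from two measured sample objects,
    each given as (item_count, measured_bytes)."""
    (n1, s1), (n2, s2) = samples
    per_item = (s2 - s1) / (n2 - n1)
    calibration[type_name] = (s1 - per_item * n1, per_item)

def approx_size(type_name, n_items):
    """Approximate the memory a deserialized object of this type will need,
    e.g. to reserve space in a memory cache before construction."""
    base, per_item = calibration[type_name]
    return base + per_item * n_items
```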
20090254707 | Partial Content Caching - A network device, known as an appliance, is located in the data path between a client and a server. The appliance includes a cache that is used to cache static and near-static cacheable content items. When a request is received, the appliance determines whether any portion of the requested data is available in its cache; if so, that portion can be serviced by the appliance. If any portion of the requested content is dynamic and cannot be serviced by the cache, the dynamic portion is generated by the appliance or obtained from another source such as an application server. The appliance integrates the content retrieved from the cache, the dynamically generated content, and the content received from other sources to generate a response to the original content request. The present invention thus implements partial content caching for content that has a cached portion and a portion to be dynamically generated. | 10-08-2009 |
20090254708 | METHOD AND APPARATUS FOR DELIVERING AND CACHING MULTIPLE PIECES OF CONTENT - Aspects relate to systems and methods for providing the ability to customize content delivery. A device can cache multiple presentations. The device can establish a cache depth upon initiation of the subscription service. The device can provide an interface to select a cache depth. The cache depth can be the number of presentations the device will maintain on the device at a given time. | 10-08-2009 |
20090254709 | Prediction Mechanism for Subroutine Returns in Binary Translation Sub-Systems of Computers - A sequence of input language (IL) instructions of a guest system is converted, for example by binary translation, into a corresponding sequence of output language (OL) instructions of a host system, which executes the OL instructions. In order to determine the return address after any IL call to a subroutine at a target entry address P, the corresponding OL return address is stored in an array at a location determined by an index calculated as a function of P. After completion of execution of the OL translation of the IL subroutine, execution is transferred to the address stored in the array at the location where the OL return address was previously stored. A confirm instruction block is included in each OL call site to determine whether the transfer was to the correct or incorrect call site, and a back-up routine is included to handle the cases of incorrect call sites. | 10-08-2009 |
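The return-address array mechanism described above reduces to a hash-indexed store and a guarded jump. The array size, hash function, and back-up routine below are stand-ins; in the actual scheme the confirm block is emitted inline at each translated call site.

```python
ARRAY_SIZE = 256
return_array = [None] * ARRAY_SIZE

def slot(entry_addr):
    return entry_addr % ARRAY_SIZE   # index computed as a function of P

def on_il_call(entry_addr, ol_return_addr):
    """At an IL call to subroutine entry P, store the translated OL
    return address at the slot derived from P."""
    return_array[slot(entry_addr)] = ol_return_addr

def on_il_return(entry_addr, expected_return):
    """The OL epilogue transfers to the stored address; the confirm check
    detects a wrong call site (e.g. after a slot collision) and falls back
    to the slower back-up routine."""
    target = return_array[slot(entry_addr)]
    if target == expected_return:
        return target
    return backup_lookup(expected_return)

def backup_lookup(addr):
    return addr                      # stand-in for the slow recovery path
```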
20090276571 | Enhanced Direct Memory Access - A method for facilitating direct memory access in a computing system in response to a request to transfer data is provided. The method comprises selecting a thread for transferring the data, wherein the thread executes on a processing core within the computing system; providing the thread with the request, wherein the request comprises information for carrying out a data transfer; and transferring the data according to the request. The method may further comprise: coordinating the request with a memory management unit, such that virtual addresses may be used to transfer data; invalidating a cache line associated with the source address or flushing a cache line associated with the destination address, if requested. Multiple threads can be selected to transfer data based on their proximity to the destination address. | 11-05-2009 |
20090276572 | Memory Management Among Levels of Cache in a Memory Hierarchy - Methods, apparatus, and product for memory management among levels of cache in a memory hierarchy in a computer with a processor operatively coupled through two or more levels of cache to a main random access memory, caches closer to the processor in the hierarchy characterized as higher in the hierarchy, including: identifying a line in a first cache that is preferably retained in the first cache, the first cache backed up by at least one cache lower in the memory hierarchy, the lower cache implementing an LRU-type cache line replacement policy; and updating LRU information for the lower cache to indicate that the line has been recently accessed. | 11-05-2009 |
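The hint described above, refreshing a line's recency in a lower-level LRU cache because a higher-level cache prefers to retain it, can be sketched with an ordered map standing in for the lower cache's LRU state. This is an assumed model, not the patent's hardware mechanism.

```python
from collections import OrderedDict

class LRUCache:
    """Lower-level cache with a strict LRU replacement policy."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()   # oldest entry first

    def access(self, line):
        if line in self.lines:
            self.lines.move_to_end(line)        # mark most recently used
        else:
            if len(self.lines) >= self.capacity:
                self.lines.popitem(last=False)  # evict the true LRU line
            self.lines[line] = True

def retain_in_l1(l2, line):
    """L1 prefers to keep `line`: refresh its LRU position in the backing
    L2 so L2 does not evict the copy that backs up the retained line."""
    l2.access(line)
```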
20090276573 | Transient Transactional Cache - In one embodiment, a processor comprises an execution core, a level 1 (L1) data cache coupled to the execution core and configured to store data, and a transient/transactional cache (TTC) coupled to the execution core. The execution core is configured to generate memory read and write operations responsive to instruction execution, and to generate transactional read and write operations responsive to executing transactional instructions. The L1 data cache is configured to cache memory data accessed responsive to memory read and write operations to identify potentially transient data and to prevent the identified transient data from being stored in the L1 data cache. The TTC is also configured to cache transaction data accessed responsive to transactional read and write operations to track transactions. Each entry in the TTC is usable for transaction data and for transient data. | 11-05-2009 |
20090276574 | ARITHMETIC DEVICE, ARITHMETIC METHOD, HARD DISC CONTROLLER, HARD DISC DEVICE, PROGRAM CONVERTER, AND COMPILER - This arithmetic device includes: a first memory to store a first program; a first arithmetic module to read the first program from the first memory to execute the first program; a second memory to store a second program which is embedded in processing of the first program and called from the first arithmetic module and executed, and whose access speed is lower than the first memory; a third memory storing data temporarily and whose access speed is higher than the second memory; a second arithmetic module to read the second program from the second memory and store in a third memory; and a third arithmetic module to read the second program from the third memory to execute the second program, in accordance with a call from the first arithmetic module to execute the first program. | 11-05-2009 |
20090276575 | INFORMATION PROCESSING APPARATUS AND COMPILING METHOD - According to one embodiment, an information processing apparatus includes a processor, a cache, and a cache controller. The processor is configured to output a memory access request for accessing an entity of a variable stored in a variable-storage region provided in a memory by using first or second memory address. Both the first and second memory addresses are allocated to the variable-storage region. The cache is configured to store some of data items stored in the memory. The cache controller is configured to access the memory or the cache by using a memory address designating the variable-storage region, in accordance with one of the first and second memory addresses which is included in a memory access request coming from the processor. | 11-05-2009 |
20090287883 | Least recently used ADC - Over the last 75 years, analog-to-digital converters have revolutionized the signal processing industry. As transistor sizes shrank, higher bit resolutions were achieved, but flash and other fast full-conversion ADC implementations have always consumed relatively high power, because conversion is initiated from the beginning for every sample of the analog signal. ADC conversion, particularly in flash ADCs, is a highly mathematical number system problem, and faster, lower-power, partitioned ADCs enable better solutions across many expanding signal processing fields. When the signal does not change abruptly, there is room to apply cache principles, as is done in this invention: a smaller ADC performs the full conversions, the results are stored in the upfront signal path as cached values, and subsequent samples are served from that cached value set. A balance must be struck among the number of cache entries, the power consumed, and the backend full-conversion ADC, which is rarely engaged when cache hits are frequent, as is desirable. | 11-19-2009 |
20090292877 | EVENT SERVER USING CACHING - An event server adapted to receive events from an input stream and produce an output event stream. The event server uses a processor running code in an event processing language to process the events, and obtains input events from and/or produces output events to a cache. | 11-26-2009 |
20090292878 | METHOD AND SYSTEM FOR PROVIDING DIGITAL RIGHTS MANAGEMENT FILES USING CACHING - A method for providing DRM files using caching includes identifying DRM files to be displayed in a file list in response to a request, decoding a number of first DRM files from among the identified DRM files and caching the first DRM files in a first memory space, and reading the first DRM files in the first memory space in response to the request. Then, a system displays the first DRM files as a file list in a display area. The second DRM files from among the identified DRM files other than the first DRM files are not initially decoded, and file data related to the second DRM files are cached in a second memory space. DRM files from among the second DRM files are subsequently decoded in response to a subsequent command. | 11-26-2009 |
20090292879 | Nodma cache - A NoDMA cache including a super page field. The super page field indicates when a set of pages contain protected information. The NoDMA cache is used by a computer system to deny I/O device access to protected information in system memory. | 11-26-2009 |
20090300286 | METHOD FOR COORDINATING UPDATES TO DATABASE AND IN-MEMORY CACHE - A computer method and system of caching. In a multi-threaded application, different threads execute respective transactions accessing a data store (e.g. database) from a single server. The method and system represent the status of data store transactions using respective parameters of a designated type (e.g. Future). | 12-03-2009 |
20090300287 | Method and apparatus for controlling cache memory - An apparatus for controlling a cache memory that stores therein data transferred from a main storing unit includes a computing processing unit that executes a computing process using data, a connecting unit that connects an input portion and an output portion of the cache memory, a control unit that causes data in the main storing unit to be transferred to the output portion of the cache memory through the connecting unit when the data in the main storing unit is input from the input portion of the cache memory into the cache memory, and a transferring unit that transfers data transferred by the control unit to the output portion of the cache memory, to the computing processing unit. | 12-03-2009 |
20090307428 | INCREASING REMOTE DESKTOP PERFORMANCE WITH VIDEO CACHING - Described techniques improve remote desktop responsiveness by caching an image of a desktop when the host operating system running on the remote desktop server stores graphics output in video memory. Once cached, a Tile Desktop Manager may prioritize the scanning of regions or tiles of the cached image based on data received from the operating system. Once regions or tiles that have changed are detected, the changed tiles are copied from the cached desktop image and transmitted to the remote desktop client. The cached desktop image is refreshed based on a feedback loop. | 12-10-2009 |
20090307429 | Storage system, storage subsystem and storage control method - Proposed is a storage system capable of preventing the compression of a cache memory caused by data remaining in a cache memory of a storage subsystem without being transferred to a storage area of an external storage, and maintaining favorable I/O processing performance of the storage subsystem. In this storage system where an external storage is connected to the storage subsystem and the storage subsystem provides a storage area of the external storage as its own storage area, provided is a volume for saving dirty data remaining in a cache memory of the storage subsystem without being transferred to the external volume. The storage system recognizes the compression of the cache memory, and eliminates the overload of the cache memory by saving dirty data in a save volume. | 12-10-2009 |
20090327607 | Apparatus and method for cache utilization - In some embodiments, an electronic system may include a cache located between a mass storage and a system memory, and code stored on the electronic system to prevent storage of stream data in the cache and to send the stream data directly between the system memory and the mass storage based on a comparison of first metadata of a first request for first information and pre-boot stream information stored in a previous boot context. Other embodiments are disclosed and claimed. | 12-31-2009 |
20090327608 | Accelerated resume from hibernation in a cached disk system - Various embodiments of the invention use a non-volatile (NV) memory to store hiberfile data before entering a hibernate state, and retrieve the data upon resume from hibernation. Unlike conventional systems, the reserve space in the NV memory (i.e., the erased blocks available to be used while in the run-time mode) may be used to store hiberfile data. Further, a write-through cache policy may be used to assure that all of the hiberfile data saved in cache will also be stored on the disk drive during the hibernation, so that if the cache and the disk drive are separated during hibernation, the full correct hiberfile data will still be available for a resume operation. | 12-31-2009 |
20090327609 | Performance based cache management - Methods and apparatus to manage cache memory are disclosed. In one embodiment, an electronic device comprises a first processing unit, a first cache memory, a first cache controller, and a power management module, wherein the power management module determines at least one operating parameter for the cache memory and passes the at least one operating parameter to the cache controller. Further, the first cache controller manages the cache memory according to the at least one operating parameter, and the power management module evaluates operating data for the cache memory from the cache controller and generates at least one modified operating parameter for the cache memory based on that operating data. | 12-31-2009 |
20100023690 | CACHING DYNAMIC CONTENTS AND USING A REPLACEMENT OPERATION TO REDUCE THE CREATION/DELETION TIME ASSOCIATED WITH HTML ELEMENTS - An event to delete a structured object of a Web page rendered in a browser can be detected. The structured object comprises an HTML element set that was dynamically created for the Web page. The structured object can be placed in a cache without deleting memory allocations for the structured object. An event to dynamically create a new object of the Web page can be detected. The cache can be queried to find an object with structure equivalent to that of the new object. The found object can be taken from the cache and used as the new object after content of the cached object is replaced with that needed for the new object. Memory allocation and deallocation costs that would otherwise be needed to dispose of a dynamic HTML element set and to create a new HTML element set are thus saved using the cache. | 01-28-2010 |
20100023691 | SYSTEM AND METHOD FOR IMPROVING A BROWSING RATE IN A HOME NETWORK - A system and method for improving a browsing rate in a Universal Plug and Play (UPnP) Audio/Video (AV) home network. A control point predicts browse data using a pre-fetching operation and pre-fetches and stores the predicted browse data, which is temporarily stored in a cache implemented in the control point. Accordingly, when a user has selected a corresponding container, the control point displays the pre-fetched browse data. The user can directly use the browse data and experiences a fast response. | 01-28-2010 |
20100023692 | MODULAR THREE-DIMENSIONAL CHIP MULTIPROCESSOR - A chip multiprocessor die supports optional stacking of additional dies. The chip multiprocessor includes a plurality of processor cores, a memory controller, and stacked cache interface circuitry. The stacked cache interface circuitry is configured to attempt to retrieve data from a stacked cache die if the stacked cache die is present but not if the stacked cache die is absent. In one implementation, the chip multiprocessor die includes a first set of connection pads for electrically connecting to a die package and a second set of connection pads for communicatively connecting to the stacked cache die if the stacked cache die is present. Other embodiments, aspects and features are also disclosed. | 01-28-2010 |
20100023693 | Method and system for tiered distribution in a content delivery network - A tiered distribution service is provided in a content delivery network (CDN) having a set of surrogate origin (namely, “edge”) servers organized into regions and that provide content delivery on behalf of participating content providers, wherein a given content provider operates an origin server. According to the invention, a cache hierarchy is established in the CDN comprising a given edge server region and either (a) a single parent region, or (b) a subset of the edge server regions. In response to a determination that a given object request cannot be serviced in the given edge region, instead of contacting the origin server, the request is provided to either the single parent region or to a given one of the subset of edge server regions for handling, preferably as a function of metadata associated with the given object request. The given object request is then serviced, if possible, by a given CDN server in either the single parent region or the given subset region. The original request is only forwarded on to the origin server if the request cannot be serviced by an intermediate node. | 01-28-2010 |
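The tiered-distribution entry above describes a cache hierarchy consulted tier by tier, contacting the origin server only when no intermediate region can serve the request. A minimal sketch of that lookup chain follows; the `CacheTier` class, tier names, and fill-on-the-way-back behavior are illustrative assumptions, not the patent's specific design.

```python
class CacheTier:
    """One tier in the hierarchy (edge region, parent region, or origin)."""
    def __init__(self, name, store=None, is_origin=False):
        self.name = name
        self.store = store or {}
        self.is_origin = is_origin

    def get(self, key):
        # The origin is authoritative: it can always produce the object.
        if self.is_origin:
            return "origin-content:" + key
        return self.store.get(key)

def fetch(key, tiers):
    """Try each tier in order; populate the tiers that missed on the way back."""
    for i, tier in enumerate(tiers):
        value = tier.get(key)
        if value is not None:
            for missed in tiers[:i]:
                missed.store[key] = value   # later requests hit an earlier tier
            return value, tier.name
    return None, None
```

A request served by the parent region leaves a copy at the edge, so a repeat of the same request is answered without leaving the edge region.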
20100030963 | MANAGING STORAGE OF CACHED CONTENT - A method of controlling storage of content on a storage device includes communicating with a storage device configured to cache content; and determining a storage cost for caching a first set of data objects on the storage device. The determining is based, at least in part, on characteristics of the first set of data objects and on characteristics of the storage device. Also provided is a storage system that includes a storage device capable of caching media content, a storage device agent and a cache manager. The storage device agent is operative to communicate with the storage device and with the cache manager, and to provide a storage cost to the cache manager. The storage device agent determines the storage cost for caching a data object on the storage device based, at least in part, on characteristics of the data object and on characteristics of the storage device. | 02-04-2010 |
20100042783 | DATA VAULTING IN EMERGENCY SHUTDOWN - A method for data storage includes accepting write commands belonging to a storage operation invoked by a host computer, and caching the write commands in a volatile memory that is powered by external electrical power. A current execution status of the storage operation is also cached in the volatile memory. | 02-18-2010 |
20100042784 | METHOD FOR COMMUNICATION BETWEEN TWO MEMORY-RELATED PROCESSES IN A COMPUTER SYSTEM, CORRESPONDING SOFTWARE PRODUCT, COMPUTER SYSTEM AND PRINTING SYSTEM - For optimized communication between two memory-related processes in a computer system, a synchronization function is coupled with an operating system function such that it withholds an output of an operating system message that signals a data end of a file in a memory region of the computer system. It can thus be avoided that a memory read process interrupts the reading of the file because a memory write process has not yet written all data of the file into the corresponding memory region. | 02-18-2010 |
20100049920 | DYNAMICALLY ADJUSTING WRITE CACHE SIZE - A storage system includes a backend storage unit for storing electronic information; a controller unit for controlling reading and writing to the backend storage unit; and at least one of a cache and a non-volatile storage for storing the electronic information during at least one of the reading and the writing; the controller unit executing machine readable and machine executable instructions including instructions for: testing whether a frequency of a non-volatile-storage-full condition has occurred above an upper threshold frequency value or below a lower threshold frequency value; if the frequency of the condition has exceeded a threshold frequency value, then calculating a new size; calculating an expected average response time for the new size; comparing the actual response time to the expected response time; and adjusting or not adjusting a size of the non-volatile storage to minimize the response time. | 02-25-2010 |
20100057993 | INTER-FRAME TEXEL CACHE - Methods, apparatuses, and systems are presented for caching. A cache memory area may be used for storing data from memory locations in an original memory area. The cache memory area may be used in conjunction with a repeatedly updated record of storage associated with the cache memory area. The repeatedly updated record of storage can thus provide a history of data storage associated with the cache memory area. The cache memory area may be loaded with entries previously stored in the cache memory area, by utilizing the repeatedly updated record of storage. In this manner, the record may be used to “warm up” the cache memory area, loading it with data entries that were previously cached and may be likely to be accessed again if repetition of memory accesses exists in the span of history captured by the repeatedly updated record of storage. | 03-04-2010 |
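The "warm up" idea in the inter-frame texel cache entry above can be sketched as replaying a repeatedly updated access record into a fresh cache before the next frame begins. The LRU cache shape, the bounded `deque` record, and the `loader` callback are assumptions for illustration only.

```python
from collections import OrderedDict, deque

class RecordedCache:
    """An LRU cache paired with a bounded, repeatedly updated access record."""
    def __init__(self, capacity, record_len):
        self.capacity = capacity
        self.data = OrderedDict()                 # the cache memory area
        self.record = deque(maxlen=record_len)    # history of recent accesses

    def access(self, key, loader):
        self.record.append(key)
        if key in self.data:
            self.data.move_to_end(key)            # refresh LRU position
            return self.data[key]
        value = loader(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)         # evict least recently used
        return value

def warm_up(cold_cache, record, loader):
    """Replay a previous record into a cold cache so it starts 'warm'."""
    for key in record:
        cold_cache.access(key, loader)
```

If memory accesses repeat within the span of history the record captures, the warmed cache begins the frame already holding the entries that are likely to be touched again.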
20100064105 | Leveraging Synchronous Communication Protocols to Enable Asynchronous Application and Line-of-Business Behaviors - Methods and systems of leveraging synchronous communication protocols to enable asynchronous application and line of business behaviors. An application platform may be provided and configured to provide a pending state for any synchronous operation. The pending state may indicate that the operation has not been completed yet. For an application which may know how to track an operation that has a pending state, the application may control when the operation enters and exits the pending state. The application may communicate to the application platform to hold off on other operations dependent upon the pending operation when the pending operation is not complete. For an application which does not know how to track an operation that has a pending state, the application platform may ignore the pending state of the operation and proceed to other operations. Accordingly, the synchronous user experience is preserved where a straightforward, down-level user interface and experience is appropriate. The user interface and experience is also extended when an application knows how to interpret and present the asynchronous nature of various underlying systems. | 03-11-2010 |
20100070708 | ARITHMETIC PROCESSING APPARATUS AND METHOD - An apparatus includes a TLB storing a part of a TSB area included in a memory accessed by the apparatus. The TSB area stores an address translation pair for translating a virtual address into a physical address. The apparatus further includes a cache memory that temporarily stores the pair; a storing unit that stores a starting physical address of the pair stored in the memory; a calculating unit that calculates, based on the starting physical address and a virtual address to be translated, a TSB pointer used in obtaining from the TSB area a corresponding address translation pair for that virtual address; and an obtaining unit that obtains the corresponding pair from the TSB area using the calculated TSB pointer and stores the corresponding pair in the cache memory, if the corresponding pair is not retrieved from the TLB or the cache memory. | 03-18-2010 |
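The TSB pointer calculation in the entry above amounts to indexing a table of translation pairs with bits of the virtual address, starting from the stored base physical address. The page size, entry size, and table length below are illustrative assumptions, not values from the patent.

```python
PAGE_SHIFT = 13          # assumed 8 KiB pages
ENTRY_SIZE = 16          # assumed bytes per address-translation pair
TSB_ENTRIES = 512        # assumed table length (a power of two)

def tsb_pointer(tsb_base, virtual_address):
    """Physical address of the TSB entry covering a virtual address."""
    vpn = virtual_address >> PAGE_SHIFT      # virtual page number
    index = vpn & (TSB_ENTRIES - 1)          # hash: low bits of the VPN
    return tsb_base + index * ENTRY_SIZE
```

On a TLB and cache miss, the hardware would read the translation pair at this pointer and install it, avoiding a software page-table walk when the TSB entry matches.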
20100077142 | EFFICIENTLY CREATING A SNAPSHOT OF A LARGE CONSISTENCY GROUP - Preparation of a snapshot for data storage includes receiving a first command to prepare to create a snapshot of a set of data stored on at least one source storage volume in a data storage system. The data storage system is prepared to expedite creation of the snapshot in response to the first command. A second command to create the snapshot is received subsequent to the first command. The snapshot is created, in response to the second command, by copying the set of data onto at least one target storage volume at an event time. | 03-25-2010 |
20100077143 | Monitoring a data processing apparatus and summarising the monitoring data - A data processing apparatus is disclosed that comprises monitoring circuitry for monitoring accesses to a plurality of addressable locations within said data processing apparatus that occur between start and end events said monitoring circuitry comprising: an address location store for storing data identifying said plurality of addressable locations to be monitored and a monitoring data store; said monitoring circuitry being responsive to detection of said start event to detect accesses to said plurality of addressable locations and to store monitoring data relating to a summary of said detected accesses in said monitoring data store; and said monitoring circuitry being responsive to detection of said end event to stop collecting said monitoring data; said monitoring circuit being responsive to detection of a flush event to output said stored monitoring data and to flush said monitoring data store. | 03-25-2010 |
20100077144 | TRANSPARENT RESOURCE ADMINISTRATION USING A READ-ONLY DOMAIN CONTROLLER - A domain controller hierarchy in accordance with implementations of the present invention involves one or more local domain controllers, such as one or more read-only local domain controllers in communication with one or more writable hub domain controllers. The local domain controllers include a resource manager, such as a Security Account Manager (“SAM”), that manages resources and/or other accounts information received from the writable hub domain controller. When a local user attempts to change the resource at the local domain controller, however, the resource manager chains the request, along with any appropriate identifiers for the request, to the writable hub domain controller, where the request is processed. If appropriate, the hub domain controller sends a response that the resource has been updated as requested and also sends a copy of the updated resource to be cached at the local domain controller. | 03-25-2010 |
20100082903 | NON-VOLATILE SEMICONDUCTOR MEMORY DRIVE, INFORMATION PROCESSING APPARATUS AND DATA ACCESS CONTROL METHOD OF THE NON-VOLATILE SEMICONDUCTOR MEMORY DRIVE - According to one embodiment, a non-volatile semiconductor memory drive stores an address table in a non-volatile semiconductor memory in predetermined units that are storage units of data in the non-volatile semiconductor memory, manages a second address table associating a logical address with a physical address with respect to each part of the address table stored in the non-volatile semiconductor memory, and temporarily stores, in a cache memory, each part of the address table which has been read in the predetermined units from the non-volatile semiconductor memory, based on the second address table. | 04-01-2010 |
20100088470 | OPTIMIZING INFORMATION LIFECYCLE MANAGEMENT FOR FIXED STORAGE - The method may query the disk drive for a size where size may be a total number of logical blocks on the disk drive. The drive may receive a size response where the size includes a total number of logical blocks on the disk drive. The number of usage blocks necessary to represent the number of logical blocks on the disk drive may then be determined and usage data may be stored in the usage blocks. The data may be stored in the buffer of the disk drive. The data may also be stored in the DDF of a RAID drive. The data may be used to permit incremental backups of disk drives by backing up only the blocks that are indicated as having been changed. In addition, information about the access to the drive may be collected and stored for later analysis. | 04-08-2010 |
20100095064 | Pattern Matching Technique - A method, system and program are disclosed for accelerating data storage in a cache appliance that transparently monitors NFS and CIFS traffic between clients and NAS subsystems and caches files in a cache memory. A perfect hashing memory index technique is used to rapidly detect predetermined patterns in received packet payloads and to retrieve matching patterns from memory by generating a data structure pointer and index offset that directly address the pattern in the datagram memory, thereby accelerating evaluation of the packet with the matching pattern by the host processor. | 04-15-2010 |
20100100680 | STORAGE APPARATUS AND CACHE CONTROL METHOD - The object of the present invention is to provide a storage apparatus capable of optimizing the cache-resident area, in a case where cache residence control in units of LUs is applied to a storage apparatus that virtualizes capacity, by acquiring only a cache area of the same size as the physical capacity assigned to the LU. In the storage apparatus, an LU that is to be resident in the cache memory is configured as a set of pages obtained by dividing a pool volume, a physical space created by using a plurality of storage devices, into a predetermined size. When such an LU is created, a capacity corresponding to the size of the LU is not acquired in the cache memory up front; instead, each time a new page is allocated, a cache capacity equal to the physical capacity allocated to that page is acquired in the cache memory, and the new page is made resident in the cache memory. | 04-22-2010 |
20100106910 | CACHE MEMORY AND METHOD OF CONTROLLING THE SAME - It is an object of the present invention to reduce output of a WAIT signal, issued to maintain data consistency, when there is no subsequent memory access in the case of a miss hit in a cache memory having a multi-stage pipeline structure, so that subsequent memory accesses are processed effectively. A cache memory according to the present invention performs update processing of a tag memory and a data memory and decides whether or not there is a subsequent memory access when a hit decision unit decides that an input address is a miss hit. Upon deciding that there is a subsequent memory access, a controller outputs a WAIT signal to the processor to generate a pipeline stall in the pipeline processing of the processor; upon deciding that there is no subsequent memory access, the controller does not output a WAIT signal. | 04-29-2010 |
20100115202 | METHODS AND SYSTEMS FOR MICROCODE PATCHING - Methods and systems for performing microcode patching are presented. In one embodiment, a data processing system comprises a cache memory and a processor. The cache memory comprises a plurality of cache sections. The processor sequesters one or more cache sections of the cache memory and stores processor microcode therein. In one embodiment, the processor executes the microcode in the one or more cache sections. | 05-06-2010 |
20100115203 | MUTABLE OBJECT CACHING - In one embodiment, a method for caching mutable objects comprises adding to a cache a first cache entry that includes a first object and a first key, assigning a unique identification to the first object, and adding an entry to an instance map for the first object. The entry includes the unique identification and the first object. A data structure is created that represents the first object and includes information relevant to the current state of the first object. A second cache entry is then added to the cache; it includes the data structure and the unique identification. The first cache entry is updated to replace the first object with the unique identification. | 05-06-2010 |
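The indirection in the mutable-object-caching entry above — a key mapping to a unique ID, an instance map from ID to live object, and a snapshot of the object's state stored alongside — can be sketched as follows. The class name, the use of `vars()` as the state snapshot, and the `is_stale` check are illustrative assumptions, not the patent's API.

```python
import itertools

class MutableObjectCache:
    """Caches mutable objects indirectly via unique IDs and state snapshots."""
    _ids = itertools.count(1)

    def __init__(self):
        self.cache = {}         # key -> unique id; ("state", id) -> snapshot
        self.instances = {}     # instance map: unique id -> live object

    def put(self, key, obj):
        uid = next(self._ids)
        self.instances[uid] = obj                      # instance-map entry
        self.cache[key] = uid                          # first entry: key -> id
        self.cache[("state", uid)] = dict(vars(obj))   # second entry: snapshot

    def get(self, key):
        return self.instances[self.cache[key]]

    def is_stale(self, key):
        """True if the live object has mutated since it was cached."""
        uid = self.cache[key]
        return dict(vars(self.instances[uid])) != self.cache[("state", uid)]
```

Because the cache holds an ID rather than the object itself, the live object can keep mutating while the snapshot preserves the state it had when cached.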
20100131710 | METHOD AND APPARATUS FOR SHARING CONTENT BETWEEN PORTALS - A method and apparatus for enabling a first portal to receive and present or otherwise use content from a second portal. The first portal comprises an indication to a location within the second portal. During execution of the first portal, the indication, such as a shortcut, is parsed, a connection between the first portal and the second portal is created, and requests and responses related to the content are exchanged between the first and the second portal. The shortcuts enable loose coupling between the portals and avoid the need for managing multiple versions of the component providing the data, or tight coupling. In addition, the method and apparatus enable the execution of a non-executable unit of the second portal from an environment of the first portal. The method and apparatus can be used in a transitive manner, such that a first portal will use content from a second portal, which in turn uses content from a third portal. | 05-27-2010 |
20100131711 | SERIAL INTERFACE CACHE CONTROLLER, CONTROL METHOD AND MICRO-CONTROLLER SYSTEM USING THE SAME - A serial interface cache controller, control method and micro-controller system using the same. The controller includes L rows of address tags, wherein each row of address tags includes an M-bits block tag and an N-bits valid area tag. The M-bits block tag records an address block of T-byte data stored in an internal cache memory, and the N-bits valid area tag records valid bit sectors in the address block. Each valid bit sector has the size of T/N bytes. The controller needs to read only T/N bytes of data from an external memory to the internal cache memory each time, without reading the T-byte data of the whole address block. Because the micro-controller does not need to read the T-byte data of the whole address block, the waiting time of the micro-controller may be shortened and performance can be increased. | 05-27-2010 |
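The block-tag-plus-valid-bitmap scheme in the entry above can be sketched with a cache line that fills only the missing T/N-byte sector on a miss, instead of the whole T-byte block. The values of T and N, the `SectorCacheLine` shape, and the `external_read` callback are illustrative assumptions.

```python
T = 64   # assumed bytes per address block
N = 8    # assumed valid sectors per block, so each fill is T // N = 8 bytes

class SectorCacheLine:
    def __init__(self, block_tag):
        self.block_tag = block_tag     # the M-bits block tag
        self.valid = 0                 # the N-bits valid area tag (bitmap)
        self.data = bytearray(T)

def read_byte(line, addr, external_read):
    """Serve one byte, filling only the missing T/N-byte sector on a miss."""
    assert addr // T == line.block_tag, "address is outside this block"
    offset = addr % T
    sector = offset // (T // N)
    if not (line.valid >> sector) & 1:
        start = line.block_tag * T + sector * (T // N)
        line.data[sector * (T // N):(sector + 1) * (T // N)] = external_read(start, T // N)
        line.valid |= 1 << sector      # mark only this sector valid
    return line.data[offset]
```

Over a slow serial interface this matters: a hit in an already-valid sector costs no external transfer, and a miss transfers only T/N bytes rather than the full block.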
20100153644 | ON DEMAND JAVA APPLICATION MANAGER - A system, method and program product for providing an on demand Java application manager. A system is provided that includes: a bootstrap system for setting up a cache within a local storage and pointing to at least one application at a network resource; a class loader that loads class files for a selected application into the JVM in an on demand fashion, wherein the class loader searches for a requested class file initially in the cache and if not present downloads the requested class file from the network resource to the cache; and a disk management system that manages storage space in the cache, wherein the disk management system includes a facility for discarding class files from the cache. | 06-17-2010 |
20100153645 | Cache control apparatus and method - A cache control apparatus and method are provided. The cache control apparatus may include a parameter input unit to receive a first parameter corresponding to a block-level cache in a main memory, a cache index extraction unit to extract a cache index from the first parameter, a cache tag extraction unit to extract a cache tag from the first parameter, and a comparison unit to determine whether a cache hit occurs using the cache index and the cache tag. | 06-17-2010 |
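The index and tag extraction in the cache-control entry above is the standard bit-field split of an address parameter. The field widths below are illustrative assumptions (the patent does not specify a geometry), and the dict-based `tag_array` stands in for the hardware tag store.

```python
OFFSET_BITS = 6    # assumed 64-byte cache blocks
INDEX_BITS = 7     # assumed 128 sets

def split_address(addr):
    """Return the (tag, index) fields of a block-level cache address."""
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    tag = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index

def is_hit(tag_array, addr):
    """A cache hit occurs when the tag stored at the index matches."""
    tag, index = split_address(addr)
    return tag_array.get(index) == tag
```

The comparison unit's job reduces to this: use the index to select a set, then compare the stored tag against the tag extracted from the request.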
20100161901 | Correction of incorrect cache accesses - The application describes a data processor operable to process data, and comprising: a cache in which a storage location of a data item within said cache is identified by an address, said cache comprising a plurality of storage locations and said data processor comprising a cache directory operable to store a physical address indicator for each storage location comprising stored data; a hash value generator operable to generate a generated hash value from at least some of said bits of said address, said generated hash value having fewer bits than said address; a buffer operable to store a plurality of hash values relating to said plurality of storage locations within said cache; wherein in response to a request to access said data item said data processor is operable to compare said generated hash value with at least some of said plurality of hash values stored within said buffer and, in response to a match, to indicate an indicated storage location of said data item; and said data processor is operable to access one of said physical address indicators stored within said cache directory corresponding to said indicated storage location and, in response to said accessed physical address indicator not indicating said address, said data processor is operable to invalidate said indicated storage location within said cache. | 06-24-2010 |
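The mechanism in the entry above — predict a storage location by comparing short hashes, then verify the prediction against the full physical address in the cache directory, invalidating the location on a mismatch — can be sketched as follows. The XOR-fold hash, the dict-based structures, and the `store`/`lookup` API are illustrative assumptions.

```python
HASH_BITS = 8

def short_hash(addr):
    """Fold an address into a value with fewer bits than the address."""
    h = 0
    while addr:
        h ^= addr & ((1 << HASH_BITS) - 1)
        addr >>= HASH_BITS
    return h

class HashPredictedCache:
    def __init__(self):
        self.hashes = {}      # buffer: storage location -> short hash
        self.directory = {}   # cache directory: location -> physical address
        self.data = {}        # the cached data items

    def store(self, location, addr, value):
        self.hashes[location] = short_hash(addr)
        self.directory[location] = addr
        self.data[location] = value

    def lookup(self, addr):
        """Predict a location by hash match, then verify the full address."""
        h = short_hash(addr)
        for location, stored_hash in list(self.hashes.items()):
            if stored_hash == h:
                if self.directory[location] == addr:
                    return self.data[location]          # confirmed hit
                # The short hashes matched but the full address did not:
                # an incorrect access, so invalidate the indicated location.
                del self.hashes[location], self.directory[location], self.data[location]
                return None
        return None
```

The short hash makes the common-case comparison cheap; the directory check catches the occasional collision and corrects it by invalidation rather than returning wrong data.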
20100161902 | METHOD, SYSTEM, AND PROGRAM FOR AN ADAPTOR TO READ AND WRITE TO SYSTEM MEMORY - Provided are a method, system, and program for an adaptor to read and write to system memory. A plurality of blocks of data to write to storage are received at an adaptor. The blocks of data are added to a buffer in the adaptor. A determination is made of pages in a memory device and I/O requests are generated to write the blocks in the buffer to the determined pages, wherein two I/O requests are generated to write to one block split between two pages in the memory device. The adaptor executes the generated I/O requests to write the blocks in the buffer to the determined pages in the memory device. | 06-24-2010 |
20100161903 | APPARATUS, METHOD, COMPUTER PROGRAM AND MOBILE TERMINAL FOR PROCESSING INFORMATION - An apparatus for processing information, includes a memory storing a plurality of content items different in type and metadata containing time information of the content items, a cache processor for fetching from the memory the content item and the metadata of the content item to be displayed on a display and storing the fetched content item and the metadata thereof on a cache memory, a display controller for displaying on the display the metadata of the content items from the cache memory arranged in accordance with the time information and a selection operator selecting metadata corresponding to a content item desired to be processed, out of the metadata displayed, and a content processor for fetching from the cache memory a content item corresponding to the metadata selected by the selection operator by referencing the cache memory in response to the selected metadata, and for performing a process responsive to the fetched content item. | 06-24-2010 |
20100174867 | USING DIFFERENT ALGORITHMS TO DESTAGE DIFFERENT TYPES OF DATA FROM CACHE - Provided are a method, system, and article of manufacture for using different algorithms to destage different types of data from cache. A first destaging algorithm is used to destage a first type of data to a storage for a first duration. A second destaging algorithm is used to destage a second type of data to the storage for a second duration. | 07-08-2010 |
20100191909 | Administering Registered Virtual Addresses In A Hybrid Computing Environment Including Maintaining A Cache Of Ranges Of Currently Registered Virtual Addresses - Administering registered virtual addresses in a hybrid computing environment that includes a host computer, an accelerator, the accelerator architecture optimized, with respect to the host computer architecture, for speed of execution of a particular class of computing functions, the host computer and the accelerator adapted to one another for data communications by a system level message passing module, where administering registered virtual addresses includes maintaining a cache of ranges of currently registered virtual addresses, the cache including entries associating a range of currently registered virtual addresses, a handle representing physical addresses mapped to the range of currently registered virtual addresses, and a counter; determining whether to register ranges of virtual addresses in dependence upon the cache of ranges of currently registered virtual addresses; and determining whether to deregister ranges of virtual addresses in dependence upon the cache of ranges of currently registered virtual addresses. | 07-29-2010 |
20100191910 | APPARATUS AND CIRCUITRY FOR MEMORY-BASED COLLECTION AND VERIFICATION OF DATA INTEGRITY INFORMATION - Apparatus and circuitry are provided for supporting collection and/or verification of data integrity information. A circuitry in a storage controller is provided for creating and/or verifying a Data Integrity Block (“DIB”). The circuitry comprises a processor interface for coupling with the processor of the storage controller. The circuitry also comprises a memory interface for coupling with a cache memory of the storage controller. By reading a plurality of Data Integrity Fields (“DIFs”) from the cache memory through the memory interface based on information received from the processor, the DIB is created in that each DIF in the DIB corresponds to a respective data block. | 07-29-2010 |
20100191911 | System-On-A-Chip Having an Array of Programmable Processing Elements Linked By an On-Chip Network with Distributed On-Chip Shared Memory and External Shared Memory - An integrated circuit having an array of programmable processing elements and a memory interface linked by an on-chip communication network. Each processing element includes a plurality of processing cores and a local memory. The memory interface block is operably coupled to external memory and to the on-chip communication network. The memory interface supports accessing the external memory in response to messages communicated from the processing elements of the array over the on-chip communication network. A portion of the local memory for a plurality of the processing elements of the array as well as a portion of the external memory are both allocated to store data shared by a plurality of processing elements of the array during execution of programmed operations distributed thereon. | 07-29-2010 |
20100199043 | METHODS AND MECHANISMS FOR PROACTIVE MEMORY MANAGEMENT - A proactive, resilient and self-tuning memory management system and method that result in actual and perceived performance improvements in memory management, by loading and maintaining data that is likely to be needed into memory, before the data is actually needed. The system includes mechanisms directed towards historical memory usage monitoring, memory usage analysis, refreshing memory with highly-valued (e.g., highly utilized) pages, I/O pre-fetching efficiency, and aggressive disk management. Based on the memory usage information, pages are prioritized with relative values, and mechanisms work to pre-fetch and/or maintain the more valuable pages in memory. Pages are pre-fetched and maintained in a prioritized standby page set that includes a number of subsets, by which more valuable pages remain in memory over less valuable pages. Valuable data that is paged out may be automatically brought back, in a resilient manner. Benefits include significantly reducing or even eliminating disk I/O due to memory page faults. | 08-05-2010 |
20100205375 | METHOD, APPARATUS, AND SYSTEM OF FORWARD CACHING FOR A MANAGED CLIENT - A method, apparatus, and system are disclosed of forward caching for a managed client. A storage module stores a software image on a storage device of a backend server. The backend server provides virtual disk storage on the storage device through a first intermediate network point for a plurality of diskless data processing devices. Each diskless data processing device communicates directly with the first intermediate network point. The storage module caches an image instance of the software image at the first intermediate network point. A tracking module detects an update to the software image on the storage device. The storage module copies the updated software image to the first intermediate network point as an updated image instance. | 08-12-2010 |
20100211741 | Shared Composite Data Representations and Interfaces - Embodiments described herein provide information management features and functionality that can be used to manage information of distinct information sources, but are not so limited. In an embodiment, a computing environment includes a client that can be used to access data from distinct sources and generate a data composition representing aspects of accessed and other data and/or relationships of the distinct sources. In one embodiment, a client can include data composition and conflict resolution presentation features that can be used to manage one or more data compositions and/or source interrelationships. Other embodiments are available. | 08-19-2010 |
20100217934 | METHOD, APPARATUS AND SYSTEM FOR OPTIMIZING IMAGE RENDERING ON AN ELECTRONIC DEVICE - Portable electronic devices typically have reduced computing resources, including reduced screen size. The method, apparatus and system of the present specification provide, amongst other things, an intermediation server configured to access network content that is requested by a portable electronic device and to analyze that content, including the images it contains. The intermediation server is further configured to accommodate the computing resources of the portable electronic device as part of fulfilling content requests from the portable electronic device. | 08-26-2010 |
20100217935 | SYSTEM ON CHIP AND ELECTRONIC SYSTEM HAVING THE SAME - An electronic system includes a system on chip (SOC). The SOC includes at least one internal memory that operates selectively as a cache memory or a tightly-coupled memory (TCM). The SOC may include a microprocessor, an internal memory, and a selecting circuit. The selecting circuit may be configured to set the internal memory to one of a TCM mode or a cache memory mode in response to a memory selecting signal. | 08-26-2010 |
20100217936 | SYSTEMS AND METHODS FOR PROCESSING ACCESS CONTROL LISTS (ACLS) IN NETWORK SWITCHES USING REGULAR EXPRESSION MATCHING LOGIC - A network node, such as an Ethernet switch, is configured to monitor packet traffic using regular expressions corresponding to Access Control List (ACL) rules. In one embodiment, the regular expressions are expressed in the form of a state machine. In one embodiment, as packets are passed through the network node, an access control module accesses the packets and traverses the state machine according to certain qualification content of the packets in order to determine if respective packets should be permitted to pass through the network switch. | 08-26-2010 |
20100228917 | DEVICE MANAGEMENT APPARATUS, DEVICE INITIALIZATION METHOD, AND DEVICE SYSTEM - A device management apparatus that executes initialization processing on a device that stores user data includes a first initialization processing section for executing a first initialization processing in which the progress status of the initialization is notified to another device management apparatus every time initialization equivalent to one processing unit is executed on the device, a second initialization processing section for executing a second initialization processing in which the progress status of the initialization is notified to the other device management apparatus every time initialization for a predetermined number of processing units is executed on the device, a monitoring unit for monitoring the status of access to the device and the operation state of the device, and a changeover section for switching between the first initialization processing and the second initialization processing based on the monitoring result. | 09-09-2010 |
20100228918 | CONFIGURABLE LOGIC INTEGRATED CIRCUIT HAVING A MULTIDIMENSIONAL STRUCTURE OF CONFIGURABLE ELEMENTS - Programming of modules which can be reprogrammed during operation is described. Partitioning of code sequences is also described. | 09-09-2010 |
20100241804 | METHOD AND SYSTEM FOR FAST RETRIEVAL OF RANDOM UNIVERSALLY UNIQUE IDENTIFIERS THROUGH CACHING AND PARALLEL REGENERATION - In general, the invention relates to a system that includes a UUID cache and a UUID caching mechanism. The UUID caching mechanism is configured to, using a first thread, monitor the number of UUIDs stored in the UUID cache, determine that the number of UUIDs stored in the UUID cache is less than a first threshold, request a first set of UUIDs from a UUID generator, receive the first set of UUIDs from the UUID generator, and store the first set of UUIDs received from the UUID generator in the UUID cache. The UUID caching mechanism is further configured to provide a second set of UUIDs to a first application using a second thread, where at least one of the UUIDs in the second set of UUIDs is from the first set of UUIDs, and where the first thread and the second thread execute concurrently. | 09-23-2010 |
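The two-thread refill scheme described in the abstract above can be sketched in Python; the class name, threshold, and batch size are illustrative, and `uuid.uuid4` stands in for the external UUID generator:

```python
import threading
import uuid
from collections import deque

class UUIDCache:
    """Sketch of a threshold-refilled UUID cache (names/values illustrative)."""

    def __init__(self, threshold=8, batch_size=16):
        self.threshold = threshold
        self.batch_size = batch_size
        self._cache = deque()
        self._lock = threading.Lock()

    def _generate_batch(self):
        # Stand-in for requesting a set of UUIDs from the UUID generator.
        return [uuid.uuid4() for _ in range(self.batch_size)]

    def refill_if_low(self):
        # First thread: monitor the cache and refill below the threshold.
        with self._lock:
            if len(self._cache) < self.threshold:
                self._cache.extend(self._generate_batch())

    def take(self, n):
        # Second thread: provide a set of UUIDs to an application.
        with self._lock:
            while len(self._cache) < n:
                self._cache.extend(self._generate_batch())
            return [self._cache.popleft() for _ in range(n)]
```

A monitoring thread would call `refill_if_low` periodically while application threads call `take`, so serving requests rarely waits on generation.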
20100241805 | IMAGE FORMING APPARATUS, AND CONTROL METHOD AND PROGRAM THEREOF - An object attribute is determined for an object, and a determination is made as to whether to execute image cache processing in accordance with the object attribute. By switching processing accordingly, execution of time-consuming image specifying processing is kept to the necessary minimum and performance reductions can be avoided. Furthermore, cache registration is avoided for images having low reusability, which improves cache usage efficiency and cache search efficiency, thereby enabling performance to be improved. | 09-23-2010 |
20100241806 | DATA BACKUP METHOD AND INFORMATION PROCESSING APPARATUS - An information processing apparatus includes, a first storage unit, a second storage unit in which data stored in the first storage unit is backed up, and a memory controller that controls data backup operation. The memory controller divides a transfer source storage area into portions, and provides two transfer destination areas, each of the two transfer destination areas being divided into portions, backs up data in a direction from a beginning address of each divided area of the transfer source storage area to an end address thereof in one of the transfer destination areas provided for each divided area of the transfer source storage area, and backs up data in a direction from the end address of each divided area of the transfer source storage area to the beginning address thereof in the other transfer destination storage area. | 09-23-2010 |
20100241807 | VIRTUALIZED DATA STORAGE SYSTEM CACHE MANAGEMENT - Virtual storage arrays consolidate branch data storage at data centers connected via wide area networks. Virtual storage arrays appear to storage clients as local data storage; however, virtual storage arrays actually store data at the data center. The virtual storage arrays overcome bandwidth and latency limitations of the wide area network by predicting and prefetching storage blocks, which are then cached at the branch location. Virtual storage arrays leverage an understanding of the semantics and structure of high-level data structures associated with storage blocks to predict which storage blocks are likely to be requested by a storage client in the near future. Virtual storage arrays determine the association between requested storage blocks and corresponding high-level data structure entities to predict additional high-level data structure entities that are likely to be accessed. From this, the virtual storage array identifies the additional storage blocks for prefetching. | 09-23-2010 |
20100250850 | PROCESSOR AND METHOD FOR EXECUTING LOAD OPERATION AND STORE OPERATION THEREOF - A processor and a method for executing load operation and store operation thereof are provided. The processor includes a data cache and a store buffer. When executing a store operation, if the address of the store operation is the same as the address of an existing entry in the store buffer, the data of the store operation is merged into the existing entry. When executing a load operation, if there is a memory dependency between an existing entry in the store buffer and the load operation, and the existing entry includes the complete data required by the load operation, the complete data is provided by the existing entry alone. If the existing entry does not include the complete data, the complete data is generated by assembling the existing entry and a corresponding entry in the data cache. | 09-30-2010 |
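The store-to-load forwarding logic in the abstract above can be sketched at byte granularity in Python; `StoreBuffer` and its line-addressed backing dictionary are illustrative names for this sketch, not the patented hardware design:

```python
class StoreBuffer:
    """Byte-granular store buffer sketch (illustrative, not a real pipeline)."""

    def __init__(self, data_cache):
        self.entries = {}             # line address -> {byte offset: byte value}
        self.data_cache = data_cache  # backing store: line address -> bytes

    def store(self, line, offset, data):
        # Same line address as an existing entry: merge the new bytes into it.
        entry = self.entries.setdefault(line, {})
        for i, b in enumerate(data):
            entry[offset + i] = b

    def load(self, line, offset, size):
        entry = self.entries.get(line, {})
        needed = range(offset, offset + size)
        if all(o in entry for o in needed):
            # The store-buffer entry alone holds the complete data: forward it.
            return bytes(entry[o] for o in needed)
        # Otherwise assemble the data from the entry and the cached line.
        cache_line = self.data_cache[line]
        return bytes(entry.get(o, cache_line[o]) for o in needed)
```

A load fully covered by a prior store is served from the buffer; a partially covered load is assembled by overlaying store-buffer bytes on the data-cache line.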
20100250851 | MULTI-PROCESSOR SYSTEM DEVICE AND METHOD DECLARING AND USING VARIABLES - A method of declaring and using variables includes; determining whether variables are independent variables or common variables, declaring and storing the independent variables in a plurality of data structures respectively corresponding to the plurality of processors, declaring and storing the common variables in a shared memory area, allowing each one of the plurality of processors to simultaneously use the independent variables in a corresponding one of the plurality of data structures, and allowing only one of the plurality of processors at a time to use the common variables in the shared memory area. | 09-30-2010 |
20100250852 | USER TERMINAL APPARATUS AND CONTROL METHOD THEREOF, AS WELL AS PROGRAM - A user terminal apparatus and a control method therefor constitute part of a thin client system that transfers data to a file server and stores the data therein. The system aggregates user data in the file server by controlling writing into a secondary storage device of the user terminal and controlling writing out to an external storage medium, to prevent loss and leakage of confidential information. | 09-30-2010 |
20100262777 | STORAGE APPARATUS AND METHOD FOR ELIMINATING REDUNDANT DATA STORAGE USING STORAGE APPARATUS - A storage apparatus | 10-14-2010 |
20100262778 | Empirically Based Dynamic Control of Transmission of Victim Cache Lateral Castouts - In response to a data request, a victim cache line is selected for castout from a lower level cache, and a target lower level cache of one of the plurality of processing units is selected. A determination is made whether the selected target lower level cache has provided more than a threshold number of retry responses to lateral castout (LCO) commands of the first lower level cache, and if so, a different target lower level cache is selected. The first processing unit thereafter issues a LCO command on the interconnect fabric. The LCO command identifies the victim cache line to be castout and indicates that the target lower level cache is an intended destination of the victim cache line. In response to a successful coherence response to the LCO command, the victim cache line is removed from the first lower level cache and held in the second lower level cache. | 10-14-2010 |
20100262779 | Program And Data Annotation For Hardware Customization And Energy Optimization - Technologies are generally described herein for supporting program and data annotation for hardware customization and energy optimization. A code block to be annotated may be examined and a hardware customization may be determined to support a specified quality of service level for executing the code block with reduced energy expenditure. Annotations may be determined as associated with the determined hardware customization. An annotation may be provided to indicate using the hardware customization while executing the code block. Examining the code block may include one or more of performing a symbolic analysis, performing an empirical observation of an execution of the code block, performing a statistical analysis, or any combination thereof. A data block to be annotated may also be examined. One or more additional annotations to be associated with the data block may be determined. | 10-14-2010 |
20100262780 | APPARATUS AND METHODS FOR RENDERING A PAGE - Aspects relate to apparatus and methods for rendering a page on a computing device, such as a web page. The apparatus and methods include receiving a request for a requested instance of a page and determining if the requested instance of the page corresponds to a document object model (DOM) for the page stored in a memory. Further, the apparatus and methods include retrieving a dynamic portion of the DOM corresponding to the requested instance if the requested instance of the page corresponds to the DOM stored in the memory. The dynamic portion may be unique to the requested instance of the page. Moreover, the apparatus and methods include storing the dynamic portion of the DOM corresponding to the requested instance of the page in a relationship with the static portion of the DOM. | 10-14-2010 |
20100274969 | ACTIVE-ACTIVE SUPPORT OF VIRTUAL STORAGE MANAGEMENT IN A STORAGE AREA NETWORK ("SAN") - Methods and apparatuses are provided for active-active support of virtual storage management in a storage area network (“SAN”). When a storage manager (that manages virtual storage volumes) of the SAN receives data to be written to a virtual storage volume from a computer server, the storage manager determines whether the writing request may result in updating a mapping of the virtual storage volume to a storage system. When the writing request does not involve updating the mapping, which happens most of the time, the storage manager simply writes the data to the storage system based on the existing mapping. Otherwise, the storage manager sends an updating request to another storage manager for updating a mapping of the virtual storage volume to a storage volume. Subsequently, the storage manager writes the data to the corresponding storage system based on the mapping that has been updated by the another storage manager. | 10-28-2010 |
20100274970 | Robust Domain Name Resolution - A recursive DNS nameserver system and related domain name resolution techniques are disclosed. The DNS nameservers utilize a local cache of previously retrieved domain name resolution records to avoid recursive resolution processes and the attendant DNS requests. If a matching record is found with a valid (not expired) TTL field, the nameserver returns the cached domain name information to the client. If the TTL for the record in the cache has expired and the nameserver is unable to resolve the domain name information using DNS requests to authoritative servers, the recursive DNS nameserver returns to the cache and accesses the resource record having an expired TTL. The nameserver generates a DNS response to the client device that includes the domain name information from the cached resource record. In various embodiments, subscriber information is utilized to resolve the requested domain name information in accordance with user-defined preferences. | 10-28-2010 |
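The expired-TTL fallback described in this abstract can be sketched as follows, with an injected `resolve` callable standing in for recursive resolution against authoritative servers (all names in this sketch are illustrative):

```python
import time

class RecursiveCache:
    """Sketch of serve-stale resolution: answer from a valid cached record,
    otherwise resolve recursively, and fall back to an expired record when
    resolution fails."""

    def __init__(self, resolve):
        self.resolve = resolve   # callable: name -> (address, ttl), or raises
        self.records = {}        # name -> (address, expiry timestamp)

    def lookup(self, name, now=None):
        now = time.time() if now is None else now
        cached = self.records.get(name)
        if cached and cached[1] > now:
            return cached[0]                 # valid TTL: answer from cache
        try:
            address, ttl = self.resolve(name)  # recursive resolution
        except OSError:
            if cached:
                return cached[0]             # expired but usable record
            raise
        self.records[name] = (address, now + ttl)
        return address
```

The stale answer is only used when upstream resolution fails, matching the behavior the abstract describes.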
20100281216 | METHOD AND APPARATUS FOR DYNAMICALLY SWITCHING CACHE POLICIES - A method implements a cache-policy switching module in a storage system. The storage system includes a cache memory to cache storage data. The cache memory uses a first cache configuration. The cache-policy switching module emulates the caching of the storage data with a plurality of cache configurations. Upon a determination that one of the plurality of cache configurations performs better than the first cache configuration, the cache-policy switching module automatically applies the better performing cache configuration to the cache memory for caching the storage data. | 11-04-2010 |
20100281217 | SYSTEM AND METHOD FOR PERFORMING ENTITY TAG AND CACHE CONTROL OF A DYNAMICALLY GENERATED OBJECT NOT IDENTIFIED AS CACHEABLE IN A NETWORK - The present invention is directed towards a method and system for modifying by a cache responses from a server that do not identify a dynamically generated object as cacheable to identify the dynamically generated object to a client as cacheable in the response. In some embodiments, such as an embodiment handling HTTP requests and responses for objects, the techniques of the present invention insert an entity tag, or “etag” into the response to provide cache control for objects provided without entity tags and/or cache control information from an originating server. This technique of the present invention provides an increase in cache hit rates by inserting information, such as entity tag and cache control information for an object, in a response to a client to enable the cache to check for a hit in a subsequent request. | 11-04-2010 |
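The entity-tag insertion technique above can be sketched as a header-rewriting helper; the hash-derived ETag value and `max_age` policy are assumptions for illustration, not the patent's specified method:

```python
import hashlib

def add_cache_validators(headers, body, max_age=60):
    """If an origin response lacks an ETag or Cache-Control header,
    synthesize them so subsequent conditional requests can hit the
    intermediary's cache. Policy values here are illustrative."""
    headers = dict(headers)  # do not mutate the caller's mapping
    if "ETag" not in headers:
        # Content-derived tag: identical bodies yield identical ETags.
        headers["ETag"] = '"%s"' % hashlib.sha256(body).hexdigest()[:16]
    if "Cache-Control" not in headers:
        headers["Cache-Control"] = "max-age=%d" % max_age
    return headers
```

Responses that already carry validators pass through unchanged, so the origin server's own cache policy always wins.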
20100293330 | DISPLAYING TRANSITION IMAGES DURING A SLIDE TRANSITION - One or more transition images are displayed during a transition period between a display of slides within a presentation. The displayed transition images include images of different slides that are contained within the presentation. The transition images provide the audience with a glimpse of slides that are displayed within the presentation. For example, the transition images may include images from previous and future slides that are contained within the presentation. The transition images may also be cached in order to more efficiently display the transition images during the transition period. | 11-18-2010 |
20100299479 | OBSCURING MEMORY ACCESS PATTERNS - A method comprises: for each memory location in a set of memory locations associated with a thread, setting an indication associated with the memory location to request a signal if data from the memory location is evicted from a cache; and, in response to the signal, reloading the set of memory locations into the cache. | 11-25-2010 |
20100299480 | Method And System Of Executing Stack-based Memory Reference Code - A method and system of executing stack-based memory reference code. At least some of the illustrated embodiments are methods comprising waking a computer system from a reduced power operational state in which a memory controller loses at least some configuration information, executing memory reference code that utilizes a stack (wherein the memory reference code configures the main memory controller), and passing control of the computer system to an operating system. The time between executing a first instruction after waking the computer system and passing control to the operating system takes less than 200 milliseconds. | 11-25-2010 |
20100306469 | PROCESSING METHOD AND APPARATUS - A processing apparatus externally receives a processing request and executes the requested processing. The processing apparatus transmits the result of the processing to the processing request source if the connection to the processing request source is maintained until the requested processing is executed. The processing apparatus stores the result of the processing in a memory if the connection to the processing request source is disconnected before the end of the requested processing. If, when a processing request is received, the requested processing has already been executed and its result is stored in the memory, the processing apparatus transmits the stored processing result to the processing request source. | 12-02-2010 |
20100318740 | Method and System for Storing Real Time Values - A dynamic data cache module is inserted between the archiving subsystem (e.g. the relational database writing API) and the tag data flow from the acquisition server. Client data requests must then always be routed through the dynamic data cache module. The dynamic data cache module is able to manage tag data that is not only coming from real-time acquisition (i.e. keeping the last n values of tag data in the cache) but also data in a different time span. For this usage, the cache will be size-limited and a least recently used (LRU) algorithm may be used to free up space when needed. | 12-16-2010 |
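The size-limited, LRU-evicted cache mentioned at the end of this abstract can be sketched with an ordered mapping; the capacity and key shape are illustrative:

```python
from collections import OrderedDict

class TagDataCache:
    """Size-limited cache with least-recently-used eviction, as suggested
    for tag data spanning arbitrary time ranges (capacity illustrative)."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self._data = OrderedDict()       # (tag, time span) -> values

    def get(self, key):
        if key not in self._data:
            return None                  # miss: caller fetches from archive
        self._data.move_to_end(key)      # mark as most recently used
        return self._data[key]

    def put(self, key, values):
        self._data[key] = values
        self._data.move_to_end(key)
        while len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```

Real-time values for a tag and historical query results share the same structure; only the eviction policy bounds total size.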
20100325357 | SYSTEMS AND METHODS FOR INTEGRATION BETWEEN APPLICATION FIREWALL AND CACHING - The present invention is directed towards systems and methods for integrating cache managing and application firewall processing in a networked system. In various embodiments, an integrated cache/firewall system comprises an application firewall operating in conjunction with a cache managing system in operation on an intermediary device. In various embodiments, the application firewall processes a received HTTP response to a request by a networked entity serviced by the intermediary device. The application firewall generates metadata from the HTTP response and stores the metadata in cache with the HTTP response. When a subsequent request hits in the cache, the metadata is identified to a user session associated with the subsequent request. In various embodiments, the application firewall can modify a cache-control header of the received HTTP response, and can alter the cookie-setting header of the cached HTTP response. The system and methods can significantly reduce processing time associated with application firewall processing of web content exchanged over a network. | 12-23-2010 |
20100332753 | WAIT LOSS SYNCHRONIZATION - Synchronizing threads on loss of memory access monitoring. Using a processor level instruction included as part of an instruction set architecture for a processor, a read or write monitor (to detect writes, or reads or writes respectively, from other agents) is set on a first set of one or more memory locations, and a read or write monitor is set on a second set of one or more different memory locations. A processor level instruction is executed, which causes the processor to suspend executing instructions and optionally to enter a low power mode pending loss of a read or write monitor for the first or second set of one or more memory locations. A conflicting access on the first or second set of one or more memory locations, or a timeout, is detected. As a result, the method includes resuming execution of instructions. | 12-30-2010 |
20100332754 | System and Method for Caching Multimedia Data - Systems and methods are provided for caching media data to enhance media data read and/or write functionality and performance. A multimedia apparatus comprises a cache buffer configured to be coupled to a storage device, wherein the cache buffer stores multimedia data, including video and audio data, read from the storage device. A cache manager is coupled to the cache buffer and is configured to cause the storage device to enter a reduced power consumption mode when the amount of data stored in the cache buffer reaches a first level. | 12-30-2010 |
20110010499 | STORAGE SYSTEM, METHOD OF CONTROLLING STORAGE SYSTEM, AND METHOD OF CONTROLLING CONTROL APPARATUS - A storage system including a storage has a first power supplier for supplying electric power, a second power supplier for supplying electric power when the first power supplier is not supplying electric power to the storage system, a cache memory for storing data sent from a host, a non-volatile memory for storing the data stored in the cache memory, and a controller for writing the data stored in the cache memory into the non-volatile memory while the second power supplier is supplying electric power to the storage system, and, when the first power supplier restores electric power to the storage system, for stopping the writing and deleting data stored in the non-volatile memory until the free space of the non-volatile memory is not less than the volume of the data stored in the cache memory. | 01-13-2011 |
20110010500 | Novel Context Instruction Cache Architecture for a Digital Signal Processor - Improved thrashing-aware and self-configuring cache architectures reduce cache thrashing without increasing cache size or degrading cache hit access time for a DSP. In one example embodiment, this is accomplished by selectively caching only the instructions having a higher probability of recurrence, to considerably reduce cache thrashing. | 01-13-2011 |
20110022798 | METHOD AND SYSTEM FOR CACHING TERMINOLOGY DATA - A method for caching terminology data, including steps of: receiving a terminology request; determining that the terminology request is related to at least one uncached terminology concept; retrieving a complete concept set of the terminology concept as a cache unit, wherein the complete concept set includes the terminology concept, all other terminology concepts which are directly correlated or indirectly correlated through a non-transitive relationship to the terminology concept, properties of each terminology concept, and the non-transitive relationship between each terminology concept; retrieving transitive relationship information for the complete concept set, the transitive relationship information at least including identifiers of terminology concepts which are correlated through the transitive relationship to each terminology concept in the complete concept set; and caching the cache unit and the transitive relationship information of the cache unit. A corresponding device caches terminology data. | 01-27-2011 |
20110022799 | METHOD TO SPEED UP ACCESS TO AN EXTERNAL STORAGE DEVICE AND AN EXTERNAL STORAGE SYSTEM - A method to speed up access to an external storage device comprises the steps of: | 01-27-2011 |
20110022800 | SYSTEM AND A METHOD FOR SELECTING A CACHE WAY - A method for selecting a cache way, the method includes: selecting an initially selected cache way out of multiple cache ways of a cache module for receiving a data unit; the method being characterized by including: searching, if the initially selected cache way is locked, for an unlocked cache way, out of at least one group of cache ways that are located at predefined offsets from the first cache way. | 01-27-2011 |
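The offset-probing way selection above can be sketched as follows; the offsets `(1, 2, 4)` are an assumed example of the "predefined offsets", not values taken from the patent:

```python
def select_way(locked, initial, offsets=(1, 2, 4)):
    """Pick a cache way for an incoming data unit: use the initially
    selected way unless it is locked; otherwise probe ways at predefined
    offsets from it. Returns a way index, or None if every candidate is
    locked. `locked` is one boolean per way; offsets are illustrative."""
    ways = len(locked)
    if not locked[initial]:
        return initial
    for off in offsets:
        candidate = (initial + off) % ways   # wrap around the way set
        if not locked[candidate]:
            return candidate
    return None
```

Probing a fixed, small group of offsets bounds the search cost regardless of how many ways the cache module has.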
20110029734 | Controller Integration - Roughly described, a data processing system comprises a central processing unit and a split network interface functionality, the split network interface functionality comprising: a first sub-unit collocated with the central processing unit and configured to at least partially form a series of network data packets for transmission to a network endpoint by generating data link layer information for each of those packets; and a second sub-unit external to the central processing unit and coupled to the central processing unit via an interconnect, the second sub-unit being configured to physically signal the series of network data packets over a network. | 02-03-2011 |
20110035550 | Sharing Memory Resources of Wireless Portable Electronic Devices - It is not uncommon for two or more wireless-enabled devices to spend most of their time in close proximity to one another. For example, a person may routinely carry a personal digital assistant (PDA) and a portable digital audio/video player, or a cellphone and a PDA, or a smartphone and a gaming device. When it is desirable to increase the memory storage capacity of a first such device, it may be possible to use memory on one or more of the other devices to temporarily store data from the first device. | 02-10-2011 |
20110040938 | ELECTRONIC APPARATUS AND METHOD OF CONTROLLING THE SAME - Disclosed are an electronic apparatus and a method of controlling the same, the electronic apparatus comprising: a nonvolatile memory unit in which an application is stored; a volatile memory unit in which data based on execution of the application is stored; and a controller which stops supplying power to the volatile memory unit when the electronic apparatus is turned off if a remaining capacity of the volatile memory unit reaches a threshold value for initializing the volatile memory unit, and keeps the power supplied to the volatile memory to make the volatile memory unit retain the data based on the execution of the application even when the electronic apparatus is turned off if the remaining capacity does not reach the threshold value. With this, a memory leak that may be generated when using a STR mode can be effectively prevented. | 02-17-2011 |
20110055479 | Thread Compensation For Microarchitectural Contention - A thread (or other resource consumer) is compensated for contention for system resources in a computer system having at least one processor core, a last level cache (LLC), and a main memory. In one embodiment, at each descheduling event of the thread following an execution interval, an effective CPU time is determined. The execution interval is a period of time during which the thread is being executed on the central processing unit (CPU) between scheduling events. The effective CPU time is a portion of the execution interval that excludes delays caused by contention for microarchitectural resources, such as time spent repopulating lines from the LLC that were evicted by other threads. The thread may be compensated for microarchitectural contention by increasing its scheduling priority based on the effective CPU time. | 03-03-2011 |
20110055480 | METHOD FOR PRELOADING CONFIGURATIONS OF A RECONFIGURABLE HETEROGENEOUS SYSTEM FOR INFORMATION PROCESSING INTO A MEMORY HIERARCHY - A method for preloading into a hierarchy of memories, bitstreams representing the configuration information for a reconfigurable processing system including several processing units. The method includes an off-execution step of determining tasks that can be executed on a processing unit subsequently to the execution of a given task. The method also includes, during execution of the given task, computing a priority for each of the tasks that can be executed. The priority depends on information relating to the current execution of the given task. The method also includes, during execution of the given task, sorting the tasks that can be executed in the order of their priorities. The method also includes, during execution of the given task, preloading into the memory, bitstreams representing the information of the configurations for the execution of the tasks that can be executed, while favoring the tasks whose priority is the highest. | 03-03-2011 |
20110055481 | CACHE MEMORY CONTROLLING APPARATUS - An apparatus for controlling a cache memory includes: a data receiving unit to receive a sensor ID and data detected by the sensor; an attribute information acquiring unit to acquire attribute information corresponding to the sensor ID from an attribute information memory, the attribute information memory storing the attribute information of the sensor mapped to the sensor ID; a sensor information memory to store information on a storage period, the sensor information memory including a cache memory storing the attribute information; and a cache memory control unit to acquire the attribute information from the attribute information acquiring unit when the attribute information is not stored in the cache memory, and to store the acquired attribute information corresponding to the sensor ID in the cache memory during the storage period. | 03-03-2011 |
20110066806 | System and method for memory bandwidth friendly sorting on multi-core architectures - In some embodiments, the invention involves utilizing a tree merge sort in a platform to minimize cache reads/writes when sorting large amounts of data. An embodiment uses blocks of pre-sorted data residing in “leaf nodes” residing in memory storage. A pre-sorted block of data from each leaf node is read from memory and stored in faster cache memory. A tree merge sort is performed on the nodes that are cache resident until a block of data migrates to a root node. Sorted blocks reaching the root node are written to memory storage in an output list until all pre-sorted data blocks have been moved to cache and merged upward to the root. The completed output list in memory storage is a list of the fully sorted data. Other embodiments are described and claimed. | 03-17-2011 |
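The leaf-to-root merge in the abstract above can be approximated in Python with repeated pairwise merges; this is a simplified sketch of the idea (each merge step touches only two streams, keeping the working set small), not the patented multi-core implementation:

```python
import heapq

def tree_merge_sort(leaf_nodes):
    """Pairwise (tree) merge of pre-sorted leaf blocks: merge nodes two at
    a time until a single sorted root list remains. Touching only two
    streams per step is what keeps the active data cache-resident."""
    level = [list(node) for node in leaf_nodes]
    while len(level) > 1:
        merged = []
        for i in range(0, len(level), 2):
            pair = level[i:i + 2]
            merged.append(list(heapq.merge(*pair)))  # 2-way merge step
        level = merged
    return level[0] if level else []
```

In the hardware-aware version, each merge would consume and emit fixed-size blocks so that only cache-sized units move between memory and the cores.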
20110066807 | Protection Against Cache Poisoning - A system protects computers against cache poisoning, including a cache-entity table configured to maintain a plurality of associations between a plurality of data caches and a plurality of entities, where each of the caches is associated with a different one of the entities, and a cache manager configured to receive data that is associated with any of the entities and store the received data in whichever of the caches the cache-entity table indicates is associated with that entity, and to receive a data request that is associated with any of the entities and retrieve the requested data from whichever of the caches the cache-entity table indicates is associated with the requesting entity, where either of the cache-entity table and cache manager may be implemented in computer hardware or in computer software embodied in a computer-readable medium. | 03-17-2011 |
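The per-entity cache isolation in this abstract can be sketched with a mapping from entity to a private cache; this is a minimal sketch with illustrative names:

```python
class EntityPartitionedCache:
    """Sketch of a cache-entity table: each entity gets its own cache, so
    data stored on behalf of one entity can never be served to another."""

    def __init__(self):
        self._caches = {}                # entity -> {key: value}

    def store(self, entity, key, value):
        # Store only into the cache associated with the data's entity.
        self._caches.setdefault(entity, {})[key] = value

    def retrieve(self, entity, key):
        # Consult only the requesting entity's own cache.
        return self._caches.get(entity, {}).get(key)
```

A poisoned record injected under one entity stays confined to that entity's partition instead of contaminating answers for everyone.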
20110066808 | Apparatus, System, and Method for Caching Data on a Solid-State Storage Device - An apparatus, system, and method are disclosed for caching data on a solid-state storage device. The solid-state storage device maintains metadata pertaining to cache operations performed on the solid-state storage device, as well as storage operations of the solid-state storage device. The metadata indicates what data in the cache is valid, as well as information about what data in the nonvolatile cache has been stored in a backing store. A backup engine works through units in the nonvolatile cache device and backs up the valid data to the backing store. During grooming operations, the groomer determines whether the data is valid and whether the data is discardable. Data that is both valid and discardable may be removed during the grooming operation. The groomer may also determine whether the data is cold in determining whether to remove the data from the cache device. The cache device may present to clients a logical space that is the same size as the backing store. The cache device may be transparent to the clients. | 03-17-2011 |
20110066809 | XML PROCESSING DEVICE, XML PROCESSING METHOD, AND XML PROCESSING PROGRAM - Provided is an XML processing device capable of describing, in a conventional XML processing language, how to process XML that is input asynchronously. The XML processing device converts XML input asynchronously from outside according to a predetermined rule and outputs the result. The device includes an XML conversion module that converts the input XML according to the rule, an output destination interpretation module that interprets an output destination described in the converted XML, and an output distribution module that outputs the XML to the destination interpreted by the output destination interpretation module. | 03-17-2011 |
20110072211 | Hardware For Parallel Command List Generation - A method for providing state inheritance across command lists in a multi-threaded processing environment. The method includes receiving an application program that includes a plurality of parallel threads; generating a command list for each thread of the plurality of parallel threads; causing a first command list associated with a first thread of the plurality of parallel threads to be executed by a processing unit; and causing a second command list associated with a second thread of the plurality of parallel threads to be executed by the processing unit, where the second command list inherits from the first command list state associated with the processing unit. | 03-24-2011 |
20110078376 | METHODS AND APPARATUS FOR OBTAINING INTEGRATED CONTENT FROM MULTIPLE NETWORKS - A method and apparatus for obtaining location content from multiple networks is disclosed. The method may comprises: obtaining coarse location content at a wireless communication device (WCD) from a first network using a first protocol, wherein the coarse location content includes information defining locations of geographic coverage regions for one or more second networks which use a second protocol, obtaining WCD location information, determining from the WCD location information and the coarse location content if the WCD is within the geographic coverage region of a second network, accessing the determined second network using the second protocol, receiving from the accessed second network fine location content, and generating an integrated location content item by combining the coarse location content with the fine location content. | 03-31-2011 |
20110078377 | SOCIAL NETWORKING UTILIZING A DISPERSED STORAGE NETWORK - Social networking data associated with at least one of a plurality of user devices is received at a dispersed storage processing unit. Dispersed storage metadata associated with the social networking data is generated. A full record and at least one partial record are generated based on the social networking data and further based on the dispersed storage metadata. The full record is stored in a dispersed storage network. The partial record is pushed to at least one other of the plurality of user devices via the data network. | 03-31-2011 |
20110078378 | METHOD FOR GENERATING PROGRAM AND METHOD FOR OPERATING SYSTEM - An information processing apparatus sequentially selects a function whose execution frequency is high as a selected function that is to be stored in an internal memory, in a source program having a hierarchy structure. The information processing apparatus allocates the selected function to a memory area of the internal memory, allocates a function that is not the selected function and is called from the selected function to an area close to the memory area of the internal memory, and generates an internal load module. The information processing apparatus allocates a remaining function to an external memory coupled to a processor and generates an external load module. Then, a program executed by the processor having the internal memory is generated. By allocating the function with a high execution frequency to the internal memory, it is possible to execute the program at high speed, which may improve performance of a system. | 03-31-2011 |
20110078379 | STORAGE CONTROL UNIT AND DATA MANAGEMENT METHOD - An I/O processor determines whether or not the amount of dirty data on a cache memory exceeds a threshold value and, if the determination is that this threshold value has been exceeded, writes a portion of the dirty data of the cache memory to a storage device. If a power source monitoring and control unit detects a voltage abnormality of the supplied power, the power monitoring and control unit maintains supply of power using power from a battery, so that a processor receives supply of power from the battery and saves the dirty data stored on the cache memory to a non-volatile memory. | 03-31-2011 |
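The threshold-triggered write-back in entry 20110078379 can be sketched as follows; the function name and its `threshold` and `batch` parameters are hypothetical, and dicts/sets model the cache, the dirty-data bookkeeping, and the storage device.

```python
def destage_if_needed(cache, dirty_keys, backing_store, threshold, batch):
    """If the amount of dirty data in the cache exceeds the threshold,
    write a portion of it (up to `batch` entries) to the storage device.
    Returns the number of entries destaged."""
    if len(dirty_keys) <= threshold:
        return 0  # dirty amount within bounds; nothing to do
    victims = list(dirty_keys)[:batch]   # a portion of the dirty data
    for key in victims:
        backing_store[key] = cache[key]  # write back to the storage device
        dirty_keys.discard(key)          # entry is now clean
    return len(victims)
```

The abstract's second mechanism (saving remaining dirty data to non-volatile memory on battery power when a voltage abnormality is detected) would reuse the same loop with the non-volatile memory as the destination.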
20110087839 | APPARATUSES, METHODS AND SYSTEMS FOR A SMART ADDRESS PARSER - The apparatus, methods and systems for a smart address parser (hereinafter, “SAP”) described herein implement a text parser whereby users may enter a text string, such as manually via an input field. The SAP processes the input address string to extract address elements for storage, display, reporting, and/or use in a wide variety of back-end applications. In various embodiments and implementations, the SAP may facilitate: separation and identification of address components regardless of the order in which they are supplied in the input address string; supplementation of missing address information; correction and/or recognition of misspelled terms, abbreviations, alternate names, and/or the like variants of address elements; recognition of unique addresses based on minimal but sufficient input identifiers; and/or the like. | 04-14-2011 |
20110087840 | EFFICIENT LINE AND PAGE ORGANIZATION FOR COMPRESSION STATUS BIT CACHING - One embodiment of the present invention sets forth a technique for performing a memory access request to compressed data within a virtually mapped memory system comprising an arbitrary number of partitions. A virtual address is mapped to a linear physical address, specified by a page table entry (PTE). The PTE is configured to store compression attributes, which are used to locate compression status for a corresponding physical memory page within a compression status bit cache. The compression status bit cache operates in conjunction with a compression status bit backing store. If compression status is available from the compression status bit cache, then the memory access request proceeds using the compression status. If the compression status bit cache misses, then the miss triggers a fill operation from the backing store. After the fill completes, memory access proceeds using the newly filled compression status information. | 04-14-2011 |
20110099332 | METHOD AND SYSTEM OF OPTIMAL CACHE ALLOCATION IN IPTV NETWORKS - In an IPTV network, one or more caches may be provided at the network nodes for storing video content in order to reduce bandwidth requirements. Cache functions such as cache effectiveness and cacheability may be defined and optimized to determine the optimal size and location of cache memory and to determine optimal partitioning of cache memory for the unicast services of the IPTV network. | 04-28-2011 |
20110107030 | SELF-ORGANIZING METHODOLOGY FOR CACHE COOPERATION IN VIDEO DISTRIBUTION NETWORKS - A content distribution network (CDN) comprises content storage nodes (CSNs), or caches, whose storage space preferentially stores the more popular content objects. | 05-05-2011 |
20110113195 | SYSTEMS AND METHODS FOR AVOIDING PERFORMANCE DEGRADATION DUE TO DISK FRAGMENTATION IN A NETWORK CACHING DEVICE - Storage space on one or more hard disks of a network caching appliance is divided into a plurality S of stripes. Each stripe is a physically contiguous section of the disk(s), and is made up of a plurality of sectors. Content, whether in the form of objects or otherwise (e.g., byte-cache stream information), is written to the stripes one at a time, and when the entire storage space has been written the stripes are recycled as a whole, one at a time. In the event of a cache hit, if the subject content is stored on an oldest D ones of the stripes, the subject content is rewritten to a currently written stripe, where 1≦D≦(S−1). | 05-12-2011 |
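The stripe-at-a-time recycling and oldest-D rewrite policy of entry 20110113195 might look roughly like the sketch below; stripe contents are modeled as dicts rather than contiguous disk sectors, and all names are illustrative.

```python
from collections import deque

class StripedCache:
    """Stripes fill one at a time; when all are full the oldest stripe is
    recycled as a whole. A hit in one of the D oldest stripes rewrites the
    content into the currently written stripe so it survives recycling."""
    def __init__(self, num_stripes, stripe_capacity, rewrite_depth):
        assert 1 <= rewrite_depth <= num_stripes - 1  # 1 <= D <= S-1
        self.cap = stripe_capacity
        self.D = rewrite_depth
        self.stripes = deque(dict() for _ in range(num_stripes))  # [0] = oldest
        self.index = {}  # key -> stripe dict holding it

    def write(self, key, value):
        cur = self.stripes[-1]              # currently written stripe
        if len(cur) >= self.cap:            # current stripe full:
            old = self.stripes.popleft()    # recycle the oldest stripe whole
            for k in old:
                self.index.pop(k, None)
            old.clear()
            self.stripes.append(old)
            cur = old
        cur[key] = value
        self.index[key] = cur

    def read(self, key):
        stripe = self.index.get(key)
        if stripe is None:
            return None                     # cache miss
        value = stripe[key]
        # Hit in one of the D oldest stripes: rewrite forward.
        if any(stripe is s for s in list(self.stripes)[:self.D]):
            del stripe[key]
            del self.index[key]
            self.write(key, value)
        return value
```

Recycling whole stripes keeps writes physically contiguous, which is the fragmentation-avoidance point of the abstract.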
20110113196 | AVOIDING MEMORY ACCESS LATENCY BY RETURNING HIT-MODIFIED WHEN HOLDING NON-MODIFIED DATA - A microprocessor is configured to communicate with other agents on a system bus and includes a cache memory and a bus interface unit coupled to the cache memory and to the system bus. The bus interface unit receives from another agent coupled to the system bus a transaction to read data from a memory address, determines whether the cache memory is holding the data at the memory address in an exclusive state (or a shared state in certain configurations), and asserts a hit-modified signal on the system bus and provides the data on the system bus to the other agent when the cache memory is holding the data at the memory address in an exclusive state. Thus, the delay of an access to the system memory by the other agent is avoided. | 05-12-2011 |
20110113197 | QUEUE ARRAYS IN NETWORK DEVICES - A queue descriptor including a head pointer pointing to the first element in a queue and a tail pointer pointing to the last element in the queue is stored in memory. In response to a command to perform an enqueue or dequeue operation with respect to the queue, fetching from the memory to a cache only one of either the head pointer or tail pointer and returning to the memory from the cache portions of the queue descriptor modified by the operation. | 05-12-2011 |
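The single-pointer fetch described in entry 20110113197 is a hardware caching optimization; the sketch below models only the logical invariant it relies on, that an enqueue manipulates the tail pointer and a dequeue the head pointer, with memory modeled as a dict of address to node. Names are illustrative.

```python
class QueueDescriptor:
    """Head and tail pointers into a linked list kept in 'memory'."""
    def __init__(self):
        self.head = None  # address of first element
        self.tail = None  # address of last element
        self.count = 0


def enqueue(memory, desc, addr, value):
    # An enqueue only needs the tail pointer of the descriptor
    # (so only the tail half need be fetched into the cache).
    memory[addr] = {"value": value, "next": None}
    if desc.tail is not None:
        memory[desc.tail]["next"] = addr
    else:
        desc.head = addr  # queue was empty
    desc.tail = addr
    desc.count += 1


def dequeue(memory, desc):
    # A dequeue only needs the head pointer of the descriptor.
    if desc.head is None:
        return None
    node = memory.pop(desc.head)
    desc.head = node["next"]
    if desc.head is None:
        desc.tail = None  # queue drained
    desc.count -= 1
    return node["value"]
```

In the patent's setting the payoff is that only the modified half of the descriptor is written back from the cache to memory after each operation.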
20110119444 | ADAPTIVE CACHING OF DATA - Data access is facilitated by employing local caches and an adaptive caching strategy. Specific data is stored in each local cache and consistency is maintained between the caches. To maintain consistency, adaptive caching structures are used. The members of an adaptive caching structure are selected based on a sharing context, such as those members having a chosen association identifier or those members not having the chosen association identifier. | 05-19-2011 |
20110138123 | Managing Data Storage as an In-Memory Database in a Database Management System - System, method, computer program product embodiments and combinations and sub-combinations thereof for managing data storage as an in-memory database in a database management system (DBMS) are provided. In an embodiment, a specialized database type is provided as a parameter of a native DBMS command. A database hosted entirely in-memory of the DBMS is formed when the specialized database type is specified. | 06-09-2011 |
20110145498 | INSTRUMENTATION OF HARDWARE ASSISTED TRANSACTIONAL MEMORY SYSTEM - Monitoring performance of one or more architecturally significant processor caches coupled to a processor. The methods include executing an application on one or more processors coupled to one or more architecturally significant processor caches, where the application utilizes the architecturally significant portions of the architecturally significant processor caches. The methods further include at least one of generating metrics related to performance of the architecturally significant processor caches; implementing one or more debug exceptions related to performance of the architecturally significant processor caches; or implementing one or more transactional breakpoints related to performance of the architecturally significant processor caches as a result of utilizing the architecturally significant portions of the architecturally significant processor caches. | 06-16-2011 |
20110145499 | ASYNCHRONOUS FILE OPERATIONS IN A SCALABLE MULTI-NODE FILE SYSTEM CACHE FOR A REMOTE CLUSTER FILE SYSTEM - Asynchronous file operations in a scalable multi-node file system cache for a remote cluster file system, is provided. One implementation involves maintaining a scalable multi-node file system cache in a local cluster file system, and caching local file data in the cache by fetching file data on demand from the remote cluster file system into the cache over the network. The local file data corresponds to file data in the remote cluster file system. Local file information is asynchronously committed from the cache to the remote cluster file system over the network. | 06-16-2011 |
20110145500 | SEMICONDUCTOR DEVICE AND DATA PROCESSING SYSTEM - A high-speed, low-cost data processing system capable of ensuring expandability of memory capacity and having excellent usability while keeping constant latency is provided. The data processing system is configured to include a data processing device, a volatile memory, and a non-volatile memory. As the data processing device, the volatile memory, and the non-volatile memory are connected in series and the number of connection signals is reduced, the speed is increased while keeping expandability of memory capacity. The data processing device measures latency and performs a latency correcting operation to keep the latency constant. When data in the non-volatile memory is transferred to the volatile memory, error correction is performed to improve reliability. The data processing system formed of this plurality of chips is configured as a data processing system module in which the chips are stacked on one another and connected by a ball grid array (BGA) or a chip-wiring technology. | 06-16-2011 |
20110153935 | NUMA-AWARE SCALING FOR NETWORK DEVICES - The present disclosure describes a method and apparatus for network traffic processing in a non-uniform memory access architecture system. The method includes allocating a Tx/Rx Queue pair for a node, the Tx/Rx Queue pair allocated in a local memory of the node. The method further includes routing network traffic to the allocated Tx/Rx Queue pair. The method may include designating a core in the node for network traffic processing. Of course, many alternatives, variations and modifications are possible without departing from this embodiment. | 06-23-2011 |
20110153936 | Aggregate Symmetric Multiprocessor System - An aggregate symmetric multiprocessor (SMP) data processing system includes a first SMP computer including at least first and second processing units and a first system memory pool and a second SMP computer including at least third and fourth processing units and second and third system memory pools. The second system memory pool is a restricted access memory pool inaccessible to the fourth processing unit and accessible to at least the second and third processing units, and the third system memory pool is accessible to both the third and fourth processing units. An interconnect couples the second processing unit in the first SMP computer for load-store coherent, ordered access to the second system memory pool in the second SMP computer, such that the second processing unit in the first SMP computer and the second system memory pool in the second SMP computer form a synthetic third SMP computer. | 06-23-2011 |
20110153937 | SYSTEMS AND METHODS FOR MAINTAINING TRANSPARENT END TO END CACHE REDIRECTION - The present disclosure presents systems and methods for maintaining original source and destination IP addresses of a request while performing intermediary cache redirection. An intermediary receives a request from a client destined to a server identifying a client IP address as a source IP address and a server IP address as a destination IP address. The intermediary transmits the request to a cache server, the request maintaining the original IP addresses and identifying a MAC address of the cache server as the destination MAC address. The intermediary receives the request back from the cache server responsive to a cache miss, the received request maintaining the original source and destination IP addresses. The intermediary identifies that this request is coming from the cache server via one or more data link layer properties of the transport layer connection. The intermediary then transmits to the server the request identifying the client IP address as the source IP address and the server IP address as the destination IP address. | 06-23-2011 |
20110153938 | SYSTEMS AND METHODS FOR MANAGING STATIC PROXIMITY IN MULTI-CORE GSLB APPLIANCE - The present invention is directed towards systems and methods for providing static proximity load balancing via a multi-core intermediary device. An intermediary device providing global server load balancing identifies a size of a location database comprising static proximity information. The intermediary device stores the location database to an external storage of the intermediary device responsive to determining the size of the location database is greater than a predetermined threshold. A first packet processing engine on the device receives a domain name service request for a first location, determines that proximity information for the first location is not stored in a first memory cache, transmits a request to a second packet processing engine for proximity information of the first location, and transmits a request to the external storage for proximity information of the first location responsive to the second packet processing engine not having the proximity information. | 06-23-2011 |
20110153939 | SEMICONDUCTOR DEVICE, CONTROLLER ASSOCIATED THEREWITH, SYSTEM INCLUDING THE SAME, AND METHODS OF OPERATION - In one embodiment, the semiconductor device includes a data control unit configured to selectively process data for writing to a memory. The data control unit is configured to enable a processing function from a group of processing functions based on a mode register command during a write operation, the group of processing functions including at least three processing functions. The enabled processing function may be performed based on a signal received over a single pin associated with the group of processing functions. In another embodiment, the semiconductor device includes a data control unit configured to process data read from a memory. The data control unit is configured to enable a processing function from a group of processing functions based on a mode register command during a read operation. Here, the group of processing functions includes at least two processing functions. | 06-23-2011 |
20110153940 | METHOD AND APPARATUS FOR COMMUNICATING DATA BETWEEN PROCESSORS IN MOBILE TERMINAL - A data communication method between processors in a portable terminal and an apparatus thereof are provided. The method includes storing data to be transmitted from a first processor to a second processor in a transmission buffer, determining a size of a free space in a shared memory, sequentially transmitting the data stored in the transmission buffer to the shared memory in units of the size of the free space to the shared memory, and reading out the data transmitted to the shared memory and storing the read data in a reception buffer by a second processor. | 06-23-2011 |
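The chunked shared-memory transfer of entry 20110153940 reduces, in software terms, to sending the transmission buffer in free-space-sized pieces. This loop is a simplification that ignores the real concurrency between the two processors; all names are illustrative.

```python
def transfer(tx_buffer, shared_capacity, rx_buffer):
    """Move data from a transmission buffer through a bounded shared
    memory to a reception buffer, in chunks no larger than the free
    space available in the shared memory. Returns bytes/items moved."""
    sent = 0
    while sent < len(tx_buffer):
        free = shared_capacity                  # size of free space determined
        chunk = tx_buffer[sent:sent + free]     # first processor writes a chunk
        shared_memory = list(chunk)             # ...into the shared memory
        rx_buffer.extend(shared_memory)         # second processor reads it out
        sent += len(chunk)                      # shared memory is free again
    return sent
```

In the patent the free-space size is re-measured each iteration (the second processor drains the shared memory asynchronously); here it is held constant for simplicity.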
20110161584 | SYSTEM AND METHOD FOR INQUIRY CACHING IN A STORAGE AREA NETWORK - A system and method for servicing an inquiry command from a host device requesting inquiry data about a sequential device on a storage area network. The inquiry data may be cached by circuitry coupled to the host device and the sequential device. The circuitry may reside in a router. In some embodiments, depending upon whether the sequential device is available to process the inquiry command, the circuitry may forward the inquiry command to the sequential device or process the inquiry command itself, utilizing a cached version of the inquiry data. The cached version may include information indicating that the sequential device is not available. In some embodiments, regardless of whether the sequential device is available, the circuitry may process the inquiry command and return the inquiry data from a cache memory. | 06-30-2011 |
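The availability-dependent inquiry handling of entry 20110161584 can be sketched as below; the device and cache are plain dicts, and the field names (`available`, `inquiry`, `vendor`) are invented for illustration rather than taken from the SCSI INQUIRY format.

```python
def handle_inquiry(device, cache):
    """Router-side sketch: if the sequential device can process the
    inquiry, forward it and refresh the cache; if the device is busy
    (e.g. streaming a backup), answer from the cached copy and mark
    the device as not available."""
    if device["available"]:
        # Forward to the device and cache the fresh inquiry data.
        cache["inquiry"] = dict(device["inquiry"], available=True)
        return cache["inquiry"]
    # Device busy: serve the cached version, flagged unavailable.
    return dict(cache.get("inquiry", {}), available=False)
```

Serving the cached copy keeps the host from stalling on a tape drive that cannot interrupt its current operation.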
20110167222 | UNBOUNDED TRANSACTIONAL MEMORY SYSTEM AND METHOD - An unbounded transactional memory system which can process overflow data. The unbounded transactional memory system may include a host processor, a memory, and a memory processor. The host processor may include an execution unit to perform a transaction, and a cache to temporarily store data. The memory processor may store overflow data in overflow storage included in the memory in response to an overflow event in which the overflow data is generated in the cache during the transaction. | 07-07-2011 |
20110173390 | STORAGE MANAGEMENT METHOD AND STORAGE MANAGEMENT SYSTEM - There is provided a storage management system capable of flexible division management and of enhancing security of the entire system, by providing functions as program products in each division unit of a storage subsystem. The storage management system has a program-product management table, stored in a shared memory in the storage subsystem, showing the presence or absence of the program products that provide the management functions of the respective resources to the respective SLPRs. When the management functions are executed by the program products in a user's SLPR in accordance with instructions from the user, the table is consulted and execution of any management function lacking its program product is restricted. | 07-14-2011 |
20110191539 | Coprocessor session switching - A data processing apparatus is provided, configured to carry out data processing operations on behalf of a main data processing apparatus, comprising a coprocessor core configured to perform the data processing operations and a reset controller configured to cause the coprocessor core to reset. The coprocessor core performs its data processing in dependence on current configuration data stored therein, the current configuration data being associated with a current processing session. The reset controller is configured to receive pending configuration data from the main data processing apparatus, the pending configuration data associated with a pending processing session, and to store the pending configuration data in a configuration data queue. The reset controller is configured, when the coprocessor core resets, to transfer the pending configuration data from the configuration data queue to be stored in the coprocessor core, replacing the current configuration data. | 08-04-2011 |
20110191540 | PROCESSING READ AND WRITE REQUESTS IN A STORAGE CONTROLLER - Provided are a method, system, and computer program product for processing read and write requests in a storage controller. A host adaptor in the storage controller receives a write request from a host system for a storage address in a storage device. The host adaptor sends write information indicating the storage address updated by the write request to a device adaptor in the storage controller. The host adaptor writes the write data to a cache in the storage controller. The device adaptor adds the storage address indicated in the write information to a modified storage address list stored in the device adaptor, wherein the modified storage address list indicates modified data in the cache for storage addresses in the storage device. | 08-04-2011 |
20110197028 | Channel Controller For Multi-Channel Cache - Disclosed herein is a channel controller for a multi-channel cache memory, and a method that includes receiving a memory address associated with a memory access request to a main memory of a data processing system; translating the memory address to form a first access portion identifying at least one partition of a multi-channel cache memory, and at least one further access portion, where the at least one partition includes at least one channel; and applying the at least one further access portion to the at least one channel of the multi-channel cache memory. | 08-11-2011 |
20110197029 | HARDWARE ACCELERATION OF A WRITE-BUFFERING SOFTWARE TRANSACTIONAL MEMORY - A method and apparatus for accelerating a software transactional memory (STM) system is described herein. Annotation fields are associated with lines of a transactional memory. An annotation field associated with a line of the transactional memory is initialized to a first value upon starting a transaction. In response to encountering a read operation in the transaction, the annotation field is checked. If the annotation field includes the first value, the read is serviced from the line of the transactional memory without having to search an additional write space. A second or third value in the annotation field potentially indicates whether a read operation missed the transactional memory or a tentative value is stored in a write space. Additionally, a further bit in the annotation field may be utilized to indicate whether previous read operations have been logged, allowing subsequent redundant read logging to be reduced. | 08-11-2011 |
20110202724 | IOMMU Architected TLB Support - Embodiments allow a smaller, simpler hardware implementation of an input/output memory management unit (IOMMU) having improved translation behavior that is independent of page table structures and formats. Embodiments also provide device-independent structures and methods of implementation, allowing greater generality of software (fewer specific software versions, in turn reducing development costs). | 08-18-2011 |
20110202725 | SOFTWARE-ACCESSIBLE HARDWARE SUPPORT FOR DETERMINING SET MEMBERSHIP - A method and processor supporting architected instructions for tracking and determining set membership, such as by implementing Bloom filters. The apparatus includes storage arrays (e.g., registers) and an execution core configured to store an indication that a given value is a member of a set, including by executing an architected instruction having an operand specifying the given value, wherein executing comprises applying a hash function to the value to determine an index into one of the storage arrays and setting a bit of the storage array corresponding to the index. An architected query instruction is later executed to determine if a query value is not a member of the set, including by applying the hash function to the query value to determine an index into the storage array and determining whether a bit at the index of the storage array is set. | 08-18-2011 |
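The architected insert/query instructions of entry 20110202725 implement a Bloom filter. A plain-software equivalent is sketched below, with SHA-256 standing in for the patent's unspecified hardware hash function and a Python integer standing in for the storage array of bits.

```python
import hashlib

class BloomFilter:
    """Software sketch of the architected instructions: hash the value to
    bit indexes and set/test those bits in a storage array."""
    def __init__(self, num_bits, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # the 'storage array', one bit per index

    def _indexes(self, value):
        # Derive num_hashes independent indexes from one hash function.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{value}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def insert(self, value):
        # The architected 'store membership' instruction: set the bits.
        for idx in self._indexes(value):
            self.bits |= 1 << idx

    def maybe_contains(self, value):
        # The architected query instruction: a clear bit proves the value
        # was never inserted; all-set bits mean only 'possibly a member'.
        return all(self.bits >> idx & 1 for idx in self._indexes(value))
```

As the abstract is careful to state, the query can only determine that a value is *not* a member: false positives are possible, false negatives are not.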
20110219187 | CACHE DIRECTORY LOOKUP READER SET ENCODING FOR PARTIAL CACHE LINE SPECULATION SUPPORT - In a multiprocessor system, with conflict checking implemented in a directory lookup of a shared cache memory, a reader set encoding permits dynamic recordation of read accesses. The reader set encoding includes an indication of a portion of a line read, for instance by indicating boundaries of read accesses. Different encodings may apply to different types of speculative execution. | 09-08-2011 |
20110225366 | Dual-Mode, Dual-Display Shared Resource Computing - A dual-mode, dual-display shared resource computing (SRC) device is usable to stream SRC content from a host SRC device while in an on-line mode and maintain functionality with the content during an off-line mode. Such remote SRC devices can be used to maintain multiple user-specific caches and to back-up cached content for multi-device systems. | 09-15-2011 |
20110225367 | MEMORY CACHE DATA CENTER - A data center system includes a memory cache coupled to a data center controller. The memory cache includes volatile memory and stores data that is persisted in a database in a different, remotely located data center system rather than in the data center system itself. The data center controller reads data from the memory cache and writes data to the memory cache. | 09-15-2011 |
20110225368 | Apparatus and Method For Context-Aware Mobile Data Management - A context of a mobile device is determined. A context preference of a user associated with the mobile device is determined. The context of the mobile device and the user context preference is transmitted to another node and responsively returned data is received. Available free space in the mobile device is determined. All data whose timestamp is within a predetermined threshold is cached. The data is cached in at least a portion of the free space. | 09-15-2011 |
20110231610 | MEMORY SYSTEM - According to one embodiment, free blocks included in a nonvolatile semiconductor memory are classified into a plurality of free block management lists. When a free block is acquired at normal priority, it is acquired from a free block management list in which the number of free blocks is larger than a first threshold. When a free block is acquired at high priority, it is acquired from a free block management list irrespective of the first threshold. | 09-22-2011 |
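The two-priority free-block acquisition of entry 20110231610 can be sketched as a scan over named free-block management lists; the list names in the usage below are invented, and real firmware would pick lists by wear level or other policy rather than dict order.

```python
def acquire_free_block(lists, threshold, high_priority=False):
    """Acquire a free block from one of several free-block management
    lists. At normal priority, only lists holding more than `threshold`
    blocks are eligible (reserving nearly-empty lists); at high priority
    the threshold is ignored. Returns (list_name, block) or None."""
    for name, blocks in lists.items():
        if blocks and (high_priority or len(blocks) > threshold):
            return name, blocks.pop()
    return None  # no eligible list has a free block
```

The threshold keeps normal allocations from draining a list that high-priority operations (e.g. garbage collection) may need.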
20110231611 | STORAGE APPARATUS AND CACHE CONTROL METHOD - A cache-resident area is optimized where cache residence control in units of LUs is applied to a storage apparatus that virtualizes capacity, by acquiring only a cache area whose size equals the physical capacity assigned to the LU. An LU, as a logical space resident in cache memory, is configured by a set of pages acquired by dividing a pool volume, a physical space created using a plurality of storage devices, into a predetermined size. When an LU to be resident in the cache memory is created, a capacity corresponding to the full size of the LU is not initially acquired in the cache memory; instead, a cache capacity equal to the physical capacity allocated to a new page is acquired each time a page is newly allocated, and the new page is made resident in the cache memory. | 09-22-2011 |
20110238914 | STORAGE APPARATUS AND DATA PROCESSING METHOD FOR THE SAME - The present invention aims for efficient use of storage capacity in a storage system by reducing the time taken for processing, including redundancy removal and data compression, executed on transferred data. | 09-29-2011 |
20110238915 | STORAGE SYSTEM - A switch device includes interfaces connected to a host, a first storage device, and a second storage device having a cache memory, and a processor executing receiving a copy command indicating to copy target data stored in the first storage device to the second storage device from the host, transmitting a reading out command indicating to read out the target data stored in the first storage device corresponding to the copy command, receiving the target data corresponding to the transmitted reading out command from the first storage device, and transmitting, to the second storage device, a writing command for writing the target data and release information indicating that the target data is releasable from the cache memory. | 09-29-2011 |
20110246719 | PROVISIONING A DISK OF A CLIENT FOR LOCAL CACHE - Embodiments provide systems, methods, apparatuses and computer program products configured to provide alternative desktop computing solutions. Embodiments generally provide client devices configured with a local cache storing a common base image, with access to a user overlay on a remote storage device. Embodiments provide methods for provisioning a local disk of a client for use as the local cache with minimal IT administrator input. | 10-06-2011 |
20110252199 | Data Placement Optimization Using Data Context Collected During Garbage Collection - Mechanisms are provided for data placement optimization during runtime of a computer program. The mechanisms detect cache misses in a cache of the data processing system and collect cache miss information for objects of the computer program. Data context information is generated for an object in an object access sequence of the computer program. The data context information identifies one or more additional objects accessed as part of the object access sequence in association with the object. The cache miss information is correlated with the data context information of the object. Data placement optimization is performed on the object, in the object access sequence, with which the cache miss information is associated. The data placement optimization places connected objects in the object access sequence in close proximity to each other in a memory structure of the data processing system. | 10-13-2011 |
20110258391 | APPARATUS, SYSTEM, AND METHOD FOR DESTAGING CACHED DATA - An apparatus, system, and method are disclosed for destaging cached data. A controller detects one or more write requests to store data in a backing store. The cache controller sends the write requests to a storage controller for a nonvolatile solid-state storage device. The storage controller receives the write requests and caches the data associated with the write requests in the nonvolatile solid-state storage device by appending the data to a log of the nonvolatile solid-state storage device. The log includes a sequential, log-based structure preserved in the nonvolatile solid-state storage device. The cache controller receives at least a portion of the data from the storage controller in a cache log order and destages the data to the backing store in the cache log order. The cache log order comprises an order in which the data was appended to the log of the nonvolatile solid-state storage device. | 10-20-2011 |
20110258392 | METHOD AND SYSTEM FOR PROVIDING DIGITAL RIGHTS MANAGEMENT FILES USING CACHING - A method for providing DRM files using caching includes identifying DRM files to be displayed in a file list in response to a request, decoding a number of first DRM files from among the identified DRM files and caching the first DRM files in a first memory space, and reading the first DRM files in the first memory space in response to the request. Then, a system displays the first DRM files as a file list in a display area. The second DRM files from among the identified DRM files other than the first DRM files are not initially decoded, and file data related to the second DRM files are cached in a second memory space. DRM files from among the second DRM files are subsequently decoded in response to a subsequent command. | 10-20-2011 |
20110264859 | MEMORY SYSTEM - A memory system according to an embodiment of the present invention comprises: a data managing unit | 10-27-2011 |
20110271055 | SYSTEM AND METHOD FOR LOW-LATENCY DATA COMPRESSION/DECOMPRESSION - A compression technique includes storing respective fixed-size symbols for each of a plurality of words in a data block, e.g., a cache line, into a symbol portion of a compressed data block, e.g., a compressed cache line, where each of the symbols provides information about a corresponding one of the words in the data block. Up to a first plurality of data segments are stored in a data portion of the compressed data block, each data segment corresponds to a unique one of the symbols in the compressed data block and a unique one of the words in the cache line. Up to a second plurality of dictionary entries are stored in the data portion of the compressed cache line. The dictionary entries can correspond to multiple ones of the symbols. | 11-03-2011 |
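The symbol-plus-dictionary layout described in the abstract above can be illustrated with a small sketch. This is not the patented format: the symbol codes (`Z` for a zero word, `D` for a dictionary reference, `L` for a literal data segment) and the two-pass construction are illustrative assumptions; the key idea shown is that each word gets a fixed-size symbol, unique words occupy their own data segments, and repeated words share dictionary entries.

```python
from collections import Counter

def compress_block(words):
    """Sketch: per-word fixed-size symbols plus a shared dictionary for repeats."""
    counts = Counter(words)
    # Words (other than zero) that occur more than once become dictionary entries.
    dictionary = [w for w in dict.fromkeys(words) if w != 0 and counts[w] > 1]
    dict_index = {w: i for i, w in enumerate(dictionary)}
    symbols, literals = [], []
    for w in words:
        if w == 0:
            symbols.append(("Z", None))           # zero word: symbol only, no data
        elif w in dict_index:
            symbols.append(("D", dict_index[w]))  # repeated word: shared dictionary entry
        else:
            symbols.append(("L", len(literals)))  # unique word: its own literal segment
            literals.append(w)
    return symbols, literals, dictionary

def decompress_block(symbols, literals, dictionary):
    """Reverse the sketch: expand each symbol back into its 'word'."""
    return [0 if k == "Z" else dictionary[i] if k == "D" else literals[i]
            for k, i in symbols]
```

A round trip over a toy "cache line" of words shows repeated words collapsing into one dictionary entry while unique words stay as literals.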
20110276760 | NON-COMMITTING STORE INSTRUCTIONS - Techniques are disclosed relating to a processor that supports a non-committing store instruction that is executable during a scouting thread to provide data to a subsequently executed load instruction. The processor may include a memory access unit configured to perform an instance of the non-committing store instruction by storing a value in an entry of a store buffer without committing the instance of the non-committing store instruction. In response to subsequently receiving an instance of a load instruction of the scouting thread that specifies a load from the memory address, the memory access unit is configured to perform the instance of the load instruction by retrieving the value. The memory access unit may retrieve the value from the store buffer or from a cache of the processor. | 11-10-2011 |
20110276761 | ACCELERATING SOFTWARE LOOKUPS BY USING BUFFERED OR EPHEMERAL STORES - A method and apparatus for accelerating lookups in an address based table is herein described. When an address and value pair is added to an address based table, the value is privately stored in the address to allow for quick and efficient local access to the value. In response to the private store, a cache line holding the value is transitioned to a private state, to ensure the value is not made globally visible. Upon eviction of the privately held cache line, the information is not written-back to ensure locality of the value. In one embodiment, the address based table includes a transactional write buffer to hold addresses, which correspond to tentatively updated values during a transaction. Accesses to the tentative values during the transaction may be accelerated through use of annotation bits and private stores as discussed herein. Upon commit of the transaction, the values are copied to the location to make the updates globally visible. | 11-10-2011 |
20110283065 | Information Processing Apparatus and Driver - According to one embodiment, an information processing apparatus includes a memory including a buffer area, a first storage, a second storage, and a driver. The buffer area is reserved in order to transfer data between the driver and a host system that requests data writing and data reading. The driver is configured to write data into the second storage and read data from the second storage using the first storage as a cache for the second storage. The driver is further configured to reserve a cache area in the memory for transfers between the buffer area and the first storage and between the buffer area and the second storage. | 11-17-2011 |
20110289275 | Fast Hit Override - In one embodiment, a cache comprises a tag memory and a comparator. The tag memory is configured to store tags of cache blocks stored in the cache, and is configured to output at least one tag responsive to an index corresponding to an input address. The comparator is coupled to receive the tag and a tag portion of the input address, and is configured to compare the tag to the tag portion to generate a hit/miss indication. The comparator comprises dynamic circuitry, and is coupled to receive a control signal which, when asserted, is defined to force a first result on the hit/miss indication independent of whether or not the tag portion matches the tag. The comparator also comprises circuitry coupled to receive the control signal and configured to inhibit a state change on an output of the dynamic circuitry during an evaluate phase of the dynamic circuitry to produce the first result responsive to an assertion of the control signal. | 11-24-2011 |
20110296107 | Latency-Tolerant 3D On-Chip Memory Organization - A mechanism is provided within a 3D stacked memory organization to spread or stripe cache lines across multiple layers. In an example organization, a 128B cache line takes eight cycles on a 16B-wide bus. Each layer may provide 32B. The first layer uses the first two of the eight transfer cycles to send the first 32B. The next layer sends the next 32B using the next two cycles of the eight transfer cycles, and so forth. The mechanism provides a uniform memory access. | 12-01-2011 |
20110296108 | Methods to Estimate Existing Cache Contents for Better Query Optimization - A method for estimating contents of a cache determines table descriptors referenced by a query, and scans each page header stored in the cache for the table descriptor. If the table descriptor matches any of the referenced table descriptors, a page count value corresponding to the matching referenced table descriptor is increased. Alternatively, a housekeeper thread periodically performs the scan and stores the page count values in a central lookup table accessible by threads during a query run. Alternatively, each thread independently maintains a hash table with page count entries corresponding to table descriptors for each table in the database system. A thread increases or decreases the page count value when copying or removing pages from the cache. A page count value for each referenced table descriptor is determined from a sum of the values in the hash tables. A master thread performs bookkeeping and prevents hash table overflows. | 12-01-2011 |
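The first variant in the abstract above — scanning every cached page header and counting pages per referenced table descriptor — is simple enough to sketch. This is an illustrative assumption of the data shapes (dict headers with a `table_descriptor` key), not the actual database engine's structures.

```python
from collections import Counter

def estimate_cached_pages(cache_page_headers, referenced_table_descriptors):
    """Sketch: one pass over the buffer cache's page headers, counting pages
    whose table descriptor matches one referenced by the query."""
    wanted = set(referenced_table_descriptors)
    counts = Counter()
    for header in cache_page_headers:
        if header["table_descriptor"] in wanted:
            counts[header["table_descriptor"]] += 1
    return counts  # page count per referenced table descriptor
```

The housekeeper-thread variant in the abstract would run the same scan periodically and publish the resulting counts in a central lookup table instead of computing them per query.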
20110296109 | CACHE CONTROL FOR ADAPTIVE STREAM PLAYER - An adaptive stream player that has control over whether a retrieved stream is cached in a local stream cache. For at least some of the stream portions requested by the player, before going out over the network, a cache control component first determines whether or not an acceptable version of the stream portion is present in a stream cache. If there is an acceptable version in the stream cache, that version is provided rather than having to request the stream portion over the network. For stream portions received over the network, the cache control component decides whether or not to cache that stream portion. Thus, the cache control component allows the adaptive stream player to work in offline scenarios and also allows the adaptive stream player to have rewind, pause, and other controls that use cached content. | 12-01-2011 |
20110296110 | Critical Word Forwarding with Adaptive Prediction - In an embodiment, a system includes a memory controller, processors and corresponding caches. The system may include sources of uncertainty that prevent the precise scheduling of data forwarding for a load operation that misses in the processor caches. The memory controller may provide an early response that indicates that data should be provided in a subsequent clock cycle. An interface unit between the memory controller and the caches/processors may predict a delay from a currently-received early response to the corresponding data, and may speculatively prepare to forward the data assuming that it will be available as predicted. The interface unit may monitor the delays between the early response and the forwarding of the data, or at least the portion of the delay that may vary. Based on the measured delays, the interface unit may modify the subsequently predicted delays. | 12-01-2011 |
20110296111 | INTERFACE FOR ACCESSING AND MANIPULATING DATA - A system and method for an interface for accessing and manipulating data to allow access to data on a storage module on a network based system. The data is presented as a virtual disk for the local system through a hardware interface that emulates a disk interface. The system and method incorporates features to improve the retrieval and storage performance of frequently accessed data such as partition information, operating system files, or file system related information through the use of local caching and difference calculations. This system and method may be used to replace some, or all, of the fixed storage in a device. The system and method may provide both online and offline access to the data. | 12-01-2011 |
20110302371 | METHOD OF OPERATING A COMPUTING DEVICE TO PERFORM MEMOIZATION - This invention relates to a method ( | 12-08-2011 |
20110307661 | MULTI-PROCESSOR CHIP WITH SHARED FPGA EXECUTION UNIT AND A DESIGN STRUCTURE THEREOF - An integrated circuit chip having plural processors with a shared field programmable gate array (FPGA) unit, a design structure thereof, and method for allocating the shared FPGA unit. A method includes storing a plurality of data that define a plurality of configurations of a field programmable gate array (FPGA), wherein the FPGA is arranged in the execution pipeline of at least one processor; selecting one of the plurality of data; and programming the FPGA based on the selected one of the plurality of data. | 12-15-2011 |
20110314223 | SYSTEM FOR PROTECTING AGAINST CACHE RESTRICTION VIOLATIONS IN A MEMORY - An apparatus comprising a plurality of tag circuits, a plurality of compare circuits and a processing circuit. The plurality of tag circuits may each be configured to store memory mapping data. The plurality of compare circuits may each be configured to generate a respective compare result in response to a match between the memory mapping data of a respective one of the tag circuits and a respective one of a plurality of tag fields. The processing circuit may be configured to receive each of the compare results from the plurality of compare circuits. The processing circuit may also be configured to count occurrences of the matches. If more than one match is identified within a predetermined time, the processing circuit may invalidate the memory mapping data and the tag field. If more than one match is identified within a predetermined time, the processing circuit may also re-fetch the memory mapping data. | 12-22-2011 |
20110314224 | Apparatus and method for handling access operations issued to local cache structures within a data processing apparatus - An apparatus and method are provided for handling access operations issued to local cache structures within a data processing apparatus. The data processing apparatus comprises a plurality of processing units each having a local cache structure associated therewith. Shared access coordination circuitry is also provided for coordinating the handling of shared access operations issued to any of the local cache structures. For a shared access operation, the access control circuitry associated with the local cache structure to which that shared access operation is issued will perform a local access operation to that local cache structure, and in addition will issue a shared access signal to the shared access coordination circuitry. For a local access operation, the access control circuitry would normally perform a local access operation on the associated local cache structure, and not notify the shared access coordination circuitry. However, if an access operation extension value is set, then the access control circuitry treats such a local access operation as a shared access operation. Such an approach ensures correct operation even after an operating system and/or an application program are migrated from one processing unit to another. | 12-22-2011 |
20110320714 | MAINFRAME STORAGE APPARATUS THAT UTILIZES THIN PROVISIONING - Each actual page inside a pool is configured from a plurality of actual tracks, and each virtual page inside a virtual volume is configured from a plurality of virtual tracks. A storage control apparatus of a mainframe system has management information that includes information denoting a track in which there exists a user record, which is a record including user data (the data used by a host apparatus of a mainframe system). Based on the management information, a controller identifies an actual page that is configured only from tracks that do not comprise the user record, and cancels the allocation of the identified actual page to the virtual page. | 12-29-2011 |
20110320715 | IDENTIFYING TRENDING CONTENT ITEMS USING CONTENT ITEM HISTOGRAMS - Within a content item set, particular content items may be identified as trending, based on changes in a frequency of references to the content items. For example, users of a social network may reference web resources by posting the uniform resource locators (URLs) thereof in messages, and trending web resources may be identified by detecting changes in the frequencies of such references. These trends may be tracked by counting such references in content item histograms, and by computing trend scores at the time of detecting each reference to a content item. Trending content items may then be identified at a second time by comparing the trend scores after decaying the trend scores of respective content items, based on the period between the second time and the last reference time of the last detected reference to the content item. | 12-29-2011 |
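The scoring mechanism in the abstract above — update a trend score at each detected reference, then decay it by the elapsed time when comparing at a later moment — resembles an exponentially decayed counter. The sketch below is an illustrative assumption (class name, half-life parameterization, and decay form are mine), not the claimed method.

```python
import math

class TrendCounter:
    """Sketch: per-item reference counts that decay exponentially between
    references, so bursts of recent references score higher than old ones."""
    def __init__(self, half_life=3600.0):
        self.decay = math.log(2) / half_life  # decay rate from half-life in seconds
        self.scores = {}                      # item -> (score, last_reference_time)

    def reference(self, item, now):
        """Record one reference at time `now`; return the updated trend score."""
        score, last = self.scores.get(item, (0.0, now))
        # Decay the prior score over the interval since the last reference, add 1.
        score = score * math.exp(-self.decay * (now - last)) + 1.0
        self.scores[item] = (score, now)
        return score

    def current_score(self, item, now):
        """Score at `now`, decayed since the last detected reference."""
        score, last = self.scores.get(item, (0.0, now))
        return score * math.exp(-self.decay * (now - last))
```

With a 10-second half-life, a single reference scores 1.0 immediately and 0.5 ten seconds later; a second reference at that moment lifts the score to 1.5, so recently re-referenced items rank above stale ones.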
20110320716 | LOADING AND UNLOADING A MEMORY ELEMENT FOR DEBUG - A method of debugging a memory element is provided. The method includes initializing a line fetch controller with at least one of write data and read data; utilizing at least two separate clocks for performing at least one of write requests and read requests based on the at least one of the write data and the read data; and debugging the memory element based on the at least one of write requests and read requests. | 12-29-2011 |
20110320717 | STORAGE CONTROL APPARATUS, STORAGE SYSTEM AND METHOD - A storage control apparatus includes a memory configured to store access management information concerning access from a host to each of a plurality of logical volumes, and a controller. When receiving an entirety of updated data from the host, the controller refers to the access management information read from the memory and, on the basis of that information, sets a write mode for data transfer from each of the plurality of logical volumes to the corresponding physical volume to one of: a difference data write mode, in which difference data indicating a difference between the entirety of data stored in a storage apparatus and the entirety of updated data is written into the storage apparatus; and an entire data write mode, in which the entirety of updated data is written into the storage apparatus. | 12-29-2011 |
20110320718 | READING OR WRITING TO MEMORY - To increase the efficiency of a running application, it is determined, on a block-size-specific basis, whether using a cache or accessing a storage directly is more efficient; the determined memory type is then used for a data stream having the corresponding block size. | 12-29-2011 |
20120005429 | REUSING STYLE SHEET ASSETS - In a first embodiment of the present invention, a method is provided comprising: parsing a document, wherein the document contains at least one reference to a style sheet; for each referenced style sheet: determining if a ruleset corresponding to the referenced style sheet is contained in a first local cache; if the ruleset corresponding to the referenced style sheet is contained in the first local cache, retrieving the ruleset from the first local cache; if the referenced style sheet is not contained in the first local cache, parsing the referenced style sheet to derive a ruleset; and applying the ruleset(s) to the document to derive a layout for displaying the document. | 01-05-2012 |
20120017045 | MULTI-RESOLUTION CACHE MONITORING - Multi-resolution cache monitoring devices and methods are provided. Multi-resolution cache devices illustratively have a cache memory, an interface, an information unit, and a processing unit. The interface receives a request for data that may be included in the cache memory. The information unit has state information for the cache memory. The state information is organized in a hierarchical structure. The processing unit searches the hierarchical structure for the requested data. | 01-19-2012 |
20120017046 | UNIFIED MANAGEMENT OF STORAGE AND APPLICATION CONSISTENT SNAPSHOTS - A storage management application of a storage array is operable to create a new volume on the storage device array, and to automatically configure, responsive to user selection of an application protection profile, data protection services for application data to be stored on the volume, and/or, responsive to user selection of an application performance profile, application specific performance parameters. The application protection profile specifies scheduling and replication of snapshots for application data to be stored on the volume, and the application performance profile specifies performance parameters such as setting a block size, enabling or modifying a data caching algorithm, turning on or modifying data compression, etc. The scheduling, replication and/or application performance may be managed by a daemon associated with the storage management application which communicates with an agent associated with an application server on which the application executes. | 01-19-2012 |
20120017047 | DATA VAULTING IN EMERGENCY SHUTDOWN - A data storage apparatus includes a processor, a write cache in operable communication with the processor, an auxiliary storage device in operable communication with the write cache, and a temporary power source in electrical communication with each of the processor, write cache, and auxiliary storage device for supplying power in the event of a loss of primary, external power. The auxiliary storage device is dimensioned to have sufficient size for holding dirty pages cached in the write cache, and the temporary power source is configured with sufficient energy for, subsequent to the loss of the external power, powering the processor, the write cache, and the auxiliary storage device for an entire duration of a backup process. | 01-19-2012 |
20120017048 | INTER-FRAME TEXEL CACHE - Methods, apparatuses, and systems are presented for caching. A cache memory area may be used for storing data from memory locations in an original memory area. The cache memory area may be used in conjunction with a repeatedly updated record of storage associated with the cache memory area. The repeatedly updated record of storage can thus provide a history of data storage associated with the cache memory area. The cache memory area may be loaded with entries previously stored in the cache memory area, by utilizing the repeatedly updated record of storage. In this manner, the record may be used to “warm up” the cache memory area, loading it with data entries that were previously cached and may be likely to be accessed again if repetition of memory accesses exists in the span of history captured by the repeatedly updated record of storage. | 01-19-2012 |
20120023294 | MEMORY DEVICE AND METHOD HAVING ON-BOARD PROCESSING LOGIC FOR FACILITATING INTERFACE WITH MULTIPLE PROCESSORS, AND COMPUTER SYSTEM USING SAME - A memory device includes an on-board processing system that facilitates the ability of the memory device to interface with a plurality of processors operating in a parallel processing manner. The processing system includes circuitry that performs processing functions on data stored in the memory device in an indivisible manner. More particularly, the system reads data from a bank of memory cells or cache memory, performs a logic function on the data to produce results data, and writes the results data back to the bank or the cache memory. The logic function may be a Boolean logic function or some other logic function. | 01-26-2012 |
20120030427 | Cache Control Method, Node Apparatus, Manager Apparatus, and Computer System - Disclosed is a computer system that includes a first apparatus, which stores data and metadata in a storage, and multiple units of a second apparatus, which store a copy of the data and metadata of the first apparatus in a cache. The first apparatus acquires throughput achieved when the units of the second apparatus access the data in the storage as first access information, acquires throughput achieved when the units of the second apparatus access data thereof as second access information, and selects either a first judgment mode or a second judgment mode in accordance with the first access information and the second access information. This reduces the amount of network traffic for metadata acquisition, thereby increasing the speed of data access. | 02-02-2012 |
20120036324 | METHOD AND SYSTEM FOR REVISITING PRIOR NAVIGATED PAGES AND PRIOR EDITS - A system and method for navigating or editing may include storing multiple forward or redo stacks and a single back or undo stack. The forward or undo stacks may include separate stacks for each page from which navigation occurs to a page of lower hierarchical level or for each operation for which another operation is subsequently performed. Positions of references in the forward or redo stacks may be modified in response to navigations or edits to place a last navigated page or operation at the top of the stack. The timing of such movement of references may be optimized. | 02-09-2012 |
20120036325 | MEMORY COMPRESSION POLICIES - Techniques are disclosed for managing memory within a virtualized system that includes a memory compression cache. Generally, the virtualized system may include a hypervisor configured to use a compression cache to temporarily store memory pages that have been compressed to conserve memory space. A “first-in touch-out” (FITO) list may be used to manage the size of the compression cache by monitoring the compressed memory pages in the compression cache. Each element in the FITO list corresponds to a compressed page in the compression cache. Each element in the FITO list records a time at which the corresponding compressed page was stored in the compression cache (i.e. an age). A size of the compression cache may be adjusted based on the ages of the pages in the compression cache. | 02-09-2012 |
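The "first-in touch-out" (FITO) list in the abstract above can be sketched as an insertion-ordered map from compressed page to insertion time: a touch (access/decompress) removes the page, and a periodic trim evicts pages whose recorded age exceeds a limit, which in effect shrinks the compression cache when its contents go untouched. Class and method names, and the fixed `max_age` policy, are illustrative assumptions, not the patented mechanism.

```python
from collections import OrderedDict

class CompressionCache:
    """Sketch of a FITO list: entries kept in insertion (age) order,
    removed on touch, and evicted once older than max_age."""
    def __init__(self, max_age):
        self.max_age = max_age
        self.fito = OrderedDict()  # page_id -> time stored (insertion order = age order)

    def store(self, page_id, now):
        """A page was compressed into the cache at time `now`."""
        self.fito[page_id] = now
        self.fito.move_to_end(page_id)   # re-stored pages become youngest again

    def touch(self, page_id):
        """A touch (page accessed, hence decompressed) leaves the compression cache."""
        self.fito.pop(page_id, None)

    def trim(self, now):
        """Evict pages whose age exceeds max_age; return the evicted page ids."""
        evicted = []
        while self.fito:
            page_id, stored_at = next(iter(self.fito.items()))
            if now - stored_at <= self.max_age:
                break                    # oldest survivor found; the rest are younger
            self.fito.popitem(last=False)
            evicted.append(page_id)
        return evicted
```

Because the `OrderedDict` keeps insertion order, the oldest entry is always at the front, so `trim` stops at the first page still within the age limit.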
20120042125 | Systems and Methods for Efficient Sequential Logging on Caching-Enabled Storage Devices - A computer-implemented method for efficient sequential logging on caching-enabled storage devices may include 1) identifying a storage device with a cache, 2) allocating space on the storage device for a sequential log, 3) calculating a target size for the sequential log based at least in part on an input/output load directed to the sequential log, and then 4) restricting the sequential log to a portion of the allocated space corresponding to the target size. Various other methods, systems, and computer-readable media are also disclosed. | 02-16-2012 |
20120047328 | DATA DE-DUPLICATION FOR SERIAL-ACCESS STORAGE MEDIA - Data storage and retrieval methods and apparatus are provided for facilitating data de-duplication for serial-access storage media such as tape. During data storage, input data is divided into a succession of chunks and, for each chunk, a corresponding data item is written to the storage media. The data item comprises the chunk data itself where it is the first occurrence of that data, and otherwise comprises a chunk-data identifier identifying that chunk of subject data. To facilitate reconstruction of the original data on read-back from the storage media a cache ( | 02-23-2012 |
20120072665 | Caching of a Site Model in a Hierarchical Modeling System for Network Sites - Disclosed are various embodiments for caching of a hierarchical model of a network site. Upon receiving a request to resolve a network site, a hierarchical site model associated with a network site is retrieved. A directory model associated with the network site is also retrieved. A caching process is initiated that retrieves at least a subset of page models and loads them into a cache. The caching process is executed in parallel with the processing of the hierarchical site model. | 03-22-2012 |
20120072666 | INTEGRATED CIRCUIT COMPRISING TRACE LOGIC AND METHOD FOR PROVIDING TRACE INFORMATION - An integrated circuit comprises trace logic for operably coupling to at least one memory element and for providing trace information for a signal processing system. The trace logic comprises trigger detection logic for detecting at least one trace trigger, memory access logic arranged to perform, upon detection of the at least one trace trigger, at least one read operation for at least one memory location of the at least one memory element associated with the at least one detected trigger, memory content message generation logic arranged to generate at least one memory content message comprising information relating to a result of the at least one read operation performed by the memory access logic, and output logic for outputting the at least one memory content message. | 03-22-2012 |
20120079199 | INTELLIGENT WRITE CACHING FOR SEQUENTIAL TRACKS - Method, system, and computer program product embodiments for, in a computing storage environment for destaging data from nonvolatile storage (NVS) to a storage unit, write caching for sequential tracks by a processor device are provided. If a first track is determined to be sequential, and an earlier track is also determined to be sequential, a temporal bit associated with the earlier track is cleared to allow for destage of data of the earlier track. If a temporal bit for one of a plurality of additional tracks in one of a plurality of strides in a modified cache is determined to be not set, a stride associated with the one of the plurality of additional tracks is selected for a destage operation. If the NVS exceeds a predetermined storage threshold, a predetermined one of the plurality of strides is selected for the destage operation. | 03-29-2012 |
20120089781 | MECHANISM FOR RETRIEVING COMPRESSED DATA FROM A STORAGE CLOUD - A cloud storage appliance receives one or more read requests for data stored in a storage cloud. The cloud storage appliance determines, for a time period, a total amount of bandwidth that will be used to retrieve the requested data from the storage cloud. The cloud storage appliance then determines an amount of remaining bandwidth for the time period. The cloud storage appliance retrieves the requested data from the storage cloud in the time period to satisfy the one or more read requests. The cloud storage appliance additionally retrieves a quantity of unrequested data from the storage cloud in the time period, wherein the quantity of retrieved unrequested data is based on the amount of remaining bandwidth for the time period. | 04-12-2012 |
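The bandwidth accounting in the abstract above — serve all pending read requests for the period, then spend whatever budget remains on speculative, unrequested data — can be sketched in a few lines. The function name, the `(name, size)` tuples, and the greedy fill policy are illustrative assumptions for this sketch only.

```python
def plan_retrieval(read_requests, bandwidth_budget, prefetch_candidates):
    """Sketch: requested data is always fetched; leftover bandwidth for the
    time period is filled greedily with unrequested (speculative) data."""
    requested_total = sum(size for _, size in read_requests)
    remaining = max(0, bandwidth_budget - requested_total)
    prefetch = []
    for name, size in prefetch_candidates:
        if size <= remaining:        # only prefetch what fits in the leftover budget
            prefetch.append(name)
            remaining -= size
    fetch = [name for name, _ in read_requests]
    return fetch, prefetch
```

With a budget of 100 units and 90 units of requested reads, only candidates totaling at most 10 units are prefetched; the rest wait for a less loaded period.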
20120096223 | LOW-POWER AUDIO DECODING AND PLAYBACK USING CACHED IMAGES - A particular method includes loading one or more memory images into a multi-way cache. The memory images are associated with an audio decoder, and the multi-way cache is accessible to a processor. Each of the memory images is sized not to exceed a page size of the multi-way cache. | 04-19-2012 |
20120096224 | OPPORTUNISTIC BLOCK TRANSMISSION WITH TIME CONSTRAINTS - A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed. | 04-19-2012 |
20120117323 | STORE QUEUE SUPPORTING ORDERED AND UNORDERED STORES - Some described embodiments provide a system that performs stores in a memory system. During operation, the system receives a store for a first thread. The system then creates an entry for the store in a store queue for the first thread. While creating the entry, the system requests a store-mark for a cache line for the store, wherein the store-mark for the cache line indicates that one or more store queue entries are waiting to be committed to the cache line. The system then receives a response to the request for the store-mark, wherein the response indicates that the cache line for the store is store-marked. Upon receiving the response, the system updates a set of ordered records for the first thread by inserting data for the store in the set of ordered records, wherein the set of ordered records include store-marked stores for the first thread. | 05-10-2012 |
20120117324 | VIRTUAL CACHE WINDOW HEADERS FOR LONG TERM ACCESS HISTORY - A method of virtual cache window headers for long term access history is disclosed. The method may include steps (A) to (C). Step (A) may receive a request at a circuit from a host to access an address in a memory. The circuit generally controls the memory and a cache. Step (B) may update the access history in a first of the headers in response to the request. The headers may divide an address space of the memory into a plurality of windows. Each window generally includes a plurality of subwindows. Each subwindow may be sized to match one of a plurality of cache lines in the cache. A first of the subwindows in a first of the windows may correspond to the address. Step (C) may copy data from the memory to the cache in response to the access history. | 05-10-2012 |
20120117325 | METHOD AND DEVICE FOR PROCESSING DATA CACHING - The present invention discloses a method and device for processing data caching, wherein the method includes: storing cached data into a memory; after reading out the cached data from the memory space address used for storing the cached data in the memory, judging whether the cached data that have been read out are the same as the cached data that were written; if so, deciding that the memory space for storing the cached data in the memory is normal; if not, deciding that the memory space for storing the cached data in the memory is abnormal; and, when cached data are stored during the subsequent data caching process, storing the cached data only into the memory spaces in the memory that are in the normal state. | 05-10-2012 |
20120124290 | Integrated Memory Management and Memory Management Method - An integrated memory management device according to an example of the invention comprises an acquiring unit acquiring a read destination logical address from a processor, an address conversion unit converting the read destination logical address into a read destination physical address of a non-volatile main memory, an access unit reading, from the non-volatile main memory, data that corresponds to the read destination physical address and has a size that is equal to a block size or an integer multiple of the page size of the non-volatile main memory, and a transmission unit transferring the read data to a cache memory of the processor having a cache size that depends on the block size or the integer multiple of the page size of the non-volatile main memory. | 05-17-2012 |
20120131276 | INFORMATION APPARATUS AND METHOD FOR CONTROLLING THE SAME - An object is to efficiently set configurations of a storage apparatus. Provided is an information apparatus communicably coupled to a storage apparatus 10, which validates a script executed by the storage apparatus 10 for setting a configuration of the storage apparatus 10, the information apparatus generating configurations of the storage apparatus 10 as each command described in a script is executed sequentially; and performing consistency validation on the script by determining whether or not each command described in the script is normally executable in a case where the command is executed on the assumption that the storage apparatus 10 has the configuration immediately before the execution. | 05-24-2012 |
20120131277 | ACTIVE MEMORY PROCESSOR SYSTEM - In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides reconfigurable dynamic cache which varies the operation strategy of cache memory based on the demand from the applications originating from different external general processor cores, along with functions of a virtualized hybrid core system. The system includes receiving a data request, selecting an operational mode based on the data request and a predefined selection algorithm, and processing the data request based on the selected operational mode. | 05-24-2012 |
20120131278 | DATA STORAGE APPARATUS AND METHODS - Data storage apparatus and methods are disclosed. A disclosed example data storage apparatus comprises a cache layer and a processor in communication with the cache layer. The processor is to dynamically enable or disable the cache layer via a cache layer enable line based on a data store access type. | 05-24-2012 |
20120137072 | HYBRID ACTIVE MEMORY PROCESSOR SYSTEM - In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides reconfigurable dynamic cache which varies the operation strategy of cache memory based on the demand from the applications originating from different external general processor cores, along with functions of a virtualized hybrid core system. The system includes receiving a data request, selecting an operational mode based on the data request and a predefined selection algorithm, and processing the data request based on the selected operational mode. The present invention is further configured to enable processing core and memory utilization by external systems through virtualization. | 05-31-2012 |
20120151140 | SYSTEMS AND METHODS FOR DESTAGING STORAGE TRACKS FROM CACHE - Systems and methods for destaging storage tracks from cache are provided. One system includes a cache and a processor coupled to the cache. The cache stores data in multiple storage tracks and each storage track includes an associated multi-bit counter. The processor is configured to perform the following method. One method includes writing data to the plurality of storage tracks and incrementing the multi-bit counter on each respective storage track by a predetermined amount each time the processor writes to a respective storage track. The method further includes scanning each of the storage tracks in each of multiple scan cycles, decrementing each multi-bit counter each scan cycle, and destaging each storage track including a zero count. Also provided are physical computer storage mediums including a computer program product for performing the above method. | 06-14-2012 |
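The counter scheme in the abstract above lends itself to a short sketch. This Python illustration assumes specific increment and saturation values, which the abstract leaves open:

```python
# Sketch of counter-based destaging: each write increments a per-track
# multi-bit counter (saturating), each scan cycle decrements every counter,
# and a dirty track is destaged when its counter reaches zero.

class TrackCache:
    def __init__(self, num_tracks, increment=2, max_count=3):
        self.counters = [0] * num_tracks
        self.dirty = [False] * num_tracks
        self.increment = increment
        self.max_count = max_count      # a multi-bit counter saturates here

    def write(self, track):
        self.dirty[track] = True
        self.counters[track] = min(self.counters[track] + self.increment,
                                   self.max_count)

    def scan_cycle(self):
        """Decrement every counter; return the tracks destaged this cycle."""
        destaged = []
        for t in range(len(self.counters)):
            if self.counters[t] > 0:
                self.counters[t] -= 1
            if self.dirty[t] and self.counters[t] == 0:
                destaged.append(t)
                self.dirty[t] = False   # destaged: no longer dirty
        return destaged
```

The effect is that recently or repeatedly written tracks survive more scan cycles before being destaged, which matches the abstract's intent of tying destage timing to write activity.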
20120151141 | DETERMINING SERVER WRITE ACTIVITY LEVELS TO USE TO ADJUST WRITE CACHE SIZE - Provided are a computer program product, system, and method for determining server write activity levels to use to adjust write cache size. Information on server write activity to the cache is gathered. The gathered information on write activity is processed to determine a server write activity level comprising one of multiple write activity levels indicating a level of write activity. The determined server write activity level is transmitted to a storage server having a write cache, wherein the storage server uses the determined server write activity level to determine whether to adjust a size of the storage server write cache. | 06-14-2012 |
20120151142 | TRANSFER OF BUS-BASED OPERATIONS TO DEMAND-SIDE MACHINES - An L2 cache, method and computer program product for transferring an inbound bus operation to a processor side handling machine. The method includes a bus operation handling machine accepting the inbound bus operation received over a system interconnect, the bus operation handling machine identifying a demand operation of the processor side handling machine that will complete the bus operation, the bus operation handling machine sending the identified demand operation to the processor side handling machine, and the processor side handling machine performing the identified demand operation. | 06-14-2012 |
20120151143 | TECHNIQUES FOR MANAGING DATA IN A STORAGE CONTROLLER - A technique for limiting an amount of write data stored in a cache memory includes determining a usable region of a non-volatile storage (NVS), determining an amount of write data in a current write request for the cache memory, and determining a failure boundary associated with the current write request. A count of the write data associated with the failure boundary is maintained. The current write request for the cache memory is rejected when a sum of the count of the write data associated with the failure boundary and the write data in the current write request exceeds a determined percentage of the usable region of the NVS. | 06-14-2012 |
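As a rough illustration of the rejection rule described above (the 25% share, the count structure, and the function shape are assumptions, not taken from the patent):

```python
# Sketch of limiting write data per failure boundary: a running count of
# cached write data is kept per boundary, and a new write is rejected when
# the boundary's count plus the request would exceed a fixed share of the
# usable non-volatile storage (NVS).

def accept_write(counts, boundary, request_size, usable_nvs, max_share=0.25):
    """Return True and update counts if the write fits under the boundary's cap."""
    limit = usable_nvs * max_share
    if counts.get(boundary, 0) + request_size > limit:
        return False                    # reject: boundary would exceed its share
    counts[boundary] = counts.get(boundary, 0) + request_size
    return True
```

Capping each failure boundary separately bounds how much cached write data could be lost to any single failure, which is the point of tracking counts per boundary.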
20120159071 | STORAGE SUBSYSTEM AND ITS LOGICAL UNIT PROCESSING METHOD - When a command to restore a logical unit is issued after a command to delete the logical unit, the logical unit is restored easily. | 06-21-2012 |
20120166727 | WEATHER ADAPTIVE ENVIRONMENTALLY HARDENED APPLIANCES - Embodiments of the present invention provide a method, system and computer program product for weather adaptive environmentally hardened appliances. In an embodiment of the invention, a method for weather adaptation of an environmentally hardened computing appliance includes determining a location of an environmentally hardened computing appliance. Thereafter, a weather forecast including a temperature forecast can be retrieved for a block of time at the location. As a result, a cache policy for a cache of the environmentally hardened computing appliance can be adjusted to account for the weather forecast. | 06-28-2012 |
20120173818 | DETECTING ADDRESS CONFLICTS IN A CACHE MEMORY SYSTEM - A cache memory includes a data array that stores memory blocks, a directory of contents of the data array, and a cache controller that controls access to the data array. The cache controller includes an address conflict detection system having a set-associative array configured to store at least tags of memory addresses of in-flight memory access transactions. The address conflict detection system accesses the set-associative array to detect if a target address of an incoming memory access transaction conflicts with that of an in-flight memory access transaction and determines whether to allow the incoming memory access transaction to proceed based upon the detection. | 07-05-2012 |
20120185648 | STORAGE IN TIERED ENVIRONMENT FOR COLDER DATA SEGMENTS - Exemplary method, system, and computer program embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap. | 07-19-2012 |
20120185649 | VOLUME RECORD DATA SET OPTIMIZATION APPARATUS AND METHOD - A method for optimizing a plurality of volume records stored in cache may include monitoring a volume including multiple data sets, wherein each data set is associated with a volume record, and each volume record is stored in a volume record data set. The method may include tracking read and write operations to each of the data sets over a period of time. The method may further include reorganizing the volume records in the volume record data set such that volume records for data sets with a larger number of read operations relative to write operations are grouped together, and volume records for data sets with a smaller number of read operations relative to write operations are grouped together. A corresponding apparatus and computer program product are also disclosed. | 07-19-2012 |
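The grouping step above can be sketched as a sort. The record shape `(name, reads, writes)` and the within-group ordering are invented for illustration:

```python
# Sketch of reorganizing volume records by access pattern: records whose
# tracked reads dominate writes are grouped together, and write-heavy
# records are grouped together, within the volume record data set.

def reorganize(records):
    """records: list of (name, reads, writes); read-heavy group comes first."""
    read_heavy = [r for r in records if r[1] >= r[2]]
    write_heavy = [r for r in records if r[1] < r[2]]
    # Within each group, order by how strongly the ratio favors the group.
    read_heavy.sort(key=lambda r: r[1] - r[2], reverse=True)
    write_heavy.sort(key=lambda r: r[2] - r[1], reverse=True)
    return read_heavy + write_heavy
```

Grouping records with similar read/write behavior improves locality when the volume record data set is cached, since a read-dominated region can stay clean while a write-dominated region absorbs the updates.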
20120191910 | PROCESSING CIRCUIT AND METHOD FOR READING DATA - A processing circuit includes a processing unit and a data buffer. When the processing unit receives a load instruction and determines that the load instruction has a load-use condition, the processing unit stores specific data into the data buffer, where the specific data is loaded by executing the load instruction. | 07-26-2012 |
20120191911 | SYSTEM AND METHOD FOR INCREASING CACHE SIZE - A system and method for increasing cache size is provided. Generally, the system contains a memory and a processor. The processor is configured by the memory to perform the steps of: categorizing storage blocks within a storage device as within a first category of storage blocks if the storage blocks are available to the system for storing data when needed; categorizing storage blocks within the storage device as within a second category of storage blocks if the storage blocks contain application data therein; and categorizing storage blocks within the storage device as within a third category of storage blocks if the storage blocks are storing cached data and are available for storing application data if no first category of storage blocks are available to the system. | 07-26-2012 |
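A minimal sketch of the three categories above and the fallback allocation they enable, with flag names invented for illustration:

```python
# Sketch of the three-way block categorization: free blocks (category 1),
# blocks holding application data (category 2), and blocks holding cached
# data that can be reclaimed for application data when no free blocks remain
# (category 3).

def categorize(block):
    """block: dict with 'has_app_data' and 'has_cached_data' booleans."""
    if block["has_app_data"]:
        return 2            # second category: application data
    if block["has_cached_data"]:
        return 3            # third category: cache, reclaimable on demand
    return 1                # first category: free for any use

def allocate_for_app(blocks):
    """Prefer a free block; fall back to evicting a cache block."""
    for b in blocks:
        if categorize(b) == 1:
            return b
    for b in blocks:
        if categorize(b) == 3:
            b["has_cached_data"] = False    # evict cached data to make room
            return b
    return None
```

The third category is what "increases" the cache: otherwise-idle free space holds cached data, yet remains allocatable to the application because eviction demotes it back to the first category.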
20120198156 | SELECTIVE CACHE ACCESS CONTROL APPARATUS AND METHOD THEREOF - A data processor is disclosed that definitively determines an effective address being calculated and decoded will be associated with an address range that includes a memory local to a data processor unit, and will disable a cache access based upon a comparison between a portion of a base address and a corresponding portion of an effective address input operand. Access to the local memory can be accomplished through a first port of the local memory when it is definitively determined that the effective address will be associated with an address range. Access to the local memory cannot be accomplished through the first port of the local memory when it is not definitively determined that the effective address will be associated with the address range. | 08-02-2012 |
20120198157 | GUEST INSTRUCTION TO NATIVE INSTRUCTION RANGE BASED MAPPING USING A CONVERSION LOOK ASIDE BUFFER OF A PROCESSOR - A method for translating instructions for a processor. The method includes accessing a plurality of guest instructions that comprise multiple guest branch instructions, and assembling the plurality of guest instructions into a guest instruction block. The guest instruction block is converted into a corresponding native conversion block. The native conversion block is stored into a native cache. A mapping of the guest instruction block to corresponding native conversion block is stored in a conversion look aside buffer. Upon a subsequent request for a guest instruction, the conversion look aside buffer is indexed to determine whether a hit occurred, wherein the mapping indicates whether the guest instruction has a corresponding converted native instruction in the native cache. The converted native instruction is forwarded for execution in response to the hit. | 08-02-2012 |
20120198158 | Multi-Channel Cache Memory - A cache memory including: a plurality of parallel input ports configured to receive, in parallel, memory access requests wherein each parallel input port is operable to receive a memory access request for any one of a plurality of processing units; and a plurality of cache blocks wherein each cache block is configured to receive memory access requests from a unique one of the plurality of input ports such that there is a one-to-one mapping between the plurality of parallel input ports and the plurality of cache blocks and wherein each of the plurality of cache blocks is configured to serve a unique portion of an address space of the memory. | 08-02-2012 |
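The one-to-one port-to-block mapping above can be illustrated as follows. The interleaving by low-order address bits is an assumption; the abstract only requires that each cache block serve a unique portion of the address space:

```python
# Sketch of a multi-channel cache: one cache block per parallel input port
# (a one-to-one mapping), with each block serving a disjoint, interleaved
# slice of the memory address space.

class MultiChannelCache:
    def __init__(self, num_channels=4):
        self.num_channels = num_channels
        # One cache block per input port.
        self.blocks = [dict() for _ in range(num_channels)]

    def channel_for(self, address):
        # Each block serves a unique, interleaved portion of the address space.
        return address % self.num_channels

    def access(self, port, address, value=None):
        """Requests arriving on `port` must target that port's address slice."""
        assert self.channel_for(address) == port, "address outside port's slice"
        block = self.blocks[port]
        if value is not None:
            block[address] = value
        return block.get(address)
```

Because the serving block is a pure function of the address, requests from different processing units never contend for the same block unless they touch the same slice, which is what lets the ports operate in parallel.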
20120198159 | INFORMATION PROCESSING DEVICE - An information processing device of the invention includes a measurement section which detects the changes in the uses of a built-in memory and an external memory, and a control section which monitors the measurement result from the measurement section, changes the configuration of the built-in memory, transfers the data stored in the built-in memory and the external memory, and changes the external memory area and the built-in memory area used by the CPU and other bus master devices, wherein it is possible to detect the changes in the memory utilization efficiency that cannot be predicted by static analysis, and to maintain an optimal memory configuration. | 08-02-2012 |
20120203967 | REDUCING INTERPROCESSOR COMMUNICATIONS PURSUANT TO UPDATING OF A STORAGE KEY - Processing within a multiprocessor computer system is facilitated by: deciding by a processor, pursuant to processing of a request to update a previous storage key to a new storage key, whether to purge the previous storage key from, or update the previous storage key in, local processor cache of the multiprocessor computer system. The deciding includes comparing a bit value(s) of one or more required components of the previous storage key to respective predefined allowed stale value(s) for the required component(s), and leaving the previous storage key in local processor cache if the bit value(s) of the required component(s) in the previous storage key equals the respective predefined allowed stale value(s) for the required component(s). By selectively leaving the previous storage key in local processor cache, interprocessor communication pursuant to processing of the request to update the previous storage key to the new storage key is minimized. | 08-09-2012 |
20120210064 | EXTENDER STORAGE POOL SYSTEM - Various embodiments for managing data in a computing storage environment by a processor device are provided. In one such embodiment, by way of example only, an extender storage pool system is configured for at least one of a source and a target storage pool to expand an available storage capacity for the at least one of the source and the target storage pool. A most recent snapshot of the data is sent to the extender storage pool system. The most recent snapshot of the data is stored on the extender storage pool system as a last replicated snapshot of the data. | 08-16-2012 |
20120210065 | TECHNIQUES FOR MANAGING MEMORY IN A MULTIPROCESSOR ARCHITECTURE - Techniques for managing memory in a multiprocessor architecture are presented. Each processor of the multiprocessor architecture includes its own local memory. When data is to be removed from a particular local memory or written to storage that data is transitioned to another local memory associated with a different processor of the multiprocessor architecture. If the data is then requested from the processor, which originally had the data, then the data is acquired from a local memory of the particular processor that received and now has the data. | 08-16-2012 |
20120210066 | SYSTEMS AND METHODS FOR A FILE-LEVEL CACHE - A multi-level cache comprises a plurality of cache levels, each configured to cache I/O request data pertaining to I/O requests of a different respective type and/or granularity. The multi-level cache may comprise a file-level cache that is configured to cache I/O request data at a file-level of granularity. A file-level cache policy may comprise file selection criteria to distinguish cacheable files from non-cacheable files. The file-level cache may monitor I/O requests within a storage stack, and may service I/O requests from a cache device. | 08-16-2012 |
20120210067 | MIRRORING DEVICE AND MIRRORING RECOVERY METHOD - To provide a mirroring device that restores a mirror without halting other access commands during the restoring process and without needing a contention control function dedicated to the restoring process. | 08-16-2012 |
20120215979 | CACHE FOR STORING MULTIPLE FORMS OF INFORMATION AND A METHOD FOR CONTROLLING A CACHE STORING MULTIPLE FORMS OF INFORMATION - A cache is provided, including a data array having a plurality of entries configured to store a plurality of different types of data, and a tag array having a plurality of entries and configured to store a tag of the data stored at a corresponding entry in the data array and further configured to store an identification of the type of data stored in the corresponding entry in the data array. | 08-23-2012 |
20120215980 | RESTORING DATA BACKED UP IN A CONTENT ADDRESSED STORAGE (CAS) SYSTEM - In one example, a method of restoring data backed up in a content addressed storage system may include retrieving a recipe and appended storage addresses from a first storage node of content addressed storage, where the recipe may include instructions for generating a data structure from two or more data pieces, and the two or more data pieces may be resident in locations identified by the appended storage addresses. The example method may further include populating a cache with the appended storage addresses for the two or more data pieces. As well the method may further include retrieving, and populating the cache with, the two or more data pieces without looking up a storage address for any of the two or more data pieces in an index, and restoring the data structure using the retrieved two or more data pieces in the cache. | 08-23-2012 |
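A hedged sketch of that restore flow, with a toy recipe format (simple concatenation order) standing in for the real instructions:

```python
# Sketch of restoring from content addressed storage (CAS): read the recipe
# and its appended piece addresses from one storage node, prime a cache with
# those addresses, and fetch the pieces directly -- no index lookups needed.

def restore(storage, recipe_addr):
    """storage: address -> content; the recipe node holds (instructions, addresses)."""
    instructions, piece_addrs = storage[recipe_addr]
    cache = {}
    for addr in piece_addrs:        # addresses came appended to the recipe,
        cache[addr] = storage[addr]  # so no index lookup is required
    # Here the 'instructions' are assumed to be just a concatenation order.
    return b"".join(cache[a] for a in instructions)
```

The appended addresses are what make the index lookup unnecessary: the single recipe read yields everything needed to locate the data pieces.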
20120215981 | RECYCLING OF CACHE CONTENT - A method of operating a storage system comprises detecting a cut in an external power supply, switching to a local power supply, preventing receipt of input/output commands, copying content of cache memory to a local storage device and marking the content of the cache memory that has been copied to the local storage device. When a resumption of the external power supply is detected, the method continues by charging the local power supply, copying the content of the local storage device to the cache memory, processing the content of the cache memory with respect to at least one storage volume and receiving input/output commands. When detecting a second cut in the external power supply, the system switches to the local power supply, prevents receipt of input/output commands, and copies to the local storage device only the content of the cache memory that is not marked as present. | 08-23-2012 |
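The marking logic in the abstract above can be sketched as follows; the page-level granularity and all names are assumptions:

```python
# Sketch of recycling cache content across power cuts: pages copied to the
# local storage device are marked, and on a later power cut only unmarked
# pages (those written since the last copy) need to be copied again.

class BatteryBackedCache:
    def __init__(self):
        self.cache = {}             # page id -> content
        self.marked = set()         # pages whose local-storage copy is current
        self.local_storage = {}

    def write(self, page, content):
        self.cache[page] = content
        self.marked.discard(page)   # cache copy is now newer than the backup

    def on_power_cut(self):
        """Copy only unmarked pages to local storage, then mark them."""
        copied = [p for p in self.cache if p not in self.marked]
        for p in copied:
            self.local_storage[p] = self.cache[p]
            self.marked.add(p)
        return sorted(copied)

    def on_power_resume(self):
        self.cache.update(self.local_storage)
```

On the second power cut, the copy is much smaller than the first because untouched pages remain marked, which is the point of keeping the marks across the resume.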
20120221792 | OPPORTUNISTIC BLOCK TRANSMISSION WITH TIME CONSTRAINTS - A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed. | 08-30-2012 |
20120226860 | COMPUTER SYSTEM AND DATA MIGRATION METHOD - A path is formed between a host computer and storage apparatuses without depending on the configuration of the host computer and a network, and a plurality of volumes having a copy function are migrated between storage apparatuses while keeping the latest data. | 09-06-2012 |
20120226861 | STORAGE CONTROLLER AND METHOD OF CONTROLLING STORAGE CONTROLLER - Provided is a storage controller and method of controlling same which, if part of a storage area of a local memory is used as cache memory, enable an access conflict for access to a parallel bus connected to the local memory to be avoided. | 09-06-2012 |
20120226862 | EVENT TRANSPORT SYSTEM - A method for communicating events from an event source to an event consumer is disclosed herein. In one embodiment, such a method includes monitoring an event generation rate associated with an event source. The method further determines if the event generation rate exceeds a threshold rate. Upon receiving an event from the event source, the method generates a condensed version of the event if the event generation rate exceeds the threshold rate. The method then communicates the condensed version to an event consumer. A corresponding system and computer program product are also disclosed. | 09-06-2012 |
20120226863 | INFORMATION PROCESSING DEVICE, MEMORY ACCESS CONTROL DEVICE, AND ADDRESS GENERATION METHOD THEREOF - An information processing device according to the present invention includes an operation unit that outputs an access request, a storage unit including a plurality of connection ports and a plurality of memories capable of a simultaneous parallel process that has an access unit of a plurality of word lengths for the connection ports, and a memory access control unit that distributes a plurality of access addresses corresponding to the access request received for each processing cycle from the operation unit, and generates, for each of the connection ports, an in-port address including a discontinuous word by one access unit. | 09-06-2012 |
20120226864 | TIERED DATA MANAGEMENT METHOD AND SYSTEM FOR HIGH PERFORMANCE DATA MONITORING - A method for managing memory in a system for an application, comprising: assigning a first block (i.e., a big block) of the memory to the application when the application is initiated, the first block having a first size, the first block being assigned to the application until the application is terminated; dividing the first block into second blocks (i.e., intermediate blocks), each second block having a same second size, a second block of the second blocks for containing data for one or more components of a single data structure to be accessed by one thread of the application at a time; and, dividing the second block into third blocks (i.e., small blocks), each third block having a same third size, a third block of the third blocks for containing data for a single component of the single data structure. | 09-06-2012 |
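A minimal sketch of the three-tier division described above, with block sizes chosen arbitrarily for illustration:

```python
# Sketch of tiered memory blocks: one big block per application, divided into
# equal intermediate blocks (one data structure each, accessed by one thread
# at a time), each divided into equal small blocks (one component each).

BIG = 1024          # bytes assigned to the application at startup (assumed)
INTERMEDIATE = 256  # one data structure per intermediate block (assumed)
SMALL = 64          # one component of the data structure (assumed)

def locate(component_index):
    """Map a global component index to (intermediate block, small block, offset)."""
    per_intermediate = INTERMEDIATE // SMALL
    inter = component_index // per_intermediate
    small = component_index % per_intermediate
    offset = inter * INTERMEDIATE + small * SMALL
    assert offset + SMALL <= BIG, "index outside the application's big block"
    return inter, small, offset
```

Because all three sizes are fixed for the lifetime of the application, locating a component is pure arithmetic with no allocator bookkeeping on the hot path, which suits the high-performance monitoring use case the abstract names.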
20120233405 | Caching Method and System for Video Coding - A method of caching reference data in a reference data cache is provided that includes receiving an address of a reference data block in the reference data cache, wherein the address includes an x coordinate and a y coordinate of the reference data block in a reference block of pixels and a reference block identifier specifying which of a plurality of reference blocks of pixels includes the reference data block, computing an index of a set of cache lines in the reference data cache using bits from the x coordinate and bits from the y coordinate, using the index and a tag comprising the reference block identifier to determine whether the reference data block is in the set of cache lines, and retrieving the reference data block from reference data storage when the reference data block is not in the set of cache lines. | 09-13-2012 |
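The set-index computation above can be illustrated as follows; the number of x and y bits used is an assumption:

```python
# Sketch of computing a cache-set index for a video reference-data cache from
# bits of the block's x and y coordinates; the tag carries the
# reference-block identifier.

X_BITS = 2  # low-order x bits used in the index (assumed)
Y_BITS = 2  # low-order y bits used in the index (assumed)

def set_index(x, y):
    """Concatenate low-order y bits above low-order x bits."""
    return ((y & ((1 << Y_BITS) - 1)) << X_BITS) | (x & ((1 << X_BITS) - 1))

def tag(ref_block_id):
    """The tag comprises the reference-block identifier."""
    return ref_block_id
```

Mixing x and y bits into the index spreads spatially adjacent reference blocks across different sets, so a 2-D motion-compensation access pattern does not thrash a single set.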
20120233406 | STORAGE APPARATUS, AND CONTROL METHOD AND CONTROL APPARATUS THEREFOR - A control apparatus, coupled to a storage medium via communication links, controls data write operations to the storage medium. A cache memory is configured to store a temporary copy of first data written in the storage medium. A processor receives second data with which the first data in the storage medium is to be updated, and determines whether the received second data coincides with the first data, based on comparison data read out of the storage medium, when no copy of the first data is found in the cache memory. When the second data is determined to coincide with the first data, the processor determines not to write the second data into the storage medium. | 09-13-2012 |
20120239882 | CONTROL APPARATUS AND METHOD, AND STORAGE APPARATUS - In a storage apparatus, in the case where a data block to be written to a storage medium is a zero data block containing only zero data, a zero data information memory stores zero data identification information indicating that the data block is a zero data block. A control apparatus receives a data block from an access requesting apparatus in association with a write request issued by the access requesting apparatus for writing the data block a specified number of times to a predetermined storage area of the storage medium, and when determining that the data block is a zero data block containing only zero data, sets zero data identification information in the zero data information memory, and when completing the setting of the zero data identification information, sends the access requesting apparatus a completion notice of the writing to the storage medium. | 09-20-2012 |
20120246405 | DELAYED FREEING OF DATA STORAGE BLOCKS - A memory block that includes a physical storage page holding data of a data storage application in a page buffer can be cached in a page buffer upon the memory block being designated for a change in status from a used status to a shadow status. Upon occurrence of a trigger event, all pages stored in the page buffer can be processed in a first batch process that can include converting each of the pages in the page buffer from the used status to the shadow status and emptying the page buffer. Upon receiving a call to free the pages in the page buffer from the shadow status to a free status without the trigger event occurring, the pages in the page buffer can be converted from the used status directly to the free status in a second batch process. Related methods, systems, and articles of manufacture are also disclosed. | 09-27-2012 |
20120254538 | STORAGE APPARATUS AND COMPUTER PROGRAM PRODUCT - According to an embodiment, a storage apparatus includes a storage unit configured to store a plurality of pieces of data; a communication unit configured to communicate with a plurality of external devices each of which includes a first cache memory in which at least part of the plurality of pieces of data are stored; a write unit configured to write data into the storage unit when the communication unit receives a write request, transmitted from one of the plurality of external devices, to write the data; and a controller configured to control the communication unit such that the data is transmitted to another external device different from the one external device that has issued the write request. | 10-04-2012 |
20120254539 | SYSTEMS AND METHODS FOR MANAGING CACHE DESTAGE SCAN TIMES - A system includes a cache and a processor. The processor is configured to utilize a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilize a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. One method includes utilizing a first thread to continually determine a desired scan time for scanning the plurality of storage tracks in the cache and utilizing a second thread to continually control an actual scan time of the plurality of storage tracks in the cache based on the continually determined desired scan time. | 10-04-2012 |
20120265937 | DISTRIBUTED STORAGE NETWORK INCLUDING MEMORY DIVERSITY - A dispersed storage (DS) unit includes a processing module and a plurality of hard drives. The processing module is operable to maintain states for at least some of the plurality of hard drives. The processing module is further operable to receive a memory access request regarding an encoded data slice and identify a hard drive of the plurality of hard drives based on the memory access request. The processing module is further operable to determine a state of the hard drive. When the hard drive is in a read state and the memory access request is a write request, the processing module is operable to queue the write request, change from the read state to a write state in accordance with a state transition process, and, when in the write state, perform the write request to store the encoded data slice in the hard drive. | 10-18-2012 |
20120272002 | Detection and Control of Resource Congestion by a Number of Processors - In an embodiment, a system includes a resource. The system also includes a first processor having a load/store functional unit. The load/store functional unit is to attempt to access the resource based on access requests. The first processor includes a congestion detection logic to detect congestion of access of the resource based on a consecutive number of negative acknowledgements received in response to the access requests prior to receipt of a positive acknowledgment in response to one of the access requests within a first time period. | 10-25-2012 |
20120278555 | OPPORTUNISTIC BLOCK TRANSMISSION WITH TIME CONSTRAINTS - A technique for determining a data window size allows a set of predicted blocks to be transmitted along with requested blocks. A stream enabled application executing in a virtual execution environment may use the blocks when needed. | 11-01-2012 |
20120290789 | PREFERENTIALLY ACCELERATING APPLICATIONS IN A MULTI-TENANT STORAGE SYSTEM VIA UTILITY DRIVEN DATA CACHING - A system may include multi-tenant electronic storage for hosting a plurality of applications having heterogeneous Input/Output (I/O) characteristics, relative importance levels, and Service-Level Objectives (SLOs). The system may also include a management interface for managing the multi-tenant electronic storage, where the management interface is configured to receive a storage resource arbitration policy based on at least one of a workload type, an SLO, or a priority for an application. The system may further include control programming configured to receive an association of a particular I/O stream with a particular application generating the I/O stream, where the association of the I/O stream with the application was determined by analyzing at least one I/O characteristic of the I/O stream, and determine at least one of a cache size or a caching policy for the application based on the association of the I/O stream with the application and the storage resource arbitration policy. | 11-15-2012 |
20120290790 | METHOD, SERVER, COMPUTER PROGRAM AND COMPUTER PROGRAM PRODUCT FOR CACHING - Presented is a method comprising the steps of: determining, in a caching server of a telecommunication network, a user profile to analyse; obtaining, in the caching server, a group of user profiles; obtaining correlation measurements for each user profile in the group of user profiles in relation to the user profile to analyse; and calculating a content caching priority for at least one piece of content of a content history associated with the group of user profiles, taking the correlation measurements into account. A corresponding server, computer program and computer program product are also provided. | 11-15-2012 |
20120290791 | PROCESSOR AND METHOD FOR EXECUTING LOAD OPERATION THEREOF - A processor and a method for executing load operation and store operation thereof are provided. The processor includes a data cache and a store buffer. When executing a store operation, if the address of the store operation is the same as the address of an existing entry in the store buffer, the data of the store operation is merged into the existing entry. When executing a load operation, if there is a memory dependency between an existing entry in the store buffer and the load operation, and the existing entry includes the complete data required by the load operation, the complete data is provided by the existing entry alone. If the existing entry does not include the complete data, the complete data is generated by assembling the existing entry and a corresponding entry in the data cache. | 11-15-2012 |
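The store-merge and load-forwarding behavior in the entry above can be modeled roughly as below. Byte-level granularity, the entry layout, and the dictionary-based data cache are all illustrative assumptions, not the hardware design itself.

```python
# Store buffer model: stores to the same address merge into the existing
# entry; a load is served from the store buffer alone when it holds the
# complete data, otherwise assembled with the data-cache copy.

class StoreBuffer:
    def __init__(self):
        self.entries = {}   # addr -> {byte_offset: value}

    def store(self, addr, data):
        """data: {byte_offset: value}; merges into any existing entry."""
        self.entries.setdefault(addr, {}).update(data)

    def load(self, addr, width, data_cache):
        entry = self.entries.get(addr, {})
        if all(off in entry for off in range(width)):
            return [entry[off] for off in range(width)]   # entry alone suffices
        cached = data_cache[addr]
        # Assemble: buffered bytes win, data cache fills the gaps.
        return [entry.get(off, cached[off]) for off in range(width)]

sb = StoreBuffer()
sb.store(0x100, {0: 0xAA})
sb.store(0x100, {1: 0xBB})          # same address: merged into one entry
cache = {0x100: [0x00, 0x00, 0x00, 0x00]}
word = sb.load(0x100, 4, cache)     # bytes 2-3 come from the data cache
```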
20120290792 | MEDIA DEVICE WITH INTELLIGENT CACHE UTILIZATION - A portable media device and a method for operating a portable media device are disclosed. According to one aspect, a battery-powered portable media device can manage use of a mass storage device to efficiently utilize battery power. By providing a cache memory and loading the cache memory so as to provide skip support, battery power for the portable media device can be conserved (i.e., efficiently consumed). According to another aspect, a portable media device can operate efficiently in a seek mode. The seek mode is an operational mode of the portable media device in which the portable media device automatically scans through media items to assist a user in selecting a desired one of the media items. | 11-15-2012 |
20120303895 | HANDLING HIGH PRIORITY REQUESTS IN A SEQUENTIAL ACCESS STORAGE DEVICE HAVING A NON-VOLATILE STORAGE CACHE - Provided are a computer program product, system, and method for handling high priority requests in a sequential access storage device. Received modified tracks for write requests are cached in a non-volatile storage device integrated with the sequential access storage device. A destage request is added to a request queue for a received write request having modified tracks for the sequential access storage medium cached in the non-volatile storage device. A read request indicating a priority is received. A determination is made of a priority of the read request as having a first priority or a second priority. The read request is added to the request queue in response to determining that the determined priority is the first priority. The read request is processed at a higher priority than the read and destage requests in the request queue in response to determining that the determined priority is the second priority. | 11-29-2012 |
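A rough sketch of the two-priority scheme above: first-priority reads and destages join the request queue in order, while second-priority reads are processed ahead of everything already queued. The deque-based layout is an assumption for illustration.

```python
from collections import deque

class RequestQueue:
    def __init__(self):
        self.queue = deque()

    def add_destage(self, track):
        self.queue.append(("destage", track))

    def add_read(self, track, priority):
        if priority == 1:
            self.queue.append(("read", track))      # normal queue ordering
        else:
            self.queue.appendleft(("read", track))  # processed before the rest

    def next_request(self):
        return self.queue.popleft()

q = RequestQueue()
q.add_destage("t1")
q.add_read("t2", priority=1)
q.add_read("t3", priority=2)    # high priority: served before t1 and t2
first = q.next_request()
```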
20120303896 | INTELLIGENT CACHING - Intelligent caching includes defining a cache policy for a data source, selecting parameters of data in the data source to monitor, the parameters forming a portion of the cache policy, and monitoring the data source for an event based on the cache policy. Upon an occurrence of an event, the intelligent caching also includes retrieving target data subject to the cache policy from a first location and moving the target data to a second location. | 11-29-2012 |
20120303897 | CONFIGURABLE SET ASSOCIATIVE CACHE WAY ARCHITECTURE - System and method for dynamically configuring a set associative cache way architecture based on an application is disclosed. In one embodiment, a memory size required for the application is determined by a cache controller. Further, a required cache way size and a required number of cache ways in a set associative cache way are computed based on the determined memory size. Furthermore, the set associative cache way architecture is configured to power off selected areas of the set associative cache way based on the computed required cache way size and the required number of cache ways for running the application. | 11-29-2012 |
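The sizing computation in the entry above can be sketched as a ceiling division of the application's memory footprint by the way size; unused ways are then candidates for power-off. The fixed geometry constants (16 KiB ways, 8 ways) are invented for illustration.

```python
# Compute a cache-way configuration from an application's memory footprint.

def configure_ways(required_bytes, way_size=16 * 1024, max_ways=8):
    """Return (ways_powered_on, ways_powered_off) for the footprint."""
    ways_needed = -(-required_bytes // way_size)   # ceiling division
    ways_on = min(max(ways_needed, 1), max_ways)
    return ways_on, max_ways - ways_on

# A 40 KiB footprint needs 3 of the 8 ways; the other 5 can be powered off.
on, off = configure_ways(40 * 1024)
```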
20120311261 | STORAGE SYSTEM AND STORAGE CONTROL METHOD - A storage system is provided with a memory region, a cache memory region, and a processor. The memory region stores time relation information that indicates the time relationship between a data element that has been stored into the cache memory region and is to be written to the logical region, and a snapshot acquisition point of time for the primary volume. The processor judges whether or not the data element that has been stored into the cache memory region is a snapshot configuration element based on the time relation information for the data element that is to be written to a logical region of a write destination that conforms to the write request that specifies the primary volume and that has been stored into the cache memory region. In the case in which the result of the judgment is positive, the processor saves the data element to the secondary volume for holding a snapshot image in which the snapshot configuration element is a configuration element, and a data element of a write target is then stored into the cache memory region. | 12-06-2012 |
20120311262 | MEMORY CELL PRESETTING FOR IMPROVED MEMORY PERFORMANCE - Memory cell presetting for improved performance including a system that includes a memory, a cache, and a memory controller. The memory includes memory lines made up of memory cells. The cache includes cache lines that correspond to a subset of the memory lines. The memory controller is in communication with the memory and the cache. The memory controller is configured to perform a method that includes scheduling a request to set memory cells of a memory line to a common specified state in response to a cache line attaining a dirty state. | 12-06-2012 |
20120311263 | SECTOR-BASED WRITE FILTERING WITH SELECTIVE FILE AND REGISTRY EXCLUSIONS - A method includes mounting a persistent volume of a data storage device of an electronic device. The persistent volume is based on a protected volume stored at the data storage device. The method also includes accessing the persistent volume to enable servicing access to the data storage device of the electronic device. | 12-06-2012 |
20120311264 | DATA MANAGEMENT METHOD, DEVICE, AND DATA CHIP - The present invention discloses a data management method, device and data chip. The data management method includes: receiving write data of a write request; writing the write data according to a current data management mode, where when the data management mode is a first mode, the write data of the write request is stored in an on-chip cache and when the data management mode is a second mode, the write data of the write request is stored in the on-chip cache and an off-chip memory chip; and receiving a read request of the write data, searching for the write data in the on-chip cache according to the read request, and if the write data cannot be obtained from the on-chip cache, obtaining the write data from the off-chip memory chip, thereby reducing power consumption for data access to external memory chips. | 12-06-2012 |
20120311265 | Read and Write Aware Cache - A mechanism is provided in a cache for providing a read and write aware cache. The mechanism partitions a large cache into a read-often region and a write-often region. The mechanism considers read/write frequency in a non-uniform cache architecture replacement policy. A frequently written cache line is placed in one of the farther banks. A frequently read cache line is placed in one of the closer banks. The size ratio between read-often and write-often regions may be static or dynamic. The boundary between the read-often region and the write-often region may be distinct or fuzzy. | 12-06-2012 |
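A hedged sketch of the read/write-aware placement policy above: lines written more often than they are read go to farther banks, the rest to closer banks. The bank names and the simple majority test are invented for illustration.

```python
# Read/write-aware NUCA placement: pick a bank region by dominant access type.

def choose_bank(reads, writes, close_banks, far_banks):
    """Frequently written lines go far; frequently read lines go close."""
    region = far_banks if writes > reads else close_banks
    return region[0]   # first free bank in the chosen region (simplified)

close = ["bank0", "bank1"]   # low-latency banks: the read-often region
far = ["bank6", "bank7"]     # high-latency banks: the write-often region

hot_read_line = choose_bank(reads=90, writes=5, close_banks=close, far_banks=far)
hot_write_line = choose_bank(reads=3, writes=40, close_banks=close, far_banks=far)
```

A dynamic size ratio, as the abstract allows, would amount to moving banks between the `close` and `far` lists at run time.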
20120317359 | PROCESSING A REQUEST TO RESTORE DEDUPLICATED DATA - For a restore request, at least a portion of a recipe that refers to chunks is read. Based on the recipe portion, a container having plural chunks is retrieved. From the recipe portion, it is identified which of the plural chunks of the container to save, where some of the chunks identified do not, at a time of the identifying, have to be presently communicated to a requester. The identified chunks are stored in a memory area from which chunks are read for the restore operation. | 12-13-2012 |
20120324163 | STORAGE CONTROL APPARATUS AND STORAGE CONTROL METHOD - In one of the storage control apparatuses in the remote copy system which performs asynchronous remote copy between the storage control apparatuses, virtual logical volumes complying with Thin Provisioning are adopted as journal volumes to which journals are written. The controller in the one of the storage control apparatuses assigns a smaller actual area based on the storage apparatus than in case of assignment to the entire area of the journal volume, and adds a journal to the assigned actual area. If a new journal cannot be added, the controller performs wraparound, that is, overwrites the oldest journal in the assigned actual area by the new journal. | 12-20-2012 |
20120324164 | Programmable Memory Address - A method includes storing defined memory address segments and defined memory address segment attributes for a processor. The processor is operated in accordance with the defined memory address segments and defined memory address segment attributes. | 12-20-2012 |
20120324165 | MEMORY CONTROL DEVICE AND MEMORY CONTROL METHOD - According to one embodiment, a memory control device includes: a buffer memory; a cache memory performing caching for the buffer memory on a unit-data-by-unit-data basis; and an adding module adding ByteECC data to the unit data. | 12-20-2012 |
20120331227 | FACILITATING IMPLEMENTATION, AT LEAST IN PART, OF AT LEAST ONE CACHE MANAGEMENT POLICY - An embodiment may include circuitry to facilitate implementation, at least in part, of at least one cache management policy. The at least one policy may be based, at least in part, upon respective priorities of respective classifications of respective network traffic. The at least one policy may concern, at least in part, caching of respective information associated, at least in part, with the respective network traffic belonging to the respective classifications. Many alternatives, variations, and modifications are possible. | 12-27-2012 |
20120331228 | DYNAMIC CONTENT CACHING - A system for caching content including a server supplying at least one of static and non-static content elements, content distinguishing functionality operative to categorize elements of the non-static content as being either dynamic content elements or pseudodynamic content elements, and caching functionality operative to cache the pseudodynamic content elements. The static content elements are content elements which are identified by at least one of the server and metadata associated with the content elements as being expected not to change, the non-static content elements are content elements which are not identified by the server and/or by metadata associated with the content elements as being static content elements, the pseudodynamic content elements are non-static content elements which, based on observation, are not expected to change, and the dynamic content elements are non-static content elements which are not pseudodynamic. | 12-27-2012 |
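An illustrative classifier for the categorization described above: a non-static element whose observed responses never change is treated as pseudodynamic and becomes cacheable. The observation count and the byte-equality test are assumptions; the publication only requires that the expectation be "based on observation".

```python
# Categorize a non-static content element as dynamic or pseudodynamic
# from a history of observed responses for the same element.

def categorize(observations, min_samples=3):
    """observations: list of response bodies seen for one non-static element."""
    if len(observations) < min_samples:
        return "dynamic"          # not enough evidence to cache yet
    if all(o == observations[0] for o in observations):
        return "pseudodynamic"    # observed stable, so expected not to change
    return "dynamic"

stable = categorize(["<p>v1</p>"] * 5)
changing = categorize(["<p>v1</p>", "<p>v2</p>", "<p>v3</p>"])
```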
20120331229 | LOAD BALANCING BASED UPON DATA USAGE - A method of load balancing can include segmenting data from a plurality of servers into usage patterns determined from accesses to the data. Items of the data can be cached in one or more servers of the plurality of servers according to the usage patterns. Each of the plurality of servers can be designated to cache items of the data of a particular usage pattern. A reference to an item of the data cached in one of the plurality of servers can be updated to specify the server of the plurality of servers within which the item is cached. | 12-27-2012 |
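The designation and reference-update steps above can be sketched as follows. The pattern names, the round-robin assignment, and the URL-style reference format are all invented for illustration.

```python
# Designate one server per usage pattern, then rewrite item references to
# point at the server that caches that pattern's items.

def assign_servers(patterns, servers):
    """Designate one server per usage pattern (round-robin over servers)."""
    return {p: servers[i % len(servers)] for i, p in enumerate(patterns)}

def update_reference(item, pattern, designation):
    """Rewrite a reference so it names the server caching the item."""
    server = designation[pattern]
    return f"http://{server}/cache/{item}"

designation = assign_servers(["read-heavy", "write-heavy"], ["s1", "s2"])
ref = update_reference("item42", "read-heavy", designation)
```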
20120331230 | CONTROL BLOCK LINKAGE FOR DATABASE CONVERTER HANDLING - A system to load a plurality of converter pages of a datastore into a database cache, the plurality of converter pages comprising a plurality of converter inner pages, and a plurality of converter leaf pages, to allocate a control block in the database cache for each of the plurality of converter inner pages, the control block of a converter inner page comprising a pointer to a control block of a parent converter inner page and a pointer to a control block of each child converter page of the converter inner page, and to allocate a control block in the database cache for each of the plurality of converter leaf pages, the control block of a converter leaf page comprising a pointer to a control block of a parent converter inner page. | 12-27-2012 |
20130007366 | DELAYED INSTANT COPY OPERATION FOR SHORT-LIVED SNAPSHOTS - A method, including defining a snapshot referencing a source partition of a storage volume on a storage device, and receiving a request to write a block of data to the source partition. Upon receiving the write request, a delayed instant copy operation is initiated by allocating a target partition on the storage device and replacing, in the storage volume, the source partition with the target partition. A definition of a condition for completion of the delayed instant copy operation is received, and the delayed instant copy operation is completed upon the condition being met. | 01-03-2013 |
20130013859 | Structure-Based Adaptive Document Caching - Techniques for generating, updating, and transmitting a structure-based data representation of a document are described herein. The structure-based adaptive document caching techniques may effectively eliminate redundancy in data transmission by exploiting structures of the document to be transmitted. The described techniques partition a document into a sequence of structures, differentiate between cache-worthy structures and cache-unworthy structures, and generate a structure-based data representation of the document. The techniques may transmit updated structures and instructions, instead of all data of the document, to update previously cached structures at a client device, thereby resulting in higher cache hit rates. | 01-10-2013 |
20130013860 | MEMORY CELL PRESETTING FOR IMPROVED MEMORY PERFORMANCE - Memory cell presetting for improved performance including a method for using a computer system to identify a region in a memory. The region includes a plurality of memory cells characterized by a write performance characteristic that has a first expected value when a write operation changes a current state of the memory cells to a desired state of the memory cells and a second expected value when the write operation changes a specified state of the memory cells to the desired state of the memory cells. The second expected value is closer than the first expected value to a desired value of the write performance characteristic. The plurality of memory cells in the region are set to the specified state, and the data is written into the plurality of memory cells responsive to the setting. | 01-10-2013 |
20130013861 | CACHING PERFORMANCE OPTIMIZATION - A method for managing data storage is described. The method includes receiving data from an external host at a peripheral storage device, detecting a file system type of the external host, and adapting a caching policy for transmitting the data to a memory accessible by the storage device, wherein the caching policy is based on the detected file system type. The detection of the file system type can be based on the received data. The detection bases can include a size of the received data. In some implementations, the detection of the file system type can be based on accessing the memory for file system type indicators that are associated with a unique file system type. Adapting the caching policy can reduce a number of data transmissions to the memory. The detected file system type can be a file allocation table (FAT) system type. | 01-10-2013 |
20130036267 | PLACEMENT OF DATA IN SHARDS ON A STORAGE DEVICE - A method, system and computer program product for placing data in shards on a storage device may include determining placement of a data set in one of a plurality of shards on the storage device. Each one of the shards may include a different at least one performance feature. Each different at least one performance feature may correspond to a different at least one predetermined characteristic associated with a particular set of data. The data set is cached in the one of the plurality of shards on the storage device that includes the at least one performance feature corresponding to the at least one predetermined characteristic associated with the data set being cached. | 02-07-2013 |
20130036268 | Implementing Vector Memory Operations - In one embodiment, the present invention includes an apparatus having a register file to store vector data, an address generator coupled to the register file to generate addresses for a vector memory operation, and a controller to generate an output slice from one or more slices each including multiple addresses, where the output slice includes addresses each corresponding to a separately addressable portion of a memory. Other embodiments are described and claimed. | 02-07-2013 |
20130042064 | SYSTEM FOR DYNAMICALLY ADAPTIVE CACHING - The present disclosure is directed to a system for dynamically adaptive caching. The system includes a storage device having a physical capacity for storing data received from a host. The system may also include a control module for receiving data from the host and compressing the data to a compressed data size. Alternatively, the data may also be compressed by the storage device. The control module may be configured for determining an amount of available space on the storage device and also determining a reclaimed space, the reclaimed space being according to a difference between the size of the data received from the host and the compressed data size. The system may also include an interface module for presenting a logical capacity to the host. The logical capacity has a variable size and may include at least a portion of the reclaimed space. | 02-14-2013 |
20130046933 | STORING DATA IN ANY OF A PLURALITY OF BUFFERS IN A MEMORY CONTROLLER - A memory controller containing one or more ports coupled to a buffer selection logic and a plurality of buffers. Each buffer is configured to store write data associated with a write request and each buffer is also coupled to the buffer selection logic. The buffer selection logic is configured to store write data associated with a write request from at least one of the ports in any of the buffers based on a priority of the buffers for each one of the ports. | 02-21-2013 |
20130054895 | COOPERATIVE MEMORY RESOURCE MANAGEMENT FOR VIRTUALIZED COMPUTING DEVICES - A computing device employs a cooperative memory management technique to dynamically balance memory resources between host and guest systems running therein. According to this cooperative memory management technique, memory that is allocated to the guest system is dynamically adjusted up and down according to a fairness policy that takes into account various factors including the relative amount of readily freeable memory resources in the host and guest systems and the relative amount of memory allocated to hidden applications in the host and guest systems. | 02-28-2013 |
20130054896 | SYSTEM MEMORY CONTROLLER HAVING A CACHE - A memory controller including a cache can be implemented in a system-on-chip. A cache allocation policy may be determined on the fly by the source of each memory request. The operators on the SoC allowed to allocate in the cache can be maintained under program control. Cache and system memory may be accessed simultaneously. This can result in improved performance and reduced power dissipation. Optionally, memory protection can be implemented, where the source of a memory request can be used to determine the legality of an access. This can simplify software development when solving bugs involving non-protected illegal memory accesses and can improve the system's robustness to the occurrence of errant processes. | 02-28-2013 |
20130060999 | SYSTEM AND METHOD FOR INCREASING READ AND WRITE SPEEDS OF HYBRID STORAGE UNIT - The present invention is to provide a system for increasing read and write speeds of a hybrid storage unit, which includes a cache controller connected to the hybrid storage unit and a computer respectively, and stores forward and backward mapping tables each including a plurality of fields. The hybrid storage unit is composed of at least one regular storage unit (e.g., an HDD) having a plurality of regular sections corresponding to forward fields respectively, and at least one high-speed storage unit (e.g., an SSD) having a plurality of high-speed storage sections corresponding to backward fields respectively with higher read and write speeds than the regular storage unit. The cache controller can make the high-speed storage section corresponding to each backward field correspond to the regular section corresponding to the forward field, thus allowing the computer to rapidly read and write data from and into the hybrid storage unit. | 03-07-2013 |
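A minimal model of the forward/backward mapping described in the entry above: the forward table maps each regular (e.g., HDD) section to a high-speed (e.g., SSD) section, and the backward table inverts it so cached reads are redirected to the fast unit. Section numbering and the tuple return format are assumptions.

```python
# Hybrid storage mapper with paired forward and backward mapping tables.

class HybridMapper:
    def __init__(self):
        self.forward = {}    # regular section -> high-speed section
        self.backward = {}   # high-speed section -> regular section

    def map_section(self, regular, high_speed):
        """Make a high-speed section correspond to a regular section."""
        self.forward[regular] = high_speed
        self.backward[high_speed] = regular

    def resolve_read(self, regular):
        """Return ('ssd', section) when the section is cached, else ('hdd', section)."""
        if regular in self.forward:
            return ("ssd", self.forward[regular])
        return ("hdd", regular)

m = HybridMapper()
m.map_section(regular=7, high_speed=0)
hit = m.resolve_read(7)     # redirected to the high-speed unit
miss = m.resolve_read(8)    # falls through to the regular unit
```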
20130067168 | CACHING FOR A FILE SYSTEM - Aspects of the subject matter described herein relate to caching data for a file system. In aspects, in response to requests from applications and storage and cache conditions, cache components may adjust throughput of writes from cache to the storage, adjust priority of I/O requests in a disk queue, adjust cache available for dirty data, and/or throttle writes from the applications. | 03-14-2013 |
20130080704 | MANAGEMENT OF POINT-IN-TIME COPY RELATIONSHIP FOR EXTENT SPACE EFFICIENT VOLUMES - A storage controller receives a request to establish a point-in-time copy operation by placing a space efficient source volume in a point-in-time copy relationship with a space efficient target volume, wherein subsequent to being established the point-in-time copy operation is configurable to consistently copy the space efficient source volume to the space efficient target volume at a point in time. A determination is made as to whether any track of an extent is staging into a cache from the space efficient target volume or destaging from the cache to the space efficient target volume. In response to a determination that at least one track of the extent is staging into the cache from the space efficient target volume or destaging from the cache to the space efficient target volume, release of the extent from the space efficient target volume is avoided. | 03-28-2013 |
20130086320 | MULTICAST WRITE COMMANDS - Techniques for implementing a multicast write command are described. A data block may be destined for multiple targets. The targets may be included in a list. A multicast write command may include the list. Write commands may be sent to each target in the list. | 04-04-2013 |
20130086321 | FILE BASED CACHE LOADING - A method for loading a cache is disclosed. Data in a computer file is stored on a storage device. The computer file is associated with a computer program. The first step is to determine which logical memory blocks on the storage device correspond to the computer file. | 04-04-2013 |
20130091328 | STORAGE SYSTEM - A storage system in an embodiment of this invention comprises a non-volatile storage area for storing write data from a host, a cache area capable of temporarily storing the write data before storing the write data in the non-volatile storage area, and a controller that determines whether to store the write data in the cache area or to store the write data in the non-volatile storage area without storing the write data in the cache area, and stores the write data in the determined area. | 04-11-2013 |
20130091329 | REDUCED LATENCY MEMORY COLUMN REDUNDANCY REPAIR - A memory column redundancy mechanism includes a memory having a number of data output ports each configured to output one data bit of a data element. The memory also includes a number of memory columns each connected to a corresponding respective data port. Each memory column includes a plurality of bit cells that are coupled to a corresponding sense amplifier that may differentially output a respective data bit from the plurality of bit cells on an output signal and a complemented output signal. The memory further includes an output selection unit that may select as the output data bit for a given data output port, one of the output signal of the sense amplifier associated with the given data output port or the complemented output signal of the sense amplifier associated with an adjacent data output port dependent upon a respective shift signal for each memory column. | 04-11-2013 |
20130097379 | STORAGE SYSTEM AND METHOD OF CONTROLLING STORAGE SYSTEM - It is provided a storage system for storing data requested by a host computer to be written, the storage system comprising: at least one processor, a cache memory and a cache controller. The cache memory includes a first memory which can be accessed by way of either access that can specify an access range by a line or access that continuously performs a read and a write. The cache controller includes a second memory which has a higher flexibility than the first memory in specifying an access range. The cache controller determines an address of an access destination upon reception of a request for an access to the cache memory from the at least one processor, and switches a request for an access to a specific address into an access to a corresponding address in the second memory. | 04-18-2013 |
20130097380 | METHOD FOR MAINTAINING MULTIPLE FINGERPRINT TABLES IN A DEDUPLICATING STORAGE SYSTEM - A system and method for managing multiple fingerprint tables in a deduplicating storage system. A computer system includes a storage medium, a first fingerprint table comprising a first plurality of entries, and a second fingerprint table comprising a second plurality of entries. Each of the first plurality of entries and the second plurality of entries are configured to store fingerprint related data corresponding to data stored in the storage medium. A storage controller is configured to select the first fingerprint table for storage of entries corresponding to data stored in the data storage medium that has been deemed more likely to be successfully deduplicated than other data stored in the data storage medium; and select the second fingerprint table for storage of entries corresponding to data stored in the data storage medium that has been deemed less likely to be successfully deduplicated than other data stored in the storage medium. | 04-18-2013 |
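A sketch of the two-table selection above: fingerprints of data judged more likely to deduplicate successfully go to the first table, the rest to the second. The likelihood test used here, a simple match-count heuristic, is an assumption; the publication does not specify how likelihood is deemed.

```python
# Deduplicating index with two fingerprint tables, selected per entry by an
# assumed dedup-likelihood heuristic.

class DedupIndex:
    def __init__(self, likely_threshold=2):
        self.likely = {}     # first fingerprint table: likely deduplicators
        self.unlikely = {}   # second fingerprint table: everything else
        self.likely_threshold = likely_threshold

    def insert(self, fingerprint, location, match_count):
        """Route the entry to a table based on past dedup matches."""
        if match_count >= self.likely_threshold:
            self.likely[fingerprint] = location
        else:
            self.unlikely[fingerprint] = location

idx = DedupIndex()
idx.insert("fp-a", location=0, match_count=5)   # deduped often: first table
idx.insert("fp-b", location=1, match_count=0)   # rarely matched: second table
```

Keeping frequent deduplicators in a small, hot table lets lookups consult it first and touch the larger table only on a miss.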
20130097381 | MANAGEMENT APPARATUS, MANAGEMENT METHOD, AND PROGRAM - There is provided a management apparatus including a management unit that manages, based on execution control information indicating an execution sequence of a plurality of applications, an execution area and a cache area of a recording medium which temporarily stores the applications when the applications are executed. | 04-18-2013 |
20130103903 | Methods And Apparatus For Reusing Prior Tag Search Results In A Cache Controller - Methods and apparatus are provided for reusing prior tag search results in a cache controller. A cache controller is disclosed that receives an incoming request for an entry in the cache having a first tag; determines if there is an existing entry in a buffer associated with the cache having the first tag; and reuses a tag access result from the existing entry in the buffer having the first tag for the incoming request. An indicator can be maintained in the existing entry to indicate whether the tag access result should be retained. Tag access results can optionally be retained in the buffer after completion of a corresponding request. The tag access result can be reused by (i) reallocating the existing entry to the incoming request if the indicator in the existing entry indicates that the tag access result should be retained; and/or (ii) copying the tag access result from the existing entry to a buffer entry allocated to the incoming request if a hazard is detected. | 04-25-2013 |
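A hedged sketch of tag-result reuse as described above: when an incoming request's tag matches a buffered entry, the prior tag-access result is reused instead of searching the tag array again. The buffer structure, the retain flag, and the stand-in way lookup are illustrative assumptions.

```python
# Cache controller buffer that reuses prior tag search results.

class TagReuseBuffer:
    def __init__(self):
        self.entries = {}     # tag -> {"result": way_index, "retain": bool}
        self.tag_searches = 0

    def lookup(self, tag):
        hit = self.entries.get(tag)
        if hit is not None:
            return hit["result"]          # reuse the prior tag access result
        self.tag_searches += 1            # full tag array search (modeled)
        result = hash(tag) % 4            # stand-in for the real way lookup
        # retain=True keeps the result after the request completes.
        self.entries[tag] = {"result": result, "retain": True}
        return result

buf = TagReuseBuffer()
first = buf.lookup("tagA")    # performs a tag search
second = buf.lookup("tagA")   # reuses the buffered result, no new search
```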
20130103904 | SYSTEM AND METHOD TO REDUCE MEMORY ACCESS LATENCIES USING SELECTIVE REPLICATION ACROSS MULTIPLE MEMORY PORTS - In one embodiment, a system comprises multiple memory ports distributed into multiple subsets, each subset identified by a subset index and each memory port having an individual wait time. The system further comprises a first address hashing unit configured to receive a read request including a virtual memory address associated with a replication factor, and referring to graph data. The first address hashing unit translates the replication factor into a corresponding subset index based on the virtual memory address, and converts the virtual memory address to a hardware based memory address that refers to graph data in the memory ports within a subset indicated by the corresponding subset index. The system further comprises a memory replication controller configured to direct read requests to the hardware based address to the one of the memory ports within the subset indicated by the corresponding subset index with a lowest individual wait time. | 04-25-2013 |
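The port selection described above can be sketched as: hash the virtual address and replication factor to a subset index, then direct the read to the port in that subset with the lowest individual wait time. The modulo hashing rule and the port layout are assumptions for illustration.

```python
# Replication-aware memory port selection with lowest-wait-time routing.

def select_port(vaddr, replication, subsets, wait_times):
    """subsets: list of port-id lists; wait_times: port id -> current wait."""
    subset_index = vaddr % replication          # assumed address hashing rule
    subset = subsets[subset_index]
    return min(subset, key=lambda p: wait_times[p])

subsets = [[0, 1], [2, 3]]                      # two subsets of two ports each
waits = {0: 5, 1: 2, 2: 9, 3: 1}
# Odd address with replication factor 2 hashes to subset 1; port 3 is least busy.
port = select_port(vaddr=0x1003, replication=2, subsets=subsets, wait_times=waits)
```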
20130111130 | MEMORY INCLUDING A REDUCED LEAKAGE WORDLINE DRIVER | 05-02-2013 |
20130111131 | DYNAMICALLY ADJUSTED THRESHOLD FOR POPULATION OF SECONDARY CACHE | 05-02-2013 |
20130124799 | SELF-DISABLING WORKING SET CACHE - A method to monitor the behavior of a working set cache of a full data set at run time and determine whether it provides a performance benefit is disclosed. An effectiveness metric of the working set cache is tracked over a period of time by efficiently computing the amount of physical memory consumption the cache saves and comparing this to a straightforward measure of its overhead, if the effectiveness metric is determined to be on an ineffective side of a selected threshold amount, the working set cache is disabled. The working set cache can be re-enabled in response to a predetermined event. | 05-16-2013 |
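A minimal sketch of the self-disabling policy above: compare the physical memory the working-set cache saves against its overhead, and disable the cache when the effectiveness metric falls on the ineffective side of the threshold. All quantities here are illustrative.

```python
# Self-disabling working-set cache: track an effectiveness metric
# (bytes saved / overhead) and disable when it drops below a threshold.

class WorkingSetCache:
    def __init__(self, threshold=1.0):
        self.enabled = True
        self.threshold = threshold

    def evaluate(self, bytes_saved, overhead_bytes):
        """Disable the cache if the savings do not justify the overhead."""
        effectiveness = bytes_saved / max(overhead_bytes, 1)
        if effectiveness < self.threshold:
            self.enabled = False
        return effectiveness

    def reenable(self):
        """Re-enable in response to a predetermined event (e.g. workload change)."""
        self.enabled = True

cache = WorkingSetCache(threshold=1.0)
cache.evaluate(bytes_saved=4096, overhead_bytes=1024)   # effective: stays on
still_on = cache.enabled
cache.evaluate(bytes_saved=100, overhead_bytes=1024)    # ineffective: disabled
now_off = cache.enabled
```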
20130132673 | STORAGE SYSTEM, STORAGE APPARATUS AND METHOD OF CONTROLLING STORAGE SYSTEM - A storage system enables a core storage apparatus to execute processing requiring securing of data consistency, while providing high write performance to a host computer. | 05-23-2013 |
20130145094 | Information Processing Apparatus and Driver - According to one embodiment, an information processing apparatus includes a memory that comprises a buffer area, a first external storage, a second external storage and a driver. The driver is configured to control the first and second external storages in units of predetermined blocks. The driver comprises a cache reservation module configured to (i) reserve a cache area in the memory, the cache area being logically between the buffer area and the first external storage and between the buffer area and the second external storage and (ii) manage the cache area. The cache area operates as a primary cache for the second external storage and a cache for the first external storage. Part or the entire first external storage is used as a secondary cache for the second external storage. The buffer area is used to transfer data between the driver and a host system that requests data reads/writes. | 06-06-2013 |
20130151773 | DETERMINING AVAILABILITY OF DATA ELEMENTS IN A STORAGE SYSTEM - Data elements are stored at a plurality of nodes. Each data element is a member data element of one of a plurality of layouts. Each layout indicates a unique subset of nodes. All member data elements of the layout are stored on each node in the unique subset of nodes. A stored dependency list includes every layout that has member data elements. The dependency list is used to determine availability of data elements based on ability to access data from nodes from the plurality of nodes. | 06-13-2013 |
20130151774 | Controlling a Storage System - A method, computer-readable storage medium and computer system for controlling a storage system, the storage system comprising a plurality of logical storage volumes, the method comprising: monitoring, for each of the logical storage volumes, first values of one or more load parameters; receiving, for each of the logical storage volumes, one or more load parameter threshold values; comparing, for each of the logical storage volumes, the first load parameter values of said logical storage volume with the corresponding one or more load parameter threshold values; and, in case at least one of the first load parameter values of one of the logical storage volumes violates the load parameter threshold value it is compared with, automatically executing a corrective action. | 06-13-2013 |
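The monitor-compare-correct loop described above can be sketched as a small function. The dictionary shapes, parameter names, and the "greater than limit" violation rule are assumptions made for illustration; the abstract does not specify them.

```python
# Sketch of per-volume threshold checking: compare each monitored load
# parameter value against its threshold and run a corrective action
# automatically on any violation.
def check_volumes(load_values, thresholds, corrective_action):
    """load_values / thresholds: {volume: {parameter: value}} mappings."""
    violations = []
    for volume, params in load_values.items():
        for param, value in params.items():
            limit = thresholds.get(volume, {}).get(param)
            if limit is not None and value > limit:   # threshold violated
                violations.append((volume, param))
                corrective_action(volume, param, value, limit)
    return violations
```

A caller might pass a corrective action that throttles I/O or migrates the volume; here any callable taking `(volume, parameter, value, limit)` works.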
20130151775 | Information Processing Apparatus and Driver - According to one embodiment, an information processing apparatus includes a memory that includes a buffer area, a first external storage, a second external storage and a driver. The driver controls the first and second external storages and comprises a cache reservation module configured to reserve a cache area in the memory. The cache area is logically between the buffer area and the first external storage and between the buffer area and the second external storage. The driver is configured to use the cache area, secured in the memory by the cache reservation module, as a primary cache for the second external storage and a cache for the first external storage, and to use part or all of the first external storage as a secondary cache for the second external storage. The buffer area is reserved in order to transfer data between the driver and a host system that requests data writes and reads. | 06-13-2013 |
20130151776 | RAPID MEMORY BUFFER WRITE STORAGE SYSTEM AND METHOD - Efficient and convenient storage systems and methods are presented. In one embodiment a storage system includes a host for processing information, a memory controller and a memory. The memory controller controls communication of the information between the host and the memory, wherein the memory controller routes data rapidly to a buffer of the memory without buffering in the memory controller. The memory stores the information. The memory includes a buffer for temporarily storing the data while corresponding address information is determined. | 06-13-2013 |
20130159624 | STORING THE MOST SIGNIFICANT AND THE LEAST SIGNIFICANT BYTES OF CHARACTERS AT NON-CONTIGUOUS ADDRESSES - In an embodiment, an indicator is set to indicate that all of a plurality of most significant bytes of characters in a character array are zero. A first index and an input character are received. The input character comprises a first most significant byte and a first least significant byte. The first most significant byte is stored at a first storage location and the first least significant byte is stored at a second storage location, wherein the first storage location and the second storage location have non-contiguous addresses. If the first most significant byte does not equal zero, the indicator is set to indicate that at least one of a plurality of most significant bytes of the characters in the character array is non-zero. The character array comprises the first most significant byte and the first least significant byte. | 06-20-2013 |
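The split-byte layout in the abstract above lends itself to a short sketch. This is a hedged illustration assuming 16-bit characters and two separate `bytearray` backing stores; the class and field names are inventions for the example, not terms from the publication.

```python
# Sketch of storing most significant and least significant bytes at
# non-contiguous addresses, with an indicator that records whether any
# stored most significant byte is non-zero.
class SplitCharArray:
    def __init__(self, size):
        self.msb = bytearray(size)   # most significant bytes (one storage location)
        self.lsb = bytearray(size)   # least significant bytes (separate, non-contiguous)
        self.all_msb_zero = True     # indicator from the abstract

    def put(self, index, char_code):
        hi, lo = (char_code >> 8) & 0xFF, char_code & 0xFF
        self.msb[index] = hi
        self.lsb[index] = lo
        if hi != 0:
            # At least one most significant byte is now non-zero.
            self.all_msb_zero = False

    def get(self, index):
        return (self.msb[index] << 8) | self.lsb[index]
```

While `all_msb_zero` holds, a consumer could process the `lsb` array alone as compact Latin-1 text, which is the practical payoff of keeping the indicator.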
20130159625 | INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD - An information processing device includes an internal memory which is capable of performing processing faster than an external memory, and a memory controller which controls data transfer between the internal memory and the external memory. The memory controller controls a first data transfer and a second data transfer. The first data transfer is a data transfer from the external memory to the internal memory, and the second data transfer is a data transfer from the internal memory to the external memory. The second data transfer transfers part of the data area of the internal memory that was transferred in the first data transfer, and the data area, which is read out from the internal memory in a non-contiguous way, is transferred in place to the external memory in the second data transfer. | 06-20-2013 |
20130166844 | STORAGE IN TIERED ENVIRONMENT FOR COLDER DATA SEGMENTS - Exemplary embodiments for storing data by a processor device in a computing environment are provided. In one embodiment, by way of example only, from a plurality of available data segments, a data segment having a storage activity lower than a predetermined threshold is identified as a colder data segment. A chunk of storage is located to which the colder data segment is assigned. The colder data segment is compressed. The colder data segment is migrated to the chunk of storage. A status of the chunk of storage is maintained in a compression data segment bitmap. | 06-27-2013 |
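A minimal sketch of the tiering flow above, under assumed data structures: segments whose activity falls below the threshold are identified as colder, compressed, migrated into a chunk, and tracked in a bitmap. The dictionary-based "chunk" and "bitmap", and the use of `zlib` for compression, are illustrative choices, not details from the publication.

```python
import zlib

# Sketch: identify colder data segments, compress them, migrate them to a
# chunk of storage, and maintain their status in a compression bitmap.
def migrate_cold_segments(segments, activity, threshold, chunk, bitmap):
    """segments: {seg_id: bytes}; activity: {seg_id: access count}."""
    for seg_id, data in segments.items():
        if activity.get(seg_id, 0) < threshold:    # colder than threshold
            chunk[seg_id] = zlib.compress(data)    # compress and migrate
            bitmap[seg_id] = True                  # mark as compressed
    return chunk, bitmap
```

Segments above the threshold are left untouched, so hot data keeps its uncompressed, low-latency path.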
20130179637 | DATA STORAGE BACKUP WITH LESSENED CACHE POLLUTION - Control of the discard of data from cache during backup of the data. In a computer-implemented system comprising primary data storage; cache; backup data storage; and at least one processor, the processor is configured to identify data stored in the primary data storage for backup to the backup data storage, where the identified data is placed in the cache in the form of portions of the data, and where the portions of data are to be backed up from the cache to the backup storage. Upon backup of each portion of the identified data from the cache to the backup storage, the processor marks the backed up portion of the identified data for discard from the cache. Thus, the backed up data is discarded from the cache right away, lessening cache pollution. | 07-11-2013 |
20130179638 | Streaming Translation in Display Pipe - In an embodiment, a display pipe includes one or more translation units corresponding to images that the display pipe is reading for display. Each translation unit may be configured to prefetch translations ahead of the image data fetches, which may prevent translation misses in the display pipe (at least in most cases). The translation units may maintain translations in first-in, first-out (FIFO) fashion, and the display pipe fetch hardware may inform the translation unit when a given translation or translations are no longer needed. The translation unit may invalidate the identified translations and prefetch additional translations for virtual pages that are contiguous with the most recently prefetched virtual page. | 07-11-2013 |
20130185508 | SYSTEMS AND METHODS FOR MANAGING CACHE ADMISSION - A cache layer leverages a logical address space and storage metadata of a storage layer (e.g., virtual storage layer) to cache data of a backing store. The cache layer maintains access metadata to track data characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not in the cache. The access metadata may be separate and distinct from the storage metadata maintained by the storage layer. The cache layer determines whether to admit data into the cache using the access metadata. Data may be admitted into the cache when the data satisfies cache admission criteria, which may include an access threshold and/or a sequentiality metric. Time-ordered history of the access metadata is used to identify important/useful blocks in the logical address space of the backing store that would be beneficial to cache. | 07-18-2013 |
20130185509 | COMPUTING MACHINE MIGRATION - Systems and methods for migration between computing machines are disclosed. The source and target machines can be either physical or virtual; the source can also be a machine image. The target machine is connected to a snapshot or image of the source machine file system, and a redo-log file is created on the file system associated with the target machine. The target machine begins operation by reading data directly from the snapshot or image of the source machine file system. Thereafter, all writes are made to the redo-log file, and subsequent reads are made from the redo-log file if it contains data for the requested sector or from the snapshot or image if it does not. The source machine continues to be able to run separately and simultaneously after the target machine begins operation. | 07-18-2013 |
20130185510 | CACHING SOURCE BLOCKS OF DATA FOR TARGET BLOCKS OF DATA - Provided is a method for processing a read operation for a target block of data. A read operation for the target block of data in target storage is received, wherein the target block of data is in an instant virtual copy relationship with a source block of data in source storage. It is determined that the target block of data in the target storage is not consistent with the source block of data in the source storage. The source block of data is retrieved. The data in the source block of data in the cache is synthesized to make the data appear to be retrieved from the target storage. The target block of data is marked as read from the source storage. In response to the read operation completing, the target block of data that was read from the source storage is demoted. | 07-18-2013 |
20130191595 | METHOD AND APPARATUS FOR STORING DATA - Embodiments of the present invention provide a method and an apparatus for storing data, which relate to the field of data processing. In the present invention, a current device is divided into different load modes in the process of service processing, and manners of storing various data in a Cache are dynamically adjusted, so that nodes with different characteristics in the current device may control operations on the Cache, thus achieving lower power consumption and optimum performance of a large-capacity system under a heavy load. | 07-25-2013 |
20130191596 | ADJUSTMENT OF DESTAGE RATE BASED ON READ AND WRITE RESPONSE TIME REQUIREMENTS - A storage controller that includes a cache receives a command from a host, wherein a set of criteria corresponding to read and write response times for executing the command have to be satisfied. The storage controller determines ranks of a first type and ranks of a second type corresponding to a plurality of volumes coupled to the storage controller, wherein the command is to be executed with respect to the ranks of the first type. Destage rate corresponding to the ranks of the first type are adjusted to be less than a default destage rate corresponding to the ranks of the second type, wherein the set of criteria corresponding to the read and write response times for executing the command are satisfied. | 07-25-2013 |
20130198453 | HYBRID STORAGE DEVICE INCLUDING NON-VOLATILE MEMORY CACHE HAVING RING STRUCTURE - A storage device is provided. The storage device has a storage region configured in a ring structure, and is divided into a read cache region and a write cache region, thereby reducing power consumption and increasing the speed of the storage device. | 08-01-2013 |
20130198454 | CACHE DEVICE FOR CACHING - A cache device for caching scalable data structures in a cache memory exhibits a displacement strategy, in accordance with which scaling-down of one or more scalable files in the cache memory is provided for the purpose of freeing up storage space. | 08-01-2013 |
20130198455 | CACHE MEMORY GARBAGE COLLECTOR - A method for managing objects stored in a cache memory of a processing unit. The cache memory includes a set of entries, each entry corresponding to an object. The method includes: checking, for each entry of at least a subset of the entries of the cache memory, whether the object corresponding to that entry includes one or more references to one or more other objects stored in the cache memory and storing the references; determining, among the objects stored in the cache memory, which objects are not referenced by other objects, based on the stored references; marking entries as checked to distinguish entries corresponding to objects determined as being not referenced from other checked entries; and casting out, according to the marking, entries corresponding to objects determined as being not referenced. | 08-01-2013 |
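The check, mark, and cast-out steps above can be sketched in a few lines. This is a hedged simplification: it assumes the cache is a key-to-object mapping and that a caller-supplied `refs_of` function can enumerate the keys an object references, neither of which is specified by the abstract.

```python
# Sketch of the cache garbage-collection scheme: collect the references
# held by each cached object, then cast out entries whose objects are not
# referenced by any other object.
def collect_unreferenced(cache, refs_of):
    """cache: {key: object}; refs_of(obj) -> iterable of referenced keys."""
    referenced = set()
    for obj in cache.values():            # check each entry for references
        referenced.update(refs_of(obj))
    # Cast out entries determined as being not referenced.
    for key in [k for k in cache if k not in referenced]:
        del cache[key]
    return cache
```

Note this single pass evicts all unreferenced entries at once; a real collector would likely protect designated root entries from eviction.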
20130198456 | Fast Cache Reheat - Embodiments of the present invention allow for fast cache reheat by periodically storing a snapshot of information identifying the contents of the cache at the time of the snapshot, and then using the information from the last snapshot to restore the contents of the cache following an event that causes loss or corruption of cache contents such as a loss of power or system reset. Since there can be a time gap between the taking of a snapshot and such an event, the actual contents of the cache, and hence the corresponding data stored in a data store, may have changed since the last snapshot was taken. Thus, the information stored at the last snapshot is used to retrieve current data from the data store for use in restoring the contents of the cache. | 08-01-2013 |
20130212331 | Techniques for Storing Data and Tags in Different Memory Arrays - A memory controller includes logic circuitry to generate a first data address identifying a location in a first external memory array for storing first data, a first tag address identifying a location in a second external memory array for storing a first tag, a second data address identifying a location in the second external memory array for storing second data, and a second tag address identifying a location in the first external memory array for storing a second tag. The memory controller includes an interface that transfers the first data address and the first tag address for a first set of memory operations in the first and the second external memory arrays. The interface transfers the second data address and the second tag address for a second set of memory operations in the first and the second external memory arrays. | 08-15-2013 |
20130219121 | METHOD AND APPARATUS FOR IMPLEMENTING A TRANSACTIONAL STORE SYSTEM USING A HELPER THREAD - A method, apparatus, and computer readable article of manufacture for executing a transaction by a processor apparatus that includes a plurality of hardware threads. The method includes the steps of: executing, by the processor apparatus using the plurality of hardware threads, a main software thread for executing the transaction and a helper software thread for executing a barrier function; and deciding, by the processor apparatus, whether or not the barrier function is required to be executed when the main software thread encounters a transactional load or store operation that requires the main software thread to read or write data. | 08-22-2013 |
20130227218 | Data Migration between Memory Locations - Migrating data may include determining to copy a first data block in a first memory location to a second memory location and determining to copy a second data block in the first memory location to the second memory location based on a migration policy. | 08-29-2013 |
20130227219 | PROCESSOR, INFORMATION PROCESSING APPARATUS, AND ARITHMETIC METHOD - A processor includes a cache memory that temporarily retains data stored in a main storage. The processor includes a processing unit that executes an application by using the data retained in the cache memory. The processor includes a storing unit that stores therein update information indicating data that has been updated by the processing unit within the time period specified by the application executed by the processing unit. The processor includes a write back unit that, when the time period specified by the application ends, writes back, to the main storage from the cache memory, data that is from among the data retained in the cache memory and that is indicated by the update information stored in the storing unit. | 08-29-2013 |
20130232303 | Method and Apparatus of Accessing Data of Virtual Machine - A method and device for accessing virtual machine (VM) data are described. A computing device for accessing VM data comprises an access request process module, a data transfer proxy module and a virtual disk. The access request process module receives a data access request sent by a VM and adds the data access request to a request array. The data transfer proxy module obtains the data access request from the request array, maps the obtained data access request to a corresponding virtual storage unit, and maps the virtual storage unit to a corresponding physical storage unit of a distributed storage system. A corresponding data access operation may be performed based on a type of the data access request. | 09-05-2013 |
20130238855 | MANAGEMENT OF CACHE MEMORY IN A STORAGE SYSTEM - According to the teaching disclosed herein there are provided at least a method, system and device for managing a cache memory of a storage system. The storage system is associated with at least one physical storage device and is configured, responsive to a read request comprising information indicative of a logical address of at least one requested data unit, to obtain a storage physical address associated with the logical address, search the cache memory for a data unit associated with the storage physical address, and service the request from the cache in case the data unit is found in the cache memory. | 09-12-2013 |
20130238856 | System and Method for Cache Organization in Row-Based Memories - The present disclosure relates to a method and system for mapping cache lines to a row-based cache. In particular, a method includes, in response to a plurality of memory access requests each including an address associated with a cache line of a main memory, mapping sequentially addressed cache lines of the main memory to a row of the row-based cache. A disclosed system includes row index computation logic operative to map sequentially addressed cache lines of a main memory to a row of a row-based cache in response to a plurality of memory access requests each including an address associated with a cache line of the main memory. | 09-12-2013 |
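The row-index mapping described above can be illustrated with a short computation. The power-of-two geometry (64-byte lines, 32 lines per row, 1024 rows) is an assumption for the example; the abstract does not give concrete sizes.

```python
# Sketch of row index computation: sequentially addressed cache lines of
# main memory map to the same row of a row-based cache, so a run of
# sequential accesses activates a single row.
def row_index(address, line_size=64, lines_per_row=32, num_rows=1024):
    line = address // line_size              # cache-line number in main memory
    return (line // lines_per_row) % num_rows
```

With these parameters, 32 consecutive 64-byte lines (2 KiB of sequential addresses) land in one row before the mapping advances to the next, which is what makes row activations cheap for streaming access patterns.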
20130238857 | DEVICE, SYSTEM AND METHOD OF CONTROLLING ACCESS TO LOCATION SOURCES - Some demonstrative embodiments include devices, systems and/or methods of controlling access to location sources. For example, a device may include a location caching controller to store cached location information in a cache based on location information retrieved from two or more location sources, to receive at least one location request from at least one application, to select between retrieving requested location information from at least one of the location sources and retrieving the requested location information from the cache, and to provide to the application a location response including the requested location information. | 09-12-2013 |
20130262765 | ARRAY CONTROLLER AND STORAGE SYSTEM - A storage system is provided which includes a cache memory that does not require replacement of a power storage device, a cache memory with low power consumption, or a cache memory having no limitation on the number of writing operations. An array controller for storing externally input data in any of a plurality of storage devices, or a storage system including the array controller, includes a processor which specifies at least one of the plurality of storage devices where the data is to be stored and a cache memory which stores the data and outputs the data to the at least one of the plurality of storage devices. The cache memory includes a storage circuit in which a transistor including an oxide semiconductor layer is used. | 10-03-2013 |
20130268732 | SYSTEM AND METHOD FOR CACHE ACCESS - The rows of a cache are generally maintained in a low power state. In response to a memory access operation, the data processor predicts a plurality of cache rows that may be targeted by the operation, and transitions each of the plurality of cache rows to an active state to prepare them for access. The plurality of cache rows are predicted based on speculatively decoding a portion of a base address and a corresponding portion of an offset without performing a full addition of the portions. Because a full addition is not performed, the speculative decoding can be performed at sufficient speed to allow the set of rows to be transitioned to the active state before full decoding of the memory address is completed. The cache row associated with the memory address is therefore ready for access when decoding is complete, maintaining low latency for cache accesses. | 10-10-2013 |
20130275679 | LOADING A PRE-FETCH CACHE USING A LOGICAL VOLUME MAPPING - Methods, apparatus and computer program products implement embodiments of the present invention that include receiving a storage command from a host computer to retrieve first data from a specific physical region of a storage device, and responsively retrieving second data from one or more additional physical regions of the storage device based on a logical mapping managed by the host computer. The second data is conveyed to a cache. In some embodiments, the logical mapping is received from the host computer prior to receiving the storage command. In alternative embodiments, the logical mapping is retrieved from the storage device prior to receiving the storage command. | 10-17-2013 |
20130275680 | STORAGE APPARATUS AND DATA MANAGEMENT METHOD - Storage area assignment is specified in accordance with control modes for the cache memory. A storage apparatus which is connected via a network to a host which issues data I/O requests comprises storage devices of a plurality of types of varying performance, and a control unit which manages each storage area provided by each of the storage devices of a plurality of types by means of storage tiers of a plurality of different types, and which assigns storage areas in predetermined units to virtual volumes from any storage tier among the storage tiers of a plurality of types in accordance with a data write request from the host, wherein, if there is an I/O request from the host, the control unit stores data corresponding to the I/O request in predetermined units in the cache memory and determines the storage tier of the storage area assigned to the virtual volume storing the data according to the mode of writing to the cache memory. | 10-17-2013 |
20130282982 | METHOD AND APPARATUS TO MANAGE DATA LOCATION - Exemplary embodiments provide a management server that controls a storage subsystem based on the cache status on a server. In accordance with one aspect, a system comprises: a storage system operable to couple to a server and to manage a plurality of storage tiers, each of the plurality of storage tiers operable to store data sent from the server; and a management computer operable to manage a storage tier of the plurality of storage tiers for storing data based on whether the data is stored in a cache memory in the server or not. | 10-24-2013 |
20130290634 | Data Processing Method and Apparatus - Embodiments of the present invention disclose a data processing method and apparatus. The method includes: first receiving an operation command; then searching, according to a memory address, a Cache memory in a Cache controller for the data to be operated on, and storing the operation command in a missed command buffer area in the Cache controller when the data to be operated on is not found in the Cache memory; then storing data sent by an external memory in a data buffer area of the Cache controller after sending a read command to the external memory; and finally processing, according to the missed command, the data acquired from the external memory and the data carried in the missed command. The present invention applies to the field of computer systems. | 10-31-2013 |
20130290635 | PROVISION OF ACCESS CONTROL DATA WITHIN A DATA PROCESSING SYSTEM - A data processing system ( | 10-31-2013 |
20130297874 | SEMICONDUCTOR DEVICE - To provide a semiconductor device with less power consumption. In a semiconductor device including a CPU, the frequency of access to a cache memory is monitored. In the case where the access frequency is uniform, supply of a power supply voltage to the CPU is stopped. In the case where the access frequency is not uniform, supply of the power supply voltage to the memories is stopped at time intervals, and eventually supply of the power supply voltage to the CPU is stopped. Further, write back processing is performed efficiently in accordance with determination of a dirty bit, so that power consumption of the semiconductor device can be further reduced. | 11-07-2013 |
20130304990 | Dynamic Control of Cache Injection Based on Write Data Type - Selective cache injection of write data generated or used by a coprocessor hardware accelerator in a multi-core processor system having a hierarchical bus architecture to facilitate transfer of address and data between multiple agents coupled to the bus. A bridge device maintains configuration settings for cache injection of write data and includes a set of n shared write data buffers used for write requests to memory. Each coprocessor hardware accelerator has m local write data cacheline buffers holding different types of write data. For write data produced by a coprocessor hardware accelerator, cache injection is accomplished based on configuration settings in a DMA channel dedicated to the coprocessor and a bridge controller. The access history of cache injected data for a particular processing thread or data flow is also tracked to determine whether to down grade or maintain a request for cache injection. | 11-14-2013 |
20130318299 | CHANGING POWER STATE WITH AN ELASTIC CACHE - An apparatus and associated method is provided employing data capacity determination logic. The logic dynamically changes a data storage capacity of an electronic data storage memory. The change in capacity is made in relation to a transient energy during a power state change sequence performed by the electronic data storage memory. | 11-28-2013 |
20130318300 | Byte Caching with Chunk Sizes Based on Data Type - Methods and apparatus are provided for performing byte caching using a chunk size based on the object type of the object being cached. Byte caching is performed by receiving at least one data packet from at least one network node; extracting at least one data object from the at least one data packet; identifying an object type associated with the at least one data packet; determining a chunk size associated with the object type; and storing at least a portion of the at least one data packet in a byte cache based on the determined chunk size. The chunk size for the object type can be determined, for example, by evaluating one or more additional criteria, such as network conditions and object size. The object type may be, for example, an image object type, an audio object type, a video object type, or a text object type. | 11-28-2013 |
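The type-dependent chunking step above can be sketched briefly. The specific chunk sizes in the table, and the function and constant names, are illustrative assumptions; the publication does not give concrete values.

```python
# Sketch of byte caching with chunk sizes based on object type: pick a
# chunk size from the identified object type, then split the payload into
# chunks for the byte cache.
CHUNK_SIZE_BY_TYPE = {   # illustrative sizes, not taken from the abstract
    "text": 64,
    "image": 512,
    "audio": 1024,
    "video": 4096,
}

def chunk_object(data, object_type, default_size=256):
    size = CHUNK_SIZE_BY_TYPE.get(object_type, default_size)
    return [data[i:i + size] for i in range(0, len(data), size)]
```

The intuition for varying the size: small chunks find more duplicate bytes in text, while large chunks keep per-chunk index overhead low for bulky media objects.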
20130339604 | Highly Scalable Storage Array Management with Reduced Latency - Systems and methods for increasing scalability and reducing latency in relation to managing large numbers of storage arrays of a storage network. Separate, dedicated, communication channels may be established between an array manager running on a server and each of a number of storage arrays for respectively performing reading and writing operations to limit the delays imposed by repeated array connection setup and teardown and improve array communication stability (e.g., as compared to performing read/write operations over the same array connection). The read connection can be used to maintain current state information (e.g., volumes, capacities, and the like) for a plurality of storage arrays in a local cache of the array manager that can be quickly accessed by the array manager, such as for presenting substantially current, summary-type state information of the various storage arrays to a user (e.g., upon the user requesting to configure a particular storage array). | 12-19-2013 |
20130339605 | UNIFORM STORAGE COLLABORATION AND ACCESS - A computerized method of collaborating storage space across multiple devices according to file usage patterns of the devices. The method comprises receiving access to a plurality of storage media each having a storage space and managed by at least one of a plurality of devices, identifying for each of the plurality of devices at least one usage pattern of at least one of a plurality of file types, creating a virtually contiguous storage pool mapping physical memory addresses of the plurality of storage media, setting a file distribution policy of storing each of a plurality of data files stored in the plurality of storage media in the virtually contiguous storage pool according to a match between a file type and at least one usage pattern, and collaborating storage space across the plurality of storage media managed by the plurality of devices according to the file distribution policy. | 12-19-2013 |
20130346692 | NON-BLOCKING DATA TRANSFER VIA MEMORY CACHE MANIPULATION - A cache controller in a computer system is configured to manage a cache such that the use of bus bandwidth is reduced. The cache controller receives commands from a processor. In response, a cache mapping maintaining information for each block in the cache is modified. The cache mapping may include an address, a dirty bit, a zero bit, and a priority for each cache block. The address indicates an address in main memory for which the cache block caches data. The dirty bit indicates whether the data in the cache block is consistent with data in main memory at the address. The zero bit indicates whether data at the address should be read as a default value, and the priority specifies a priority for evicting the cache block. By manipulating this mapping information, commands such as move, copy, swap, zero, deprioritize and deactivate may be implemented. | 12-26-2013 |
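The mapping-level commands above can be sketched with a small model. This is a hedged illustration: the class, the dictionary-backed "backing store", and the subset of commands shown (`zero`, `move`) are inventions for the example, not the controller's actual interface.

```python
# Sketch of non-blocking transfer via cache mapping manipulation: each
# block carries an address, a dirty bit, and a zero bit, and commands are
# implemented by editing this metadata instead of moving data over the bus.
class CacheMapping:
    def __init__(self):
        self.blocks = {}   # block_id -> {"addr", "dirty", "zero"}

    def zero(self, block_id, addr):
        # Reads of addr now return a default value; no data is transferred.
        self.blocks[block_id] = {"addr": addr, "dirty": True, "zero": True}

    def move(self, block_id, new_addr):
        # Retarget the cached block to a new main-memory address.
        blk = self.blocks[block_id]
        blk["addr"] = new_addr
        blk["dirty"] = True    # main memory at new_addr is now stale

    def read(self, block_id, backing):
        blk = self.blocks[block_id]
        return 0 if blk["zero"] else backing[blk["addr"]]
```

The point of the design is visible in `zero` and `move`: both complete by touching only the mapping, deferring any bus traffic until a later writeback driven by the dirty bit.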
20140006711 | METHOD, SYSTEM, AND DEVICE FOR MODIFYING A SECURE ENCLAVE CONFIGURATION WITHOUT CHANGING THE ENCLAVE MEASUREMENT | 01-02-2014 |
20140006712 | SYSTEMS AND METHODS FOR FINE GRANULARITY MEMORY SPARING | 01-02-2014 |
20140025889 | METHODS AND SYSTEMS FOR USING STATE VECTOR DATA IN A STATE MACHINE ENGINE - A state machine engine includes a state vector system. The state vector system includes an input buffer configured to receive state vector data from a restore buffer and to provide state vector data to a state machine lattice. The state vector system also includes an output buffer configured to receive state vector data from the state machine lattice and to provide state vector data to a save buffer. | 01-23-2014 |
20140025890 | METHODS AND STRUCTURE FOR IMPROVED FLEXIBILITY IN SHARED STORAGE CACHING BY MULTIPLE SYSTEMS OPERATING AS MULTIPLE VIRTUAL MACHINES - Methods and structure for improved flexibility in managing cache memory in a storage controller of a computing device on which multiple virtual machines (VMs) are operating in a VM computing environment. Embodiments hereof provide for the storage controller to receive configuration information from a VM management system coupled with the storage controller where the configuration information comprises information regarding each VM presently operating on the computing device. Based on the configuration information, the storage controller allocates and de-allocates segments of the cache memory of the storage controller for use by the various virtual machines presently operating on the computing device. The configuration information may comprise indicia of the number of VMs presently operating as well as performance metric threshold configuration information to allocate/de-allocate segments based on present performance of each virtual machine. | 01-23-2014 |
20140040550 | MEMORY CHANNEL THAT SUPPORTS NEAR MEMORY AND FAR MEMORY ACCESS - A semiconductor chip comprising memory controller circuitry having interface circuitry to couple to a memory channel. The memory controller includes first logic circuitry to implement a first memory channel protocol on the memory channel. The first memory channel protocol is specific to a first volatile system memory technology. The interface also includes second logic circuitry to implement a second memory channel protocol on the memory channel. The second memory channel protocol is specific to a second non-volatile system memory technology. The second memory channel protocol is a transactional protocol. | 02-06-2014 |
20140047181 | System and Method for Updating Data in a Cache - In one embodiment, a computing system includes a cache having one or more memories and a cache manager. The cache manager is able to receive a request to write data to a first portion of the cache, write the data to the first portion of the cache, update a first map corresponding to the first portion of the cache, receive a request to read data from the first portion of the cache, read from a storage communicatively linked to the computing system data according to the first map, and update a second map corresponding to the first portion of the cache. The cache manager may also be able to write data to the storage according to the first map. | 02-13-2014 |
20140047182 | METHOD AND DEVICE FOR PROCESSING DATA USING WINDOW - Provided are a data processing method and device using a window. The data processing method may include caching data by applying a window to data stored in a memory on a per channel basis, and transmitting the cached data to a core processor using location information of a point. | 02-13-2014 |
20140052911 | HYBRID CACHING SYSTEM - A system operable to: receive a request for an application unit from a first device; generate a key for the application unit; look up segment cache indices corresponding to the application unit, according to the key; and determine whether the segment cache indices are available. Where the segment cache indices are available, the system may retrieve a segment cache using the segment cache indices, and then retrieve the application unit using the retrieved segment cache. Otherwise, where the segment cache indices are not available, the system may communicate the request to a second device to receive a response from the second device including the segment indices. Further, the system may receive the response from the second device; store a segment index sequence for the application unit in an application optimizer cache based on the response; and retrieve the application unit via the segment index sequence. | 02-20-2014 |
20140052912 | MEMORY DEVICE WITH A LOGICAL-TO-PHYSICAL BANK MAPPING CACHE - A memory device with a logical-to-physical (LTP) bank mapping cache that supports multiple read and write accesses is described herein. The memory device allows for at least one read operation and one write operation to be received during the same clock cycle. In the event that the incoming write operation is not blocked by the at least one read operation, data for that incoming write operation may be stored in the physical memory bank corresponding to a logical memory bank that is associated with the incoming write operation. In the event that the incoming write operation is blocked by the at least one read operation, then data for that incoming write operation may be stored in an unmapped physical bank that is not associated with any logical memory bank. | 02-20-2014 |
20140068186 | METHODS AND APPARATUS FOR DESIGNATING OR USING DATA STATUS INDICATORS - Memory devices and methods facilitate handling of data received by a memory device through the use of data grouping and assignment of data validity status values to grouped data. For example, data is received and delineated into one or more data groups and a data validity status is associated with each data group. Data groups having a valid status are latched into one or more cache registers for storage in an array of memory cells wherein data groups comprising an invalid status are rejected by the one or more cache registers. | 03-06-2014 |
20140068187 | IMAGE PROCESSING APPARATUS, CONTROL METHOD FOR IMAGE PROCESSING APPARATUS, AND STORAGE MEDIUM - An image processing apparatus that schedules and executes a process in response to a request for job processing includes a detection unit configured to detect a process which requests backing up of management information to be managed in the job processing, a setting unit configured to set, in a case where a process requesting data backup is detected, a caching destination to which management information requested to be backed up is to be cached to a volatile memory or a non-volatile memory based on a data amount of the management information requested to be backed up, and a cache unit configured to cache the management information in the set caching destination. | 03-06-2014 |
20140068188 | SYSTEM AND METHOD FOR MANAGING AN OBJECT CACHE - In order to optimize efficiency of deserialization, a serialization cache is maintained at an object server. The serialization cache is maintained in conjunction with an object cache and stores serialized forms of objects cached within the object cache. When an inbound request is received, a serialized object received in the request is compared to the serialization cache. If the serialized byte stream is present in the serialization cache, then the equivalent object is retrieved from the object cache, thereby avoiding deserialization of the received serialized object. If the serialized byte stream is not present in the serialization cache, then the serialized byte stream is deserialized, the deserialized object is cached in the object cache, and the serialized object is cached in the serialization cache. | 03-06-2014 |
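The serialization-cache idea in the entry above can be sketched in a few lines of Python. This is an illustrative reading of the abstract, not the patented implementation; the class name, the digest-based keying, and the pluggable deserializer are all assumptions for the example.

```python
import hashlib

class SerializationAwareCache:
    """Sketch: keep a serialization cache keyed by the serialized byte
    stream alongside an object cache, so that a byte stream seen before
    skips deserialization entirely."""

    def __init__(self, deserialize):
        self.deserialize = deserialize        # e.g. json.loads (assumption)
        self.serialization_cache = {}         # digest of bytes -> object key
        self.object_cache = {}                # object key -> deserialized object
        self.deserialize_count = 0            # for observing cache behavior

    def get(self, serialized: bytes):
        digest = hashlib.sha256(serialized).hexdigest()
        key = self.serialization_cache.get(digest)
        if key is not None:
            # Hit: return the equivalent cached object, no deserialization.
            return self.object_cache[key]
        # Miss: deserialize once, then cache both forms.
        obj = self.deserialize(serialized)
        self.deserialize_count += 1
        self.serialization_cache[digest] = digest   # serialized form -> object key
        self.object_cache[digest] = obj
        return obj
```

A second `get` with the same byte stream returns the cached object without invoking the deserializer again, which is the efficiency the abstract targets.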
20140068189 | ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS - A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress. | 03-06-2014 |
20140068190 | STACKED MEMORY DEVICES, SYSTEMS, AND METHODS - Memory requests for information from a processor are received in an interface device, and the interface device is coupled to a stack including two or more memory devices. The interface device is operated to select a memory device from a number of memory devices including the stack, and to retrieve some or all of the information from the selected memory device for the processor. Additional apparatus, systems and methods are disclosed. | 03-06-2014 |
20140075117 | DISPLAY PIPE ALTERNATE CACHE HINT - A system and method for efficiently allocating data in a memory hierarchy. A system includes a memory controller for controlling accesses to a memory and a display controller for processing video frame data. The memory controller includes a cache capable of storing data read from the memory. A given video frame may be processed by the display controller and presented on a respective display screen. During processing, control logic within the display controller sends multiple memory access requests to the memory controller with cache hint information. For the frame data, the cache hint information may alternate between (i) indicating to store frame data read in response to respective requests in the memory cache and (ii) indicating to not store the frame data read in response to respective requests in the memory cache. | 03-13-2014 |
20140075118 | SYSTEM CACHE WITH QUOTA-BASED CONTROL - Methods and apparatuses for implementing a system cache with quota-based control. Quotas may be assigned on a group ID basis to each group ID that is assigned to use the system cache. The quota does not reserve space in the system cache; rather, the quota may be used in any way within the system cache. The quota may prevent a given group ID from consuming more than a desired amount of the system cache. Once a group ID's quota has been reached, no additional allocation will be permitted for that group ID. The total amount of allocated quota for all group IDs can exceed the size of the system cache, such that the system cache can be oversubscribed. A sticky state can be used to prioritize data retention within the system cache when oversubscription is being used. | 03-13-2014 |
20140075119 | HYBRID ACTIVE MEMORY PROCESSOR SYSTEM - In general, the present invention relates to data cache processing. Specifically, the present invention relates to a system that provides a reconfigurable dynamic cache that varies the operation strategy of cache memory based on the demand from applications originating from different external general processor cores, along with functions of a virtualized hybrid core system. The system includes receiving a data request, selecting an operational mode based on the data request and a predefined selection algorithm, and processing the data request based on the selected operational mode. The system is further configured to enable processing core and memory utilization by external systems through virtualization. | 03-13-2014 |
20140082283 | EFFICIENT CACHE VOLUME SIT SCANS - A processor, operable in a computing storage environment, allocates portions of a Scatter Index Table (SIT) disproportionately between a larger portion dedicated for meta data tracks, and a smaller portion dedicated for user data tracks, and processes a storage operation through the disproportionately allocated portions of the SIT using an allocated number of Task Control Blocks (TCB). | 03-20-2014 |
20140089584 | ACCELERATED PATH SELECTION BASED ON NUMBER OF WRITE REQUESTS AND SEQUENTIAL TREND - Embodiments herein relate to selecting an accelerated path based on a number of write requests and a sequential trend. One of an accelerated path and a cache path is selected between a host and a storage device based on at least one of a number of write requests and a sequential trend. The cache path connects the host to the storage device via a cache. The number of write requests is based on a total number of random and sequential write requests from a set of outstanding requests from the host to the storage device. The sequential trend is based on a percentage of sequential read and sequential write requests from the set of outstanding requests. | 03-27-2014 |
20140089585 | HIERARCHY MEMORY MANAGEMENT - In one embodiment, a storage system comprises: a first type interface being operable to communicate with a server using a remote memory access; a second type interface being operable to communicate with the server using a block I/O (Input/Output) access; a memory; and a controller being operable to manage (1) a first portion of storage areas of the memory to allocate for storing data, which is to be stored in a physical address space managed by an operating system on the server and which is sent from the server via the first type interface, and (2) a second portion of the storage areas of the memory to allocate for caching data, which is sent from the server to a logical volume of the storage system via the second type interface and which is to be stored in a storage device of the storage system corresponding to the logical volume. | 03-27-2014 |
20140101386 | DATA STORAGE DEVICE INCLUDING BUFFER MEMORY - A data storage device includes a data storage medium; a micro control unit (MCU) connected to a host through a first interface method and configured to control the data storage medium in response to a request of the host; and a buffer memory connected to the host through a second interface method, connected to the MCU, and controlled by the MCU and the host, respectively. | 04-10-2014 |
20140108727 | STORAGE APPARATUS AND DATA PROCESSING METHOD - To raise the CPU cache hit rate and improve I/O processing, the controller is a CPU configured from a CPU core and a CPU cache, wherein the CPU selects memory bus optimization execution processing or cache poisoning optimization execution processing according to an attribute of the access target volume on the basis of an access request. If the memory bus optimization execution processing is selected, the CPU loads the target data into the CPU core after storing the target data in the main storage area, and if the cache poisoning optimization execution processing is selected, the CPU loads the target data into the CPU core after storing the target data in the temporary area of the CPU cache from the CPU memory, and the CPU core checks the target data which was loaded from the main storage area or the temporary area of the CPU cache. | 04-17-2014 |
20140108728 | MANAGING A LOCK TO A RESOURCE SHARED AMONG A PLURALITY OF PROCESSORS - Provided are a computer program product, system, and method for managing a lock to a resource shared among a plurality of processors. Slots in a memory implement the lock on the shared resource. The slots correspond to counter values that are consecutively numbered and indicate one of busy and free. A requesting processor fetches a counter value comprising a fetched counter value. A determination is made as to whether the slot corresponding to the fetched counter value indicates free. A processor identifier of the requesting processor is inserted into the slot corresponding to the fetched counter value in response to determining that the slot corresponding to the fetched counter value indicates not free. The requesting processor accesses the shared resource in response to determining that the slot corresponding to the fetched counter value indicates free. | 04-17-2014 |
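The slot-and-counter lock in the entry above resembles a ticket lock with a waiter table. The sketch below is a simplified, single-threaded simulation of that idea under assumed semantics (the class name, the hand-off in `release`, and the slot indexing are illustrative, not taken from the patent):

```python
class SlotLock:
    """Sketch of a slot-based lock: consecutively numbered counter values
    map to slots; a requester fetches a ticket, enters if its slot is
    effectively free, and otherwise records its processor ID in the slot."""
    FREE = None

    def __init__(self, num_slots: int):
        self.slots = [self.FREE] * num_slots   # waiting processor ID per slot
        self.counter = 0                       # next ticket to hand out
        self.owner_ticket = 0                  # ticket currently allowed in

    def acquire(self, processor_id: int) -> bool:
        ticket = self.counter
        self.counter += 1
        if ticket == self.owner_ticket:
            return True                        # slot "free": enter immediately
        # Slot "busy": record who is waiting on this ticket.
        self.slots[ticket % len(self.slots)] = processor_id
        return False

    def release(self):
        self.owner_ticket += 1
        slot = self.owner_ticket % len(self.slots)
        waiter = self.slots[slot]
        self.slots[slot] = self.FREE
        return waiter                          # next processor to run, or FREE
```

A real implementation would use atomic fetch-and-increment for the ticket and have the blocked processor spin or sleep on its slot; the simulation only shows the bookkeeping.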
20140122801 | MEMORY CONTROLLER WITH INTER-CORE INTERFERENCE DETECTION - Embodiments are described for a method for controlling access to memory in a processor-based system comprising monitoring a number of interference events, such as bank contentions, bus contentions, row-buffer conflicts, and increased write-to-read turnaround time caused by a first core in the processor-based system that causes a delay in access to the memory by a second core in the processor-based system; deriving a control signal based on the number of interference events; and transmitting the control signal to one or more resources of the processor-based system to reduce the number of interference events from an original number of interference events. | 05-01-2014 |
20140122802 | ACCESSING AN OFF-CHIP CACHE VIA SILICON PHOTONIC WAVEGUIDES - The disclosed embodiments provide a system in which a processor chip accesses an off-chip cache via silicon photonic waveguides. The system includes a processor chip and a cache chip that are both coupled to a communications substrate. The cache chip comprises one or more cache banks that receive cache requests from a structure in the processor chip optically via a silicon photonic waveguide. More specifically, the silicon photonic waveguide is comprised of waveguides in the processor chip, the communications substrate, and the cache chip, and forms an optical channel that routes an optical signal directly from the structure to a cache bank in the cache chip via the communications substrate. Transmitting optical signals from the processor chip directly to cache banks on the cache chip facilitates reducing the wire latency of cache accesses and allowing each cache bank on the cache chip to be accessed with uniform latency. | 05-01-2014 |
20140122803 | INFORMATION PROCESSING APPARATUS AND METHOD THEREOF - Data representing the storage state of the main memory of an information processing device is saved in a secondary storage device. The data saved in the secondary storage device is transferred to the main memory in reactivation of the information processing device to restore the storage state of the main memory. A cache allocated in the main memory is deallocated before generating data to be saved. | 05-01-2014 |
20140143491 | SEMICONDUCTOR APPARATUS AND OPERATING METHOD THEREOF - A semiconductor apparatus includes stacked memory dies; a controller configured to control the memory dies; and a base die configured to electrically connect the memory dies and the controller. The base die includes a control unit configured to receive an external address, a request and external data from the controller; a memory input interface configured to receive a memory control signal for controlling the memory dies, from the control unit, and first cache data, and output an internal address, an internal command and internal data to the memory dies; a write cache memory configured to receive a cache control signal and transfer data from the control unit, output the first cache data to the memory input interface, and output second cache data to a memory output interface; and the memory output interface configured to output the second cache data and stored data inputted from the memory dies, to the controller. | 05-22-2014 |
20140143492 | Using Predictions for Store-to-Load Forwarding - The described embodiments include a core that uses predictions for store-to-load forwarding. In the described embodiments, the core comprises a load-store unit, a store buffer, and a prediction mechanism. During operation, the prediction mechanism generates a prediction that a load will be satisfied using data forwarded from the store buffer because the load loads data from a memory location in a stack. Based on the prediction, the load-store unit first sends a request for the data to the store buffer in an attempt to satisfy the load using data forwarded from the store buffer. If data is returned from the store buffer, the load is satisfied using the data. However, if the attempt to satisfy the load using data forwarded from the store buffer is unsuccessful, the load-store unit then separately sends a request for the data to a cache to satisfy the load. | 05-22-2014 |
20140173200 | NON-BLOCKING CACHING TECHNIQUE - The described implementations relate to processing of electronic data. One implementation is manifested as a system that can include a cache module and at least one processing device configured to execute the cache module. The cache module can be configured to store data items in slots of a cache structure, receive a request for an individual data item that maps to an individual slot of the cache structure, and, when the individual slot of the cache structure is not available, return without further processing the request. For example, the request can be received from a calling application or thread that can proceed without blocking irrespective of whether the request is fulfilled by the cache module. | 06-19-2014 |
20140173201 | ACQUIRING REMOTE SHARED VARIABLE DIRECTORY INFORMATION IN A PARALLEL COMPUTER - Methods, parallel computers, and computer program products for acquiring remote shared variable directory (SVD) information in a parallel computer are provided. Embodiments include a runtime optimizer determining that a first thread of a first task requires shared resource data stored in a memory partition corresponding to a second thread of a second task. Embodiments also include the runtime optimizer requesting from the second thread, in response to determining that the first thread of the first task requires the shared resource data, SVD information associated with the shared resource data. Embodiments also include the runtime optimizer receiving from the second thread, the SVD information associated with the shared resource data. | 06-19-2014 |
20140173202 | INFORMATION PROCESSING APPARATUS AND SCHEDULING METHOD - An information processing apparatus includes: at least one access unit that issues a memory access request for a memory; an arbitration unit that arbitrates the memory access request issued from the access unit; a management unit that allows the access unit that is an issuance source of the memory access request according to a result of the arbitration made by the arbitration unit to perform a memory access to the memory; a processor that accesses the memory through at least one cache memory; and a timing adjusting unit that holds a process relating to the memory access request issued by the access unit for a holding time set in advance and cancels the holding of the process relating to the memory access request in a case where power of the at least one cache memory is turned off in the processor before the holding time expires. | 06-19-2014 |
20140181401 | METHOD AND APPARATUS FOR QUERYING FOR AND TRAVERSING VIRTUAL MEMORY AREA - Embodiments of the present invention disclose a method and an apparatus for querying for and traversing a virtual memory area. The method includes: determining whether a virtual memory area (vma) corresponding to a query address is in an adjacent range of a cached vma, and if the vma corresponding to the query address is in the adjacent range of the cached vma, querying for the vma by using a thread on a node of a threaded red-black tree. Since an adjacent range of the cached vma can always be determined, the hit rate of accessing the cache is improved, and the time complexity of implementing the whole vma traversal is O(n), thereby improving vma query efficiency. | 06-26-2014 |
20140189237 | DATA PROCESSING METHOD AND APPARATUS - Embodiments of the present invention provide a data processing method and apparatus. According to the embodiments of the present invention, when it is found that a data hash value in a currently received data stream exceeds a preset first threshold, a part or all of data in the data stream is not deduplicated, and is directly stored, so as to prevent the data in the data stream from being dispersedly stored into a plurality of storage areas; instead, the part or all of the data is stored into a storage area in a centralized manner, so that a deduplication rate is effectively improved on the whole, particularly in a scenario of large data storage amount. | 07-03-2014 |
20140201441 | Surviving Write Errors By Using Copy-On-Write To Another System - In one embodiment, a method may include performing a copy-on-write in response to a write error from a first system, where the copy-on-write copies to a second system. The method may further include receiving a write request at the first system from a third system. The method may additionally include storing the data from the write request in a cache. The method may also include reporting successful execution of the write request. The method may further include writing data from the write request to a drive in the first system. The method may additionally include receiving the write error from the drive. In an additional embodiment, performing the copy-on-write may use the data stored in the cache. | 07-17-2014 |
20140208026 | INITIALIZATION OF A STORAGE DEVICE - A storage device including a first storage unit including a first media of a first type, a second storage unit including a second media of a second type, and a controller. The controller initializes the storage device for a host by receiving an initialization query from the host, identifying, to the host, that the storage device comprises the second storage unit but not the first storage unit, receiving an indication from the host indicating that the host is compatible with the first storage unit, and identifying, to the host, that the storage device comprises the first storage unit and the second storage unit. The host initializes the storage device by initializing the second storage unit, transmitting the indication to the controller indicating that the host is compatible with the first storage unit, receiving the identification of the first storage unit from the controller, and initializing the first storage unit. | 07-24-2014 |
20140208027 | CONFIGURABLE CACHE AND METHOD TO CONFIGURE SAME - A method includes receiving an address at a tag state array of a cache, wherein the cache is configurable to have a first size and a second size that is smaller than the first size. The method further includes identifying a first portion of the address as a set index, wherein the first portion has a same number of bits when the cache has the first size as when the cache has the second size. The method further includes using the set index to locate at least one tag field of the tag state array, identifying a second portion of the address to compare to a value stored at the at least one tag field, locating at least one state field of the tag state array that is associated with a particular tag field that matches the second portion, identifying a cache line based on a comparison of a third portion of the address to at least one status bit of the at least one state field when the cache has the second size, and retrieving the cache line. | 07-24-2014 |
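The tag/set-index/offset decomposition this entry walks through can be sketched as follows. The bit widths are assumptions for the example; the abstract's key point, that the set index keeps the same width at both cache sizes while status bits in the state field disambiguate, is noted in the docstring rather than modeled:

```python
def split_address(addr: int, offset_bits: int = 6, index_bits: int = 7):
    """Decompose an address into (tag, set index, offset).

    Illustrative sketch, not the patented circuit: per the abstract, the
    set-index field has the same number of bits whether the cache is
    configured at its first (larger) or second (smaller) size; at the
    smaller size, a third portion of the address is compared against
    status bits of the state field (not modeled here) to pick the line.
    """
    offset = addr & ((1 << offset_bits) - 1)            # low bits: byte offset
    index = (addr >> offset_bits) & ((1 << index_bits) - 1)  # middle bits: set
    tag = addr >> (offset_bits + index_bits)            # remaining bits: tag
    return tag, index, offset
```

For example, with 4 offset bits and 4 index bits, address `0b1101_0110_1100` splits into tag `13`, index `6`, offset `12`.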
20140208028 | EXTENDER STORAGE POOL SYSTEM - Various embodiments for managing data in a computing storage environment by a processor device are provided. In one such embodiment, by way of example only, an extender storage pool system is configured for at least one of a source and a target storage pool to expand an available storage capacity for the at least one of the source and the target storage pool. A most recent snapshot of the data is sent to the extender storage pool system. The most recent snapshot of the data is stored on the extender storage pool system as a last replicated snapshot of the data. | 07-24-2014 |
20140223098 | DYNAMIC MANAGEMENT OF HETEROGENEOUS MEMORY - A method of operating a computing device includes dynamically managing at least two types of memory based on workloads, or requests from different types of applications. A first type of memory may be high performance memory that may have a higher bandwidth, lower memory latency and/or lower power consumption than a second type of memory in the computing device. In an embodiment, the computing device includes a system on a chip (SoC) that includes Wide I/O DRAM positioned with one or more processor cores. A Low Power Double Data Rate 3 dynamic random access memory (LPDDR3 DRAM) is externally connected to the SoC or is an embedded part of the SoC. In embodiments, the computing device may be included in at least a cell phone, mobile device, embedded system, video game, media console, laptop computer, desktop computer, server and/or datacenter. | 08-07-2014 |
20140223099 | CONTENT MANAGEMENT PLATFORM APPARATUS, METHODS, AND SYSTEMS - The CONTENT MANAGEMENT PLATFORM APPARATUSES, METHODS AND SYSTEMS (“CMP”) transform content seed selections and recommendations via CMP components such as discovery and social influence into events and discovery of other contents for users and revenue for right-holders. In one embodiment, the CMP may obtain content discovery supportive information for a universally resolvable user. The CMP may then determine apportionment heuristics among the obtained information for the user. In one implementation, the CMP may identify a first set of universally resolvable content items based on the determined apportionment heuristics and may create a caching queue that includes the identified first set of universally resolvable content items. The first set of universally resolvable content items in the caching queue may then be provided to the user. | 08-07-2014 |
20140223100 | RANGE BASED COLLECTION CACHE - A system enables older cached data to be kept against the same key while adding new sets of data to the cache that have the affected dimensional changes. Set membership functions such as intersection and difference may be used on each dimension of the data to derive the correct range partition to which the data must belong. Each range-based, partitioned set in the cache that is against the same key is mutually exclusive with another range-based, partitioned set for the same key. With range-based, partitioned sets, a key can be queried to find out which sets are already stored and which sets may need to be stored. This approach allows cached data to be served for longer when there are queries that are only interested in subsets of the data. | 08-07-2014 |
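The range-difference step described above, determining which sub-ranges under a key are already cached and which still need to be fetched, can be sketched with half-open intervals. This helper is an illustration of the general set-difference idea on one dimension, not the patented method:

```python
def missing_ranges(requested, cached):
    """Given a requested (start, end) range and a list of disjoint cached
    (start, end) ranges held under the same key, return the sub-ranges
    that still need to be stored. Intervals are half-open [start, end)."""
    start, end = requested
    gaps = []
    for c_start, c_end in sorted(cached):
        if c_end <= start or c_start >= end:
            continue                          # no overlap with the request
        if c_start > start:
            gaps.append((start, c_start))     # uncached gap before this set
        start = max(start, c_end)             # advance past the cached set
    if start < end:
        gaps.append((start, end))             # trailing uncached gap
    return gaps
```

Because each cached partition under a key is mutually exclusive with the others, the sorted sweep visits each at most once, and an empty result means the whole request can be served from cache.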
20140244930 | ELECTRONIC DEVICES HAVING SEMICONDUCTOR MAGNETIC MEMORY UNITS - An electronic device comprising a semiconductor memory unit that includes a resistance variable element configured to be changed in a resistance value according to a value of data stored therein; a first reference resistance element having a first resistance value; a second reference resistance element having a second resistance value larger than the first resistance value; and a comparison unit configured to receive a voltage corresponding to the resistance value of the resistance variable element through a first input terminal and a second input terminal thereof, a voltage corresponding to the first resistance value of the first reference resistance element through a third input terminal, and a voltage corresponding to the second resistance value of the second reference resistance element through a fourth input terminal, the comparison unit configured to output a result of comparing inputs to the first input terminal and the second input terminal and inputs to the third input terminal and fourth input terminal. | 08-28-2014 |
20140244931 | ELECTRONIC DEVICE - An electronic device comprising a semiconductor memory unit that may include a cell array including a plurality of storage cells; a first line connected to one ends of the plurality of storage cells; a second line connected to the other ends of the plurality of storage cells; a first driver connected to one end of the first line at a first contact location on one side of the cell array, and configured to apply a first electrical signal to the one end of the first line; and a second driver connected to one end of the second line at a second contact location on a side of the cell array opposing the side of the cell array where the first contact location is located, and configured to apply a second electrical signal to the one end of the second line. | 08-28-2014 |
20140250272 | SYSTEM AND METHOD FOR FETCHING DATA DURING READS IN A DATA STORAGE DEVICE - A controller for a data storage device that includes a cache memory and a non-volatile solid state memory is configured to fetch data from the non-volatile solid state memory in response to a read command, conditionally fetch additional data from the non-volatile solid state memory in response to the read command, and then store some or all of the fetched data in the cache memory. The condition for additional data fetch is met when it is determined that a sequence of N (where N is two or more) most recent read commands is requesting data from a successively increasing and consecutive address range. The additional data fetch speeds up subsequent reads, especially when the requested data sizes are relatively small. When the requested data sizes are larger, improvements in read speeds can be achieved if the large reads are well spaced in time. | 09-04-2014 |
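The conditional-fetch rule in this entry, prefetch only after N consecutive read commands cover a successively increasing, contiguous address range, can be sketched as a small state machine. N and the prefetch size are illustrative parameters, and the class name is an assumption:

```python
class SequentialPrefetcher:
    """Sketch: track whether the most recent reads form a contiguous,
    increasing run; once the run reaches N commands, ask for extra data."""

    def __init__(self, n: int = 2, prefetch_len: int = 64):
        self.n = n                      # run length that triggers prefetch
        self.prefetch_len = prefetch_len
        self.next_addr = None           # address just past the last read
        self.run = 0                    # length of the current sequential run

    def on_read(self, addr: int, length: int) -> int:
        """Return how many extra units to fetch beyond the request."""
        if addr == self.next_addr:
            self.run += 1               # read continues the sequential run
        else:
            self.run = 1                # gap or first read: restart the run
        self.next_addr = addr + length
        return self.prefetch_len if self.run >= self.n else 0
```

The first read never triggers a prefetch, the second consecutive read does (with the default N of 2), and any non-contiguous read resets the run.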
20140258618 | MULTI LATENCY CONFIGURABLE CACHE - Described herein are technologies for optimizing different cache configurations of a size-configurable cache. One configuration includes a base cache portion and a removable cache portion, each with different latencies. The latency of the base cache portion is modified to correspond to the latency of the removable portion. | 09-11-2014 |
20140258619 | APPARATUSES AND METHODS FOR A MEMORY DIE ARCHITECTURE - Apparatuses and methods for reducing capacitance on a data bus are disclosed herein. In accordance with one or more described embodiments, an apparatus may comprise a plurality of memories coupled to an internal data bus and a command and address bus, each of the memories configured to receive a command on the command and address bus. One of the plurality of memories may be coupled to an external data bus. The one of the plurality of memories may be configured to provide program data to the internal data bus when the command comprises a program command and another of the plurality of memories is a target memory of the program command and may be configured to provide read data to the external data bus when the command comprises a read command and the another of the plurality of memories is a target memory of the read command. | 09-11-2014 |
20140281228 | System and Method for an Accelerator Cache Based on Memory Availability and Usage - The storage processor of a data storage system such as a storage array automatically configures one or more accelerator caches (“AC”) upon detecting the presence of one or more solid-state storage devices (e.g., SSD drives) installed in the data storage system, such as when a storage device is plugged into a designated slot of the data storage system, without requiring any user configuration of the AC or specification by the user of the type(s) of data to be cached in the AC. The AC therefore provides a zero configuration cache that can be used to cache any of various types of data in the data storage system. The AC cache can be used in any of a wide variety of data storage systems including, without limitation, file servers, storage arrays, computers, etc. Multiple ACs may be created to cache different types of data. | 09-18-2014 |
20140281229 | DYNAMIC STORAGE DEVICE PROVISIONING - A method or system for allocating the storage space of a storage medium into a permanently allocated media cache storage region, a dynamically mapped media cache storage region, and a statically mapped storage region, wherein the dynamically mapped media cache storage region is used for performance and/or reliability enhancing functions. | 09-18-2014 |
20140281230 | CACHE MANAGEMENT IN MANAGED RUNTIME ENVIRONMENTS - Methods and apparatus to provide cache management in managed runtime environments are described. In one embodiment, a controller comprises logic to determine an update frequency for an object in the runtime environment and assigning the object to an unshared cache line when the update frequency exceeds an update frequency threshold. Other embodiments are also described. | 09-18-2014 |
20140281231 | ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME - This technology relates to an electronic device and a method for fabricating the same. An electronic device in accordance with this technology includes semiconductor memory. The semiconductor memory may include a magnetization-pinned layer configured to include a first magnetic layer, a second magnetic layer, and a non-magnetic layer interposed between the first magnetic layer and the second magnetic layer, a free magnetization layer spaced apart from the magnetization-pinned layer, a tunnel barrier layer interposed between the magnetization-pinned layer and the free magnetization layer, and a magnetic spacer configured to come in contact with a side of the first magnetic layer and at least part of a side of the second magnetic layer. | 09-18-2014 |
20140289467 | CACHE MISS DETECTION FILTER - Systems and methods are provided that facilitate cache miss detection in an electronic device. The system contains a probabilistic filter coupled to a processing device. A probing component determines existence of an entry associated with a request. The probing component can communicate a miss token without the need to query a cache. Accordingly, power consumption can be reduced and electronic devices can be more efficient. | 09-25-2014 |
20140297954 | CACHING MECHANISM TO IMPROVE USER INTERFACE RESPONSIVENESS - A method to preferentially wait for fresh data from a primary source to become available in a system where there is also an older set of data from a secondary source. The method includes receiving a data request that is to be displayed and determining if the data from the primary source is available. If the primary source is not available, a dynamic threshold value is tested to detect if a wait time for access to the primary source is exceeded. If the wait time for access to the primary source is exceeded, then older data from the secondary source instead of the primary source is acquired. The dynamic threshold includes an elapsed time since receipt of the request as measured from a receipt time of a prior request. | 10-02-2014 |
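The wait-then-fall-back logic in the 20140297954 abstract above lends itself to a compact illustration. The following is a minimal sketch, not the application's actual method; the function names, the polling interval, and the return convention (the primary source yields None while unavailable) are assumptions:

```python
import time

def fetch_with_fallback(primary, secondary, wait_threshold_s):
    """Prefer fresh data from the primary source; fall back to the
    older copy from the secondary source once the dynamic wait
    threshold is exceeded."""
    start = time.monotonic()
    while time.monotonic() - start < wait_threshold_s:
        data = primary()          # assumed to return None while unavailable
        if data is not None:
            return data, "primary"
        time.sleep(0.01)          # brief pause before re-polling
    return secondary(), "secondary"

# Here the primary never becomes available, so the stale copy is used.
data, source = fetch_with_fallback(lambda: None, lambda: "stale", 0.05)
```

A monotonic clock is used for the threshold test so that wall-clock adjustments cannot shorten or lengthen the wait.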
20140297955 | STORAGE CONTROL DEVICE AND CONTROL METHOD - The first storage area stores original data of an update target that is to be updated by a host. The controller divides data to be written over the original data of the update target stored in the first storage area into a plurality of pieces of update data and thereby distributes the plurality of pieces of update data for each of successive addresses. The second storage area stores the plurality of update data distributed by the controller. The third storage area stores information in which an update area address, which is an address of the first storage area to be overwritten by the plurality of pieces of update data of the original data of the update target, is associated with a storage destination address, which is an address of the second storage area that has stored the plurality of pieces of update data. | 10-02-2014 |
20140304471 | DEMOTE INSTRUCTION FOR RELINQUISHING CACHE LINE OWNERSHIP - A computer system microprocessor core having a cache subsystem executes a demote instruction to cause an exclusively owned demote instruction specified cache line owned by the same microprocessor core to be shared or read-only. | 10-09-2014 |
20140304472 | LOGICAL SECTOR MAPPING IN A FLASH STORAGE ARRAY - A system and method for efficiently performing user storage virtualization for data stored in a storage system including a plurality of solid-state storage devices. A data storage subsystem supports multiple mapping tables. Records within a mapping table are arranged in multiple levels. Each level stores pairs of a key value and a pointer value. The levels are sorted by time. New records are inserted in a created newest (youngest) level. No edits are performed in-place. All levels other than the youngest may be read only. The system may further include an overlay table which identifies those keys within the mapping table that are invalid. | 10-09-2014 |
20140310461 | OPTIMIZED AND PARALLEL PROCESSING METHODS WITH APPLICATION TO QUERY EVALUATION - Methods of computing the results of logical operations on large sets are described which coax a processor into utilizing processor caches efficiently and thereby reduce the latency of the results. The methods are particularly useful in parallel processing systems. Such computations can improve the evaluation of queries, particularly queries in faceted navigation and TIE systems. | 10-16-2014 |
20140310462 | Cache Allocation System and Method Using a Sampled Cache Utility Curve in Constant Space - Cache utility curves are determined for different software entities depending on how frequently their storage access requests lead to cache hits or cache misses. Although possible, not all access requests need be tested, but rather only a sampled subset, determined by whether a hash value of each current storage location identifier (such as an address or block number) meets one or more sampling criteria. The sampling rate is adaptively changed so as to hold the number of location identifiers needed to be stored to compute the cache utility curves to within a set maximum limit. | 10-16-2014 |
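The hash-based spatial sampling in the 20140310462 abstract above — test only those storage locations whose hashed identifier meets a sampling criterion — can be sketched as follows. The function names, the choice of hash, and the 25% rate are illustrative assumptions, not taken from the application:

```python
import hashlib

SAMPLE_RATE = 0.25  # track roughly 1 in 4 distinct locations (illustrative)

def sampled(block_id: int) -> bool:
    """Deterministically sample a fixed subset of location identifiers
    by hashing, so the same blocks are always tracked."""
    h = hashlib.blake2b(str(block_id).encode(), digest_size=4).digest()
    return int.from_bytes(h, "big") / 2**32 < SAMPLE_RATE

def reuse_distances(trace):
    """Reuse distances for sampled accesses only; because sampling is
    uniform over locations, these approximate the full-trace
    distribution that a cache utility curve is built from."""
    last_seen, dists = {}, []
    for t, block in enumerate(trace):
        if not sampled(block):
            continue
        if block in last_seen:
            dists.append(t - last_seen[block])
        last_seen[block] = t
    return dists
```

Sampling by hash rather than at random makes the tracked subset stable across the whole trace, which is what keeps the per-entity bookkeeping bounded.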
20140310463 | SYSTEMS AND METHODS FOR TRACKING WORKING-SET ESTIMATES WITH A LIMITED RESOURCE BUDGET - Embodiments of the systems and techniques described here can leverage several insights into the nature of workload access patterns and the working-set behavior to reduce the memory overheads. As a result, various embodiments make it feasible to maintain running estimates of a workload's cacheability in current storage systems with limited resources. For example, some embodiments provide for a method comprising estimating cacheability of a workload based on a first working-set size estimate generated from the workload over a first monitoring interval. Then, based on the cacheability of the workload, a workload cache size can be determined. A cache then can be dynamically allocated (e.g., change, possibly frequently, the cache allocation for the workload when the current allocation and the desired workload cache size differ), within a storage system for example, in accordance with the workload cache size. | 10-16-2014 |
20140331010 | SOFTWARE PERFORMANCE BY IDENTIFYING AND PRE-LOADING DATA PAGES - Embodiments relate to methods, computer systems and computer program products for improving software performance by identifying and preloading data pages. Embodiments include executing an instruction that requests a data page from the one or more auxiliary storage devices. Based on determining that the instruction is present in the long-running instruction list, embodiments include examining one or more characteristics of a plurality of data pages that will be requested by the instruction. Based on determining that the plurality of data pages are located on a single auxiliary storage device and that the plurality of data pages can be efficiently retrieved by the single auxiliary storage device, embodiments include initiating a pre-load operation to move the plurality of data pages to the main memory. | 11-06-2014 |
20140344519 | DISTORTION CANCELLATION IN 3-D NON-VOLATILE MEMORY - A method in a memory that includes multiple analog memory cells arranged in a three-dimensional (3-D) configuration, includes identifying multiple groups of potentially-interfering memory cells that potentially cause interference to a group of target memory cells. Partial distortion components, which are inflicted by the respective groups of the potentially-interfering memory cells on the target memory cells, are estimated. The partial distortion components are progressively accumulated so as to produce an estimated composite distortion affecting the target memory cells, while retaining only the composite distortion and not the partial distortion components. The target memory cells are read, and the interference in the target memory cells is canceled based on the estimated composite distortion. | 11-20-2014 |
20140344520 | SYSTEM FOR CACHING DATA - A system for caching data in a distributed data processing system allows for the caching of user-modifiable data (as well as other types of data) across one or multiple entities in a manner that prevents stale data from being improperly used. | 11-20-2014 |
20140359219 | Cache Memory Controller for Accelerated Data Transfer - A cache memory controller in a computer system, such as a multicore processing system, provides compression for writes to memory, such as an off-chip memory, and decompression after reads from memory. Application accelerator processors in the system generate application data and requests to read/write the data from/to memory. The cache memory controller uses information relating to the location parameters of buffers allocated for application data and sets parameters to configure compression and decompression operations. The cache memory controller monitors memory addresses specified in read requests and write requests from/to the first memory. The requested memory address is compared to the location parameters for the allocated buffers to select the set of parameters for the particular application data. Compression or decompression is applied to the application data in accordance with the selected set of parameters. The data size of the data transferred to/from memory is reduced. | 12-04-2014 |
20140372699 | TRANSLATING CACHE HINTS - Systems and methods for translating cache hints between different protocols within a SoC. A requesting agent within the SoC generates a first cache hint for a transaction, and the first cache hint is compliant with a first protocol. The first cache hint can be set to a reserved encoding value as defined by the first protocol. Prior to the transaction being sent to the memory subsystem, the first cache hint is translated into a second cache hint. The memory subsystem recognizes cache hints which are compliant with a second protocol, and the second cache hint is compliant with the second protocol. | 12-18-2014 |
20150012705 | REDUCING MEMORY TRAFFIC IN DRAM ECC MODE - A method for managing memory traffic includes causing first data to be written to a data cache memory, where a first write request comprises a partial write and writes the first data to a first portion of the data cache memory, and further includes tracking the number of partial writes in the data cache memory. The method further includes issuing a fill request for one or more partial writes in the data cache memory if the number of partial writes in the data cache memory is greater than a predetermined first threshold. | 01-08-2015 |
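The partial-write bookkeeping in the 20150012705 abstract above (count partial writes in the data cache, issue fill requests once the count passes a threshold) might be sketched like this; the class name, the line identifiers, and the policy of filling all outstanding partial lines at once are illustrative assumptions:

```python
class PartialWriteTracker:
    """Track cache lines holding partial writes; once the count passes
    a threshold, emit fill requests so the lines can be completed and
    written back whole instead of via per-line read-modify-write."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.partial_lines = set()
        self.fill_requests = []

    def write(self, line, full):
        if full:
            self.partial_lines.discard(line)   # a full write completes the line
        else:
            self.partial_lines.add(line)
        if len(self.partial_lines) > self.threshold:
            # Issue fills for all outstanding partial lines.
            self.fill_requests.extend(sorted(self.partial_lines))
            self.partial_lines.clear()

t = PartialWriteTracker(threshold=2)
for line in (0x10, 0x20, 0x30):
    t.write(line, full=False)   # the third partial write crosses the threshold
```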
20150026403 | SELF-ADJUSTING CACHING SYSTEM - An apparatus having a cache and a controller is disclosed. The controller is configured to (i) gather a plurality of statistics corresponding to a plurality of requests made from one or more hosts to access a memory during an interval, (ii) store data of the requests selectively in the cache in response to a plurality of headers and (iii) adjust one or more parameters in the headers in response to the statistics. The requests and the parameters are recorded in the headers. | 01-22-2015 |
20150032959 | CACHE CONTROL ON HOST MACHINES - An approach is provided for monitoring data from a host machine running at least one virtual machine (VM); analyzing the monitored data from the host machine; conducting inferences from the analysis to determine a preferred size of a cache; and managing the cache size based upon the inferences for adapting the cache size on the host running the at least one VM. | 01-29-2015 |
20150032960 | ELECTRONIC DEVICES HAVING SEMICONDUCTOR MEMORY UNITS AND METHOD OF FABRICATING THE SAME - Electronic devices have a semiconductor memory unit including a magnetization compensation layer in a contact plug. One implementation of the semiconductor memory unit includes a variable resistance element having a stacked structure of a first magnetic layer, a tunnel barrier layer, and a second magnetic layer, and a contact plug arranged in at least one side of the variable resistance element and comprising a magnetization compensation layer. Another implementation includes a variable resistance element having a stacked structure of a first magnetic layer having a variable magnetization, a tunnel barrier layer, and a second magnetic layer having a pinned magnetization; and a contact plug arranged at one side of and separated from the variable resistance element to include a magnetization compensation layer that produces a magnetic field to reduce an influence of a magnetic field of the second magnetic layer on the first magnetic layer. | 01-29-2015 |
20150032961 | System and Methods of Data Migration Between Storage Devices - A method of migrating data that includes determining one or more objects to be migrated from a source device to a destination device; adding the one or more objects to a queue used to migrate the one or more objects to the destination device, the queue having a pre-defined size; suspending the adding the one or more objects to the queue if a total size of the objects in the queue exceeds the pre-defined size of the queue; resuming the adding the one or more objects to the queue when the total size of the objects in the queue no longer exceeds the pre-defined size of the queue; and migrating each of the one or more objects in the queue to the destination device. | 01-29-2015 |
20150039831 | FILE LOAD TIMES WITH DYNAMIC STORAGE USAGE - Provided is a technique for improving file load times with dynamic storage usage. A file made up of data blocks is received. A list of storage devices is retrieved. In one or more iterations, the data blocks of the file are written by: updating the list of storage devices by removing any storage devices with insufficient space to store additional data blocks; generating a performance score for each of the storage devices in the updated list of storage devices; determining a portion of the data blocks to be written to each of the storage devices based on the generated performance score for each of the storage devices; writing, in parallel, the determined portion of the data blocks to each of the storage devices; and recording placement information indicating the storage devices to which each determined portion of the data blocks was written. | 02-05-2015 |
20150039832 | System and Method of Caching Hinted Data - The disclosure is directed to a system and method of cache management for a data storage system. According to various embodiments, the cache management system includes a hinting driver and a priority controller. The hinting driver generates pointers based upon data packets intercepted from data transfer requests being processed by a host controller of the data storage system. The priority controller determines whether the data packets are associated with at least a first (high) priority level or a second (normal or low) priority level based upon the pointers generated by the hinting driver. High priority data packets are stored in cache memory regardless of whether they satisfy a threshold heat quotient (i.e. a selected level of data transfer activity). | 02-05-2015 |
20150046648 | IMPLEMENTING DYNAMIC CACHE ENABLING AND DISABLING BASED UPON WORKLOAD - A method, system and memory controller for implementing dynamic enabling and disabling of cache based upon workload in a computer system. Predefined sets of information are monitored while the cache is enabled to identify a change in workload, and the cache is selectively disabled responsive to a first identified predefined workload. While the cache is disabled, predefined information is monitored to identify a second predefined workload, and the cache is selectively enabled responsive to the identified second predefined workload. | 02-12-2015 |
20150052302 | ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME - The disclosed technology provides an electronic device and a fabrication method thereof, in which an etching margin in formation of a variable resistance element is secured and process difficulty is reduced. An electronic device according to an implementation includes a semiconductor memory, the semiconductor memory including: a variable resistance element including a stack of a first magnetic layer, a tunnel barrier layer and a second magnetic layer; a contact plug coupling a top of the variable resistance element and including a magnetism correcting layer; and a conductive line coupled to the variable resistance element through the contact plug including the magnetism correcting layer. | 02-19-2015 |
20150058563 | MULTI-CORE FUSE DECOMPRESSION MECHANISM - An apparatus is contemplated for storing and decompressing configuration data in a multi-core microprocessor. The apparatus includes a shared fuse array and a plurality of microprocessor cores. The shared fuse array is disposed on a die and comprises a plurality of semiconductor fuses programmed with compressed configuration data. The plurality of microprocessor cores is also disposed on the die, where each of the plurality of microprocessor cores is coupled to the shared fuse array and is configured to access all of the compressed configuration data during power-up/reset, for initialization of elements within each of the plurality of cores. Each of the plurality of cores has a reset controller that is configured to decompress all of the compressed configuration data, and to distribute decompressed configuration data to initialize the elements. | 02-26-2015 |
20150058564 | APPARATUS AND METHOD FOR EXTENDED CACHE CORRECTION - An apparatus includes a semiconductor fuse array, a cache memory, and a plurality of cores. The semiconductor fuse array is disposed on a die, into which is programmed the configuration data. The semiconductor fuse array has a first plurality of semiconductor fuses that is configured to store compressed cache correction data. The cache memory is disposed on the die. The plurality of cores is disposed on the die, where each of the plurality of cores is coupled to the semiconductor fuse array and the cache memory, and is configured to access the semiconductor fuse array upon power-up/reset, to decompress the compressed cache correction data, and to distribute decompressed cache correction data to initialize the cache memory. | 02-26-2015 |
20150058565 | APPARATUS AND METHOD FOR COMPRESSION OF CONFIGURATION DATA - An apparatus includes a device programmer, coupled to a plurality of semiconductor fuses disposed on a die, configured to program the plurality of semiconductor fuses with compressed configuration data for a plurality of cores disposed separately on the die. The device programmer has a virtual fuse array and a compressor. The virtual fuse array is configured to store the configuration data for the plurality of cores. The configuration data includes a plurality of data types. The compressor is coupled to the virtual fuse array and is configured to read the virtual fuse array, and is configured to compress the configuration data by employing a plurality of compression algorithms to generate the compressed configuration data, where the plurality of compression algorithms correspond to the plurality of data types. | 02-26-2015 |
20150058566 | SEMICONDUCTOR MEMORY APPARATUS - A semiconductor memory apparatus includes a column address decoding unit configured to decode a column address and generate a column select signal; a row address decoding unit configured to decode a row address and generate a word line select signal; a driving driver unit configured to provide different voltages to a plurality of resistive memory elements in response to the column select signal; a sink current control unit configured to generate a plurality of sink voltages with different voltage levels in response to the word line select signal; and a plurality of current sink units configured to flow current from the plurality of respective resistive memory elements to a ground terminal in response to the plurality of sink voltages. | 02-26-2015 |
20150074350 | MEMOIZATION BUCKETS FOR CACHED FUNCTION RESULTS - A memoization system and method arranges cached function results into groups, or buckets, to identify related cache values to invalidate upon obsolescence (staleness) of any one of the cached values in the group. A wrapper function in coded invocations to the cached functions identifies a group to which the function result belongs. Values in a cache group are denoted as a bucket, and subsequent functions that render the cached values obsolete are also invoked via a wrapper function indicating the bucket. The invalidate wrapper results in invalidation of all of the obsolete values in the bucket such that subsequent invocations will not attempt to employ the outdated values. | 03-12-2015 |
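The bucket-invalidation scheme in the 20150074350 abstract above — group cached function results so that any staleness event invalidates the whole group — can be illustrated with a small sketch. The class and method names are assumptions, not the application's wrapper API:

```python
class BucketCache:
    """Memoization cache where each entry belongs to a named bucket;
    invalidating the bucket drops every related entry at once, so no
    later call can pick up an outdated value from the group."""
    def __init__(self):
        self._values = {}    # key -> cached result
        self._buckets = {}   # bucket -> set of keys in that bucket

    def memoize(self, bucket, key, fn):
        if key not in self._values:
            self._values[key] = fn()
            self._buckets.setdefault(bucket, set()).add(key)
        return self._values[key]

    def invalidate(self, bucket):
        for key in self._buckets.pop(bucket, ()):
            self._values.pop(key, None)

cache = BucketCache()
cache.memoize("user:7", "profile", lambda: {"name": "Ada"})
cache.memoize("user:7", "posts", lambda: [1, 2, 3])
cache.invalidate("user:7")    # both related results go stale together
```

Grouping by bucket trades a little bookkeeping for the guarantee that related cached values never outlive one another.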
20150074351 | WRITE-BEHIND CACHING IN DISTRIBUTED FILE SYSTEMS - Systems and methods for write-behind caching in distributed file systems. An example method may comprise: receiving, over a network, a direct write request referencing data to be written to a file residing on a persistent data storage device, the file containing at least part of an image of a virtual machine disk; writing, by a processing device, the data to a cache entry of a memory-resident cache, the cache entry corresponding to at least a part of the file; acknowledging the write request as completed; and committing, asynchronously with respect to the acknowledging, the cache entry to the persistent data storage device. | 03-12-2015 |
20150089137 | Managing Mirror Copies without Blocking Application I/O - Mechanisms, in a data processing system comprising a processor and an address translation cache, for caching address translations in the address translation cache are provided. The mechanisms receive an address translation from a server computing device to be cached in the data processing system. The mechanisms generate a cache key based on a current valid number of mirror copies of data maintained by the server computing device. The mechanisms allocate a buffer of the address translation cache, corresponding to the cache key, for storing the address translation and store the address translation in the allocated buffer. Furthermore, the mechanisms perform an input/output operation using the address translation stored in the allocated buffer. | 03-26-2015 |
20150095574 | COMPUTING SYSTEM INCLUDING STORAGE SYSTEM AND METHOD OF WRITING DATA THEREOF - Provided is a method of writing data of a storage system. The method includes causing a host to issue a first writing command; causing the host, when a queue depth of the first writing command is a first value, to store the first writing command in an entry which is assigned in advance and is included in a cache; causing the host to generate a writing completion signal for the first writing command; and causing the host to issue a second writing command. | 04-02-2015 |
20150095575 | CACHE MIGRATION MANAGEMENT METHOD AND HOST SYSTEM APPLYING THE METHOD - Provided are a cache migration management method and a host system configured to perform the cache migration management method. The cache migration management method includes: moving, in response to a request for cache migration with respect to first data stored in a main storage device, the first data and second data related to the first data from the main storage device to a cache storage device; and adding information about the first data moved to the cache storage device and the second data moved to the cache storage device, the moving of the first data and the second data to the cache storage device including storing the first data moved to the cache storage device and the second data moved to the cache storage device at continuous physical addresses of the cache storage device in an order in which the first data and the second data are to be loaded to a host device. | 04-02-2015 |
20150100730 | Freeing Memory Safely with Low Performance Overhead in a Concurrent Environment - Freeing memory safely with low performance overhead in a concurrent environment is described. An example method includes creating a reference count for each sub block in a global memory block, and each global memory block includes a plurality of sub blocks aged based on respective allocation time. A reference count for a first sub block is incremented when a thread operates a collection of data items and accesses the first sub block for a first time. Reference counts for the first sub block and a second sub block are lazily updated. Subsequently, the sub blocks are scanned through in the order of their age until a sub block with a non-zero reference count is encountered. Accordingly, one or more sub blocks whose corresponding reference counts are equal to zero are freed safely and with low performance overhead. | 04-09-2015 |
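The age-ordered scan in the 20150100730 abstract above — free sub-blocks oldest-first, stopping at the first nonzero reference count — reduces to a few lines. This is a simplified sketch under the assumption that sub-blocks arrive already sorted oldest-first as (id, refcount) pairs:

```python
def free_aged_blocks(blocks):
    """Scan sub-blocks oldest-first and free those whose reference
    count is zero, stopping at the first block still referenced."""
    freed = []
    for block_id, refcount in blocks:   # assumed sorted oldest-first
        if refcount != 0:
            break                       # younger blocks may still be reachable
        freed.append(block_id)
    return freed

# A and B are the oldest and unreferenced, so only they are freed;
# the scan stops at C even though D is also unreferenced.
freed = free_aged_blocks([("A", 0), ("B", 0), ("C", 2), ("D", 0)])
```

Stopping at the first nonzero count is what makes the scan safe in a concurrent environment: a thread that entered at an old sub-block may still reach anything younger, so younger zero-count blocks are left for a later pass.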
20150106566 | Computer Processor Employing Dedicated Hardware Mechanism Controlling The Initialization And Invalidation Of Cache Lines - A computer processing system includes execution logic that generates memory requests that are supplied to a hierarchical memory system. The computer processing system includes a hardware map storing a number of entries associated with corresponding cache lines, where each given entry of the hardware map indicates whether a corresponding cache line i) currently stores valid data in the hierarchical memory system, or ii) does not currently store valid data in hierarchical memory system and should be interpreted as being implicitly zero throughout. | 04-16-2015 |
20150113219 | SYSTEM AND METHOD FOR CACHING MULTIMEDIA DATA - Systems and methods are provided for caching media data to thereby enhance media data read and/or write functionality and performance. A multimedia apparatus comprises a cache buffer configured to be coupled to a storage device, wherein the cache buffer stores multimedia data, including video and audio data, read from the storage device. A cache manager is coupled to the cache buffer and is configured to cause the storage device to enter into a reduced power consumption mode when the amount of data stored in the cache buffer reaches a first level. | 04-23-2015 |
20150121006 | SPLIT WRITE OPERATION FOR RESISTIVE MEMORY CACHE - A method of reading from and writing to a resistive memory cache includes receiving a write command and dividing the write command into multiple write sub-commands. The method also includes receiving a read command and executing the read command before executing a next write sub-command. | 04-30-2015 |
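The split-write scheduling in the 20150121006 abstract above (divide a write command into sub-commands and let pending reads execute between them) can be sketched roughly as follows; the queue model and all names are illustrative assumptions rather than the application's method:

```python
from collections import deque

def service(read_queue, subcmds_per_write, writes):
    """Interleave reads with write sub-commands: each write is split
    into sub-commands, and a pending read runs before the next
    sub-command, so a long resistive-memory write never blocks a
    read for its full duration."""
    ops = []
    pending = deque(f"{w}.part{i}"
                    for w in writes
                    for i in range(subcmds_per_write))
    reads = deque(read_queue)
    while pending or reads:
        if reads:                 # reads take priority over the next sub-command
            ops.append(reads.popleft())
        if pending:
            ops.append(pending.popleft())
    return ops

# Two reads arrive while W1 (split into two sub-commands) is in flight.
ops = service(["R1", "R2"], 2, ["W1"])
```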
20150121007 | ADJUSTMENT OF THE NUMBER OF TASK CONTROL BLOCKS ALLOCATED FOR DISCARD SCANS - A controller receives a request to perform a release space operation. A determination is made that a new discard scan has to be performed on a cache, in response to the received request to perform the release space operation. A determination is made as to how many task control blocks are to be allocated to perform the new discard scan, based on how many task control blocks have already been allocated for performing one or more discard scans that are already in progress. | 04-30-2015 |
20150127905 | Cache Modeling Using Random Sampling and a Timestamp Histogram - A system and method for determining an optimal cache size of a computing system is provided. In some embodiments, the method comprises selecting a portion of an address space of a memory structure of the computing system. A workload of data transactions is monitored to identify a transaction of the workload directed to the portion of the address space. An effect of the transaction on a cache of the computing system is determined, and, based on the determined effect of the transaction, an optimal cache size satisfying a performance target is determined. In one such embodiment the determining of the effect of the transaction on a cache of the computing system includes determining whether the effect would include a cache hit for a first cache size and determining whether the effect would include a cache hit for a second cache size different from the first cache size. | 05-07-2015 |
20150134909 | MANAGING READ OPERATIONS, WRITE OPERATIONS AND EXTENT CHANGE OPERATIONS - A method for responding to an extent change operation, the method may include receiving, by a storage system and from a requesting entity, a request to perform an extent content change operation that involves changing a content of a certain extent within a logical space supported by a storage system; generating, in response to the request, extent change operation information that comprises (i) an event counter indicative of a time of requested occurrence of the extent change operation, (ii) a type of extent change operation indicator, and (iii) logical addresses associated with the extent change operation; and sending to the requesting entity an acknowledgement indicative of a completion of the extent change operation before a completion of the extent change operation if an expected content of the certain extent is known before completion of the extent change operation. | 05-14-2015 |
20150134910 | DATA CACHE SYSTEM, RECORDING MEDIUM AND METHOD - A transmission device ( | 05-14-2015 |
20150149720 | CONTROL METHOD, CONTROL DEVICE, AND RECORDING MEDIUM - A non-transitory computer-readable recording medium has stored therein a program that causes a computer to execute a control process. The control process includes: receiving an access request for a recording device that stores data; determining whether or not index information corresponding to the access request, which is received at the receiving, is stored in a memory that stores index information that is obtained by shortening identification information identifying data from the recording device cached in a non-volatile memory; and accessing the non-volatile memory when it is determined that the index information is stored in the memory and accessing the recording device when it is determined that the index information is not in the memory. | 05-28-2015 |
20150293849 | COUNTER-BASED WIDE FETCH MANAGEMENT - Embodiments relate to counter-based wide fetch management. An aspect includes assigning a counter to a first memory region in a main memory that is allocated to a first application that is executed by a processor of a computer. Another aspect includes maintaining, by the counter, a count of a number of times adjacent cache lines in the cache memory that correspond to the first memory region are touched by the processor. Another aspect includes determining an update to a data fetch width indicator corresponding to the first memory region based on the counter. Another aspect includes sending a hardware notification from a counter management module to supervisory software of the computer of the update to the data fetch width indicator. Yet another aspect includes updating, by the supervisory software, the data fetch width indicator of the first memory region in the main memory based on the hardware notification. | 10-15-2015 |
20150293851 | MEMORY-AREA PROPERTY STORAGE INCLUDING DATA FETCH WIDTH INDICATOR - Embodiments relate to memory-area property storage including a data fetch width indicator. An aspect includes allocating a memory page in a main memory to an application that is executed by a processor of a computer. Another aspect includes determining the data fetch width indicator for the allocated memory page. Another aspect includes setting the data fetch width indicator in the at least one memory-area property storage in the allocated memory page. Another aspect includes, based on a cache miss in the cache memory corresponding to an address that is located in the allocated memory page: determining the data fetch width indicator in the memory-area property storage associated with the location of the address; and fetching an amount of data from the memory page based on the data fetch width indicator. | 10-15-2015 |
20150293852 | COUNTER-BASED WIDE FETCH MANAGEMENT - Embodiments relate to counter-based wide fetch management. An aspect includes assigning a counter to a first memory region in a main memory that is allocated to a first application that is executed by a processor of a computer. Another aspect includes maintaining, by the counter, a count of a number of times adjacent cache lines in the cache memory that correspond to the first memory region are touched by the processor. Another aspect includes determining an update to a data fetch width indicator corresponding to the first memory region based on the counter. Another aspect includes sending a hardware notification from a counter management module to supervisory software of the computer of the update to the data fetch width indicator. Yet another aspect includes updating, by the supervisory software, the data fetch width indicator of the first memory region in the main memory based on the hardware notification. | 10-15-2015 |
20150293855 | PAGE TABLE INCLUDING DATA FETCH WIDTH INDICATOR - Embodiments relate to a page table including a data fetch width indicator. An aspect includes allocating a memory page in a main memory to an application. Another aspect includes creating a page table entry corresponding to the memory page in the page table. Another aspect includes determining, by a data fetch width indicator determination logic, the data fetch width indicator for the memory page. Another aspect includes sending a notification of the data fetch width indicator from the data fetch width indicator determination logic to supervisory software. Another aspect includes setting the data fetch width indicator in the page table entry by the supervisory software based on the notification. Another aspect includes, based on a cache miss in the cache memory corresponding to an address that is located in the memory page, fetching an amount of data from the memory page based on the data fetch width indicator. | 10-15-2015 |
20150294702 | ELECTRONIC DEVICE - Provided are, among others, memory circuits or devices and their applications in electronic devices or systems and various implementations of an electronic device which includes two variable resistance elements in each storage cell, thereby increasing margin and speed of a read operation. One disclosed electronic device includes a semiconductor memory unit which, in one implementation, in addition to two variable resistance elements, further includes a bit line and a bit line bar formed at a metal level; a first word line formed at a transistor level lower than the metal level, and extended in a direction perpendicular to the bit line or the bit line bar; a first selecting element formed at the transistor level and coupled to the bit line and the first word line; a second selecting element formed at the transistor level and coupled to the bit line bar and the first word line. | 10-15-2015 |
20150309930 | Dynamic Power Reduction and Performance Improvement in Caches Using Fast Access - With the increasing demand for improved processor performance, memory systems have been growing increasingly larger to keep up with this performance demand. Caches, which dictate the performance of memory systems, are often the focus of improved performance in memory systems, and the most common techniques used to increase cache performance are increased size and associativity. Unfortunately, these methods yield increased static and dynamic power consumption. In this invention, a technique is shown that reduces the power consumption in associative caches with some improvement in cache performance. The architecture shown achieves these power savings by reducing the number of ways queried on each cache access, using a simple hash function and no additional storage, while skipping some pipe stages for improved performance. Up to 90% reduction in power consumption with a 4.6% performance improvement was observed. | 10-29-2015 |
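The way-reduction idea above can be illustrated with a toy hash. The particular hash, way count, and probe width are assumptions for the sketch, not the patented function:

```python
def ways_to_query(addr, num_ways=8, ways_per_probe=2):
    """Pick a subset of cache ways from a simple hash of the address,
    using no additional storage (illustrative hash, 64-byte lines)."""
    h = (addr >> 6) & (num_ways // ways_per_probe - 1)  # drop line offset bits
    start = h * ways_per_probe
    return list(range(start, start + ways_per_probe))
```

Probing 2 of 8 ways per access cuts the comparators and data arrays activated per lookup roughly fourfold in this sketch, which is where the dynamic power saving comes from.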
20150317248 | SIZING A WRITE CACHE BUFFER BASED ON EMERGENCY DATA SAVE PARAMETERS - Embodiments relate to saving data upon loss of power. An aspect includes sizing a write cache buffer based on parameters related to carrying out this emergency data save procedure. A computer implemented method for allocating a write cache on a storage controller includes retrieving, at run-time by a processor, one or more operating parameters of a component used in a power-loss save of the write cache. The component is selected from the group consisting of an energy storage element, a non-volatile memory, and a transfer logic. A size for the write cache on the storage controller is determined, based on the one or more operating parameters. A write cache, of the determined size, is allocated from a volatile memory coupled to the storage controller. | 11-05-2015 |
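The sizing rule in 20150317248 above amounts to simple arithmetic over the save-path parameters. The energy model and parameter names below are illustrative assumptions:

```python
def write_cache_size(stored_energy_j, transfer_rate_bps, save_power_w,
                     nvm_capacity_bytes):
    """Largest write cache that can be fully saved on power loss.

    Hypothetical model: the energy storage element must power the transfer
    logic for the whole save, and the non-volatile memory must be able to
    hold the cache contents.
    """
    hold_time_s = stored_energy_j / save_power_w          # how long power lasts
    savable_bytes = int(transfer_rate_bps * hold_time_s)  # what can be moved
    return min(savable_bytes, nvm_capacity_bytes)
```

For example, 10 J of stored energy at a 5 W save power gives 2 s of hold time; at 100 MB/s that bounds the cache at 200 MB, assuming the NVM is at least that large.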
20150317252 | DYNAMIC CACHABLE MEMORY INTERFACE FREQUENCY SCALING - A method and apparatus for controlling a frequency of a cacheable memory interface (CMI) are disclosed. The method may include classifying request types into one or more request groups, wherein each of the request types is a type of CMI request. A number of clock cycles that is sufficient to process a request in each request group may be assigned, and requests that are made to the CMI may be monitored with one or more performance counters. A number of requests that occur during a length of time in each request group may be determined, and a frequency of the CMI may be periodically adjusted based upon the number of requests occurring per second in each request group and the assigned number of clock cycles per request for each request group. | 11-05-2015 |
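The frequency adjustment described above reduces to a weighted sum over the request groups. The group names, cycle costs, and headroom factor are illustrative assumptions:

```python
def target_frequency_hz(counts_per_sec, cycles_per_request, headroom=1.25):
    """Frequency sufficient to service the observed request mix.

    counts_per_sec maps each request group to its observed rate from the
    performance counters; cycles_per_request maps each group to its
    assigned per-request cycle cost.
    """
    demand = sum(counts_per_sec[g] * cycles_per_request[g]
                 for g in counts_per_sec)
    return demand * headroom  # margin so the interface is not saturated
```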
20150318042 | CAM CELL FOR OVERWRITING COMPARISON DATA DURING MASK OPERATION - A comparison function-equipped memory element includes: a memory circuit that stores comparison object data; a comparison circuit that compares the comparison object data held in the memory circuit with comparison data and outputs the comparison result; and a write circuit that writes the comparison object data into the memory circuit under control of a write control signal during write operation, and overwrites the comparison data into the memory circuit when a mask control signal indicates mask operation during comparison operation. | 11-05-2015 |
20150324287 | A METHOD AND APPARATUS FOR USING A CPU CACHE MEMORY FOR NON-CPU RELATED TASKS - There is provided a processor for use in a computing system, said processor including at least one Central Processing Unit (CPU), a cache memory coupled to the at least one CPU, and a control unit coupled to the cache memory and arranged to obscure the existing data in the CPU cache memory, and assign control of the CPU cache memory to at least one other entity within the computing system. There is also provided a method of using a CPU cache memory for non-CPU related tasks in a computing system. | 11-12-2015 |
20150324294 | STORAGE SYSTEM AND CACHE CONTROL METHOD - A cache memory comprises a cache controller and a nonvolatile semiconductor memory as a storage medium. The nonvolatile semiconductor memory comprises multiple blocks, which are data erase units, and each block comprises multiple pages, which are data write and read units. The cache controller receives data and attribute information of the data, and, based on the received attribute information and attribute information of the data stored in the multiple blocks, selects a storage-destination block for storing the received data, and writes the received data to a page inside the selected storage-destination block. | 11-12-2015 |
20150331807 | THIN PROVISIONING ARCHITECTURE FOR HIGH SEEK-TIME DEVICES - A compute server accomplishes physical address to virtual address translation to optimize physical storage capacity via thin provisioning techniques. The thin provisioning techniques can minimize disk seeks during command functions by utilizing a translation table and free list stored to both one or more physical storage devices as well as to a cache. The cached translation table and free list can be updated directly in response to disk write procedures. A read-only copy of the cached translation table and free list can be created and stored to physical storage device for use in building the cached translation table and free list upon a boot of the compute server. The copy may also be used to repair the cached translation table in the event of a power failure or other event affecting the cache. | 11-19-2015 |
20150339062 | ARITHMETIC PROCESSING DEVICE, INFORMATION PROCESSING DEVICE, AND CONTROL METHOD OF ARITHMETIC PROCESSING DEVICE - An arithmetic processing device connects to a main memory. The arithmetic processing device includes a cache memory which stores data, an arithmetic unit which performs arithmetic operations on data stored in the cache memory, a first control device which controls the cache memory and outputs a first request to read data stored in the main memory, and a second control device which is connected to the main memory, transmits a plurality of second requests into which the first request output from the first control device is divided, receives data corresponding to the plurality of second requests from the main memory, and sends each of the data to the first control device. | 11-26-2015 |
20150339221 | TECHNIQUES FOR EFFICIENT MASS STORAGE LAYOUT OPTIMIZATION - A data storage system can automatically improve the layout of data blocks on a mass storage subsystem by collecting optimization information during both read and write activities, then processing the optimization information to limit the impact of optimization activities on the system's response to client requests. Processing read-path optimization information and write-path optimization information through shared rate-limiting logic simplifies system administration and promotes phased implementation, which can reduce the difficulty of developing a self-optimizing storage server. | 11-26-2015 |
20150347030 | Using History of Unaligned Writes to Cache Data and Avoid Read-Modify-Writes in a Non-Volatile Storage Device - Systems, methods and/or devices are used to enable using history of unaligned writes to cache data and avoid read-modify-writes in a non-volatile storage device. In one aspect, the method includes (1) receiving a plurality of input/output (I/O) requests including read requests and write requests to be performed in a plurality of regions in a logical address space of a host, and (2) performing one or more operations for each region of the plurality of regions in the logical address space of the host, including (a) determining whether the region has a history of unaligned write requests during a predetermined time period, and (b) if so: (i) determining one or more sub-regions within the region that are accessed more than a predetermined threshold number of times during the predetermined time period, and (ii) caching data from the determined one or more sub-regions. | 12-03-2015 |
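The per-region tracking in 20150347030 above can be sketched as follows. Region and sub-region sizes, the 4 KiB alignment assumption, and the hot threshold are all illustrative:

```python
from collections import defaultdict

class UnalignedWriteTracker:
    """Track unaligned writes per logical region and flag hot sub-regions
    whose data is worth caching to avoid read-modify-writes (sketch)."""
    def __init__(self, region_size=1 << 20, sub_size=1 << 12, hot_threshold=3):
        self.region_size, self.sub_size = region_size, sub_size
        self.hot_threshold = hot_threshold
        self.sub_hits = defaultdict(int)   # (region, sub-region) -> accesses
        self.unaligned = defaultdict(int)  # region -> unaligned write count

    def record_write(self, lba, length, sector=512):
        offset = lba * sector
        region = offset // self.region_size
        sub = (offset % self.region_size) // self.sub_size
        self.sub_hits[(region, sub)] += 1
        if offset % 4096 or (length * sector) % 4096:  # not 4 KiB aligned
            self.unaligned[region] += 1

    def sub_regions_to_cache(self, region):
        """Hot sub-regions of a region with a history of unaligned writes."""
        if self.unaligned[region] == 0:
            return []
        return [s for (r, s), n in self.sub_hits.items()
                if r == region and n > self.hot_threshold]
```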
20150347311 | STORAGE HIERARCHICAL MANAGEMENT SYSTEM - When an application, or an administrator of the application, copies frequently referenced data from a storage to an upper tier within a server or the storage, the performance of the storage deteriorates if I/O concentrates on data other than the data copied to the server. Therefore, according to the present invention, a management computer performs tier management of data stored in any of multiple types of storage devices disposed in the storage system according to its access status, and, in conjunction with the application, also sets data stored in the multiple types of storage devices other than the data accessed by the processing of the application as a target of tier management. | 12-03-2015 |
20150356011 | ELECTRONIC DEVICE AND DATA WRITING METHOD - An electronic device includes a first storage unit, a second storage unit and a control unit. The first storage unit stores the cache of the data. The second storage unit stores the data. The control unit calculates a first ratio of the cache corresponding to the data according to the capacity of the first storage unit. The control unit sends a distribution signal to the processing unit when the control unit reads the data from the second storage unit. The processing unit obtains a first distribution result corresponding to the cache according to the first ratio, and stores the cache to the first storage unit according to the first distribution result. | 12-10-2015 |
20150363134 | STORAGE APPARATUS AND DATA MANAGEMENT - Access performance of a storage apparatus to which a deduplication technique is applied is enhanced. | 12-17-2015 |
20150370496 | Hardware-Enforced Prevention of Buffer Overflow - An apparatus having processing circuitry configured to execute applications involving access to memory may include a CPU and a cache controller. The CPU may be configured to access cache memory for execution of an application. The cache controller may be configured to provide an interface between the CPU and the cache memory. The cache controller may include a bitmask to enable the cache controller to employ a two-level data structure to identify memory exploits using hardware. The two-level data structure may include a page level protection mechanism, and a sub-page level protection mechanism. | 12-24-2015 |
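The two-level structure in 20150370496 above can be sketched in software. The page and sub-page sizes and the set/bitmask representation are illustrative assumptions about a mechanism the application implements in hardware:

```python
PAGE = 4096  # bytes per page (illustrative)
SUB = 64     # bytes per sub-page (illustrative)

class BoundsBitmask:
    """Two-level check: a page-level flag first, then a per-sub-page
    bitmask marking protected regions around buffers (sketch)."""
    def __init__(self):
        self.protected_pages = set()
        self.subpage_mask = {}  # page -> bitmask of protected sub-pages

    def protect(self, addr):
        page, sub = addr // PAGE, (addr % PAGE) // SUB
        self.protected_pages.add(page)
        self.subpage_mask[page] = self.subpage_mask.get(page, 0) | (1 << sub)

    def access_ok(self, addr):
        page = addr // PAGE
        if page not in self.protected_pages:  # fast path: whole page clean
            return True
        sub = (addr % PAGE) // SUB
        return not (self.subpage_mask[page] >> sub) & 1
```

The page-level check lets the common case (no protected bytes on the page) skip the bitmask entirely, which is what makes a hardware implementation cheap.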
20150378892 | COMPUTER SYSTEM AND MEMORY ALLOCATION ADJUSTMENT METHOD FOR COMPUTER SYSTEM - In a computer system in which a virtualization control unit controls a plurality of virtual machines, if memory is collected regardless of the memory usage status of the virtual machines, cache misses increase and the I/O performance of the overall system deteriorates. To solve this problem, the usage status of a cache region within the memory utilized by each OS of the plurality of virtual machines is monitored, and based on the monitoring result, the virtualization control unit decides which of the memory regions already allocated to each OS to collect, and collects those regions from the OS to which they are currently allocated. | 12-31-2015 |
20150378893 | MANAGING CACHE POOLS - Apparatuses, systems, and methods are disclosed for managing cache pools. A storage request module monitors storage requests received by a cache. The storage requests include read requests and write requests. A read pool module adjusts a size of a read pool of the cache to increase a read hit rate of the storage requests. A dirty write pool module adjusts a size of a dirty write pool of the cache to increase a dirty write hit rate of the storage requests. | 12-31-2015 |
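The pool adjustment in 20150378893 above can be sketched as a simple feedback rule. The rebalancing policy, step size, and starting split are illustrative assumptions:

```python
class CachePools:
    """Shift capacity between a read pool and a dirty write pool toward
    whichever pool currently has the lower hit rate (sketch)."""
    def __init__(self, total_mb=1024):
        self.read_mb = self.dirty_mb = total_mb // 2

    def rebalance(self, read_hits, read_misses, dirty_hits, dirty_misses,
                  step_mb=64):
        read_rate = read_hits / max(1, read_hits + read_misses)
        dirty_rate = dirty_hits / max(1, dirty_hits + dirty_misses)
        if read_rate < dirty_rate and self.dirty_mb >= step_mb:
            self.read_mb += step_mb    # grow read pool to raise read hit rate
            self.dirty_mb -= step_mb
        elif dirty_rate < read_rate and self.read_mb >= step_mb:
            self.dirty_mb += step_mb   # grow dirty pool to raise write hit rate
            self.read_mb -= step_mb
```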
20150378920 | GRAPHICS DATA PRE-FETCHER FOR LAST LEVEL CACHES - In one embodiment, an improved graphics data cache prefetcher includes a cache prefetch unit and a prefetch determination unit (PDU). The PDU determines if there is space available to retrieve some or all of the resources for an upcoming graphics operation while a current graphics operation is processed and whether the retrieval can be performed without impacting the performance of the current operation. If there is space available to retrieve some or all of the upcoming operation's resources into one or more GPU caches, the prefetch determination unit programs the cache prefetch unit to retrieve the data into the one or more caches before performing the upcoming operation. | 12-31-2015 |
20160004636 | ELECTRONIC DEVICE WITH CACHE MEMORY AND METHOD OF OPERATING THE SAME - An electronic device with a cache memory and a method of operating the electronic device are provided. The electronic device includes a cache memory including a plurality of cache lines each of which includes a first area with at least one storage space and a second area with at least one storage space, where the at least one storage space of the first area has a first size and the at least one storage space of the second area has a second size different from the first size, and a cache controller for storing the data requested for storage in one of the storage spaces of the first or second area, according to a compression factor associated with the data requested for storage when a request is made to store data in the cache memory. | 01-07-2016 |
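The area-selection rule in 20160004636 above can be sketched directly. The two slot sizes and the decision thresholds are illustrative assumptions:

```python
def choose_storage_area(data_len, compression_ratio,
                        first_size=64, second_size=32):
    """Pick the first (larger) or second (smaller) area of a cache line
    for storing data, based on its compressed size (sizes in bytes are
    illustrative)."""
    compressed = data_len / compression_ratio
    if compressed <= second_size:
        return "second"        # the small slot is enough
    if compressed <= first_size:
        return "first"
    return "uncacheable"       # does not fit either area of the line
```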
20160013405 | ELECTRONIC DEVICE INCLUDING A SEMICONDUCTOR MEMORY AND METHOD FOR FABRICATING THE SAME | 01-14-2016 |
20160026537 | STORAGE SYSTEM - A storage system | 01-28-2016 |
20160026577 | TECHNIQUE TO IMPROVE PERFORMANCE OF MEMORY COPIES AND STORES - A system and method for efficiently relocating and initializing a block of memory of the computer system. For data initialization and data relocation, multiple registers in a processor are used for intermediate storage of data to be written into the memory. Regardless of whether the amount of data to initialize or relocate is aligned with the register data size, the processor writes the data into the destination buffer with write operations that only utilize the register data size. The write operations utilize the register data size when each of the start and the end of the destination buffer is aligned with the register width, when the start of the destination buffer is unaligned with the register width, when a source buffer and the destination buffer are unaligned with one another for a copy operation, and when the source buffer and the destination buffer overlap. | 01-28-2016 |
20160034391 | MANAGING A COLLECTION OF DATA - A measurement sampling facility takes snapshots of the central processing unit (CPU) on which it is executing at specified sampling intervals to collect data relating to tasks executing on the CPU. The collected data is stored in a buffer, and at selected times, an interrupt is provided to remove data from the buffer to enable reuse thereof. The interrupt is not taken after each sample, but in sufficient time to remove the data and minimize data loss. | 02-04-2016 |
20160055082 | MEMORY ALLOCATING METHOD AND ELECTRONIC DEVICE SUPPORTING THE SAME - An electronic device, including an application configured to request a page allocation of a process, a cache management module configured to allocate a page in a page group including uninterrupted (or consecutive, or contiguous) pages to the process, and a page buffer configured to manage the at least one page group, is disclosed. | 02-25-2016 |
20160055093 | Supplemental Write Cache Command For Bandwidth Compression - Aspects include computing devices, systems, and methods for implementing cache memory access requests for data smaller than a cache line and eliminating overfetching from a main memory by writing supplemental data to the unfilled portions of the cache line. A cache memory controller may receive a cache memory access request with a supplemental write command for data smaller than a cache line. The cache memory controller may write supplemental data to the portions of the cache line not filled by the data in response to a write cache memory access request or a cache miss during a read cache memory access request. In the event of a cache miss, the cache memory controller may retrieve the data from the main memory, excluding any overfetch data, and write the data and the supplemental data to the cache line. Eliminating overfetching reduces the bandwidth and power required to retrieve data from main memory. | 02-25-2016 |
20160055094 | Power Aware Padding - Aspects include computing devices, systems, and methods for implementing cache memory access requests for data smaller than a cache line and eliminating overfetching from a main memory by combining the data with padding data whose size is the difference between the size of a cache line and the size of the data. A processor may determine whether the data, uncompressed or compressed, is smaller than a cache line using a size of the data or a compression ratio of the data. The processor may generate the padding data using constant data values or a pattern of data values. The processor may send a write cache memory access request for the combined data to a cache memory controller, which may write the combined data to a cache memory. The cache memory controller may send a write memory access request to a memory controller, which may write the combined data to a memory. | 02-25-2016 |
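The padding idea shared by the two entries above is easy to show concretely. The 64-byte line size and zero fill are illustrative assumptions:

```python
LINE_SIZE = 64  # bytes per cache line (illustrative)

def pad_to_line(data: bytes, fill: bytes = b"\x00") -> bytes:
    """Combine data smaller than a cache line with constant padding so a
    full line can be written without fetching the remainder of the line
    from main memory (the overfetch this scheme avoids)."""
    if len(data) >= LINE_SIZE:
        return data[:LINE_SIZE]
    return data + fill * (LINE_SIZE - len(data))
```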
20160055095 | STORING DATA FROM CACHE LINES TO MAIN MEMORY BASED ON MEMORY ADDRESSES - A method for performing memory operations is provided. One or more processors can determine that at least a portion of data stored in a cache memory of the one or more processors is to be stored in the main memory. One or more ranges of addresses of the main memory is determined that correspond to a plurality of cache lines in the cache memory. A set of cache lines corresponding to addresses in the one or more ranges of addresses is identified, so that data stored in the identified set can be stored in the main memory. For each cache line of the identified set having data that has been modified since that cache line was first loaded to the cache memory or since a previous store operation, data stored in that cache line is caused to be stored in the main memory. | 02-25-2016 |
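The line-selection step in 20160055095 above can be sketched as a filter over the cache's dirty state. The dictionary representation of cache lines is an illustrative assumption:

```python
def lines_to_store(cache_lines, ranges):
    """Select cache lines whose main-memory address falls in one of the
    given address ranges and whose data has been modified since load or
    since the previous store (dirty). cache_lines maps line address to
    its dirty flag; ranges is a list of [lo, hi) address pairs."""
    return sorted(addr for addr, dirty in cache_lines.items()
                  if dirty and any(lo <= addr < hi for lo, hi in ranges))
```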
20160062897 | STORAGE CACHING - The present disclosure provides a method for processing a storage operation in a system with an added level of storage caching. The method includes receiving, in a storage cache, a read request from a host processor that identifies requested data and determining whether the requested data is in a cache memory of the storage cache. If the requested data is in the cache memory of the storage cache, the requested data may be obtained from the storage cache and sent to the host processor. If the requested data is not in the cache memory of the storage cache, the read request may be sent to a host bus adapter operatively coupled to a storage system. The storage cache is transparent to the host processor and the host bus adapter. | 03-03-2016 |
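The read path of the interposed storage cache above can be sketched as follows. Using dictionaries for the cache and the backing storage is an illustrative simplification of the storage cache and host bus adapter:

```python
def serve_read(block_id, cache, storage):
    """Read path for a transparent storage cache: a hit is served from
    the cache memory; a miss is forwarded toward the storage system and
    the result is filled into the cache for future reads (sketch)."""
    if block_id in cache:
        return cache[block_id], "cache hit"
    data = storage[block_id]   # forwarded via the host bus adapter
    cache[block_id] = data     # fill for future reads
    return data, "cache miss"
```

Because the same call works whether or not the block is cached, neither the host processor nor the host bus adapter needs to know the cache exists, which is the transparency the abstract describes.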
20160070504 | APPARATUSES AND METHODS FOR A MEMORY DIE ARCHITECTURE INCLUDING AN INTERFACE MEMORY - Apparatuses and methods for reducing capacitance on a data bus are disclosed herein. In accordance with one or more described embodiments, an apparatus may comprise a plurality of memories coupled to an internal data bus and a command and address bus, each of the memories configured to receive a command on the command and address bus. One of the plurality of memories may be coupled to an external data bus. The one of the plurality of memories may be configured to provide program data to the internal data bus when the command comprises a program command and another of the plurality of memories is a target memory of the program command, and may be configured to provide read data to the external data bus when the command comprises a read command and the another of the plurality of memories is a target memory of the read command. | 03-10-2016 |
20160070508 | MEMORY SYSTEM DATA MANAGEMENT - The present disclosure includes apparatuses and methods for memory system data management. A number of embodiments include writing data from a host to a buffer in the memory system, receiving, at the buffer, a notification from a memory device in the memory system that the memory device is ready to receive data, sending at least a portion of the data from the buffer to the memory device, and writing the portion of the data to the memory device. | 03-10-2016 |
20160070648 | DATA STORAGE SYSTEM AND OPERATION METHOD THEREOF - A data storage system and an operation method thereof are provided. The data storage system includes a data storage module, a cache module and a data accessing module. The data accessing module is coupled to the data storage module and the cache module. When the data accessing module receives a data-write command from a host, the data accessing module is configured to reply with a writing-completed command to the host after storing the writing-data of the data-write command into the cache module, arrange the data-write command into a data-writing schedule table, and write the writing-data read from the cache module into the data storage module according to the data-writing schedule table. | 03-10-2016 |
20160077968 | SYSTEM AND METHOD FOR CONFIGURING AND CONTROLLING NON-VOLATILE CACHE - Systems and methods for configuring, controlling and operating a non-volatile cache are disclosed. A host system may poll a memory system as to the memory system's configuration of its non-volatile cache. Further, the host system may configure the non-volatile cache on the memory system, such as the size of the non-volatile cache and the type of programming for the non-volatile cache (e.g., whether the non-volatile cache is programmed according to SLC or the type of TRIM used to program cells in the non-volatile cache). Moreover, responsive to a command from the host to size the non-volatile cache, the memory system may over or under provision the cache. Further, the host may control operation of the non-volatile cache, such as by sending selective flush commands. | 03-17-2016 |
20160092123 | MEMORY WRITE MANAGEMENT IN A COMPUTER SYSTEM - In accordance with the present description, an apparatus for use with a source issuing write operations to a target, wherein the apparatus includes an I/O port, and logic of the target configured to detect a flag issued by the source in association with the issuance of a first plurality of write operations. In response to detection of the flag, the logic of the target ensures that the first plurality of write operations is completed in a memory prior to completion of any write operations of a subsequently issued second plurality of write operations. Other aspects are described herein. | 03-31-2016 |
20160092134 | SCALABLE, MULTI-DIMENSIONAL SEARCH FOR OPTIMAL CONFIGURATION - According to an embodiment, storage configurations are identified for storing items, such as database tables, partitions, or any other types of objects or data structures, within a desired storage area, such as an in-memory data store or any other limited storage resource. Each of the storage configurations is assigned to a particular item of the items. Each of the storage configurations associates the assigned particular item with one or more storage configuration options. Storage recommendations are generated for at least a set of the storage configurations. A different storage recommendation exists for each storage configuration in the set of the storage configurations. The storage recommendation associates the storage configuration with a range of possible storage sizes for a particular storage area of a system. Based on the storage recommendations, recommended system configurations are generated for different possible storage sizes of the particular storage area. | 03-31-2016 |
20160092365 | SYSTEM AND METHOD FOR COMPACTING PSEUDO LINEAR BYTE ARRAY - In accordance with an embodiment, described herein is a system and method for compacting a pseudo linear byte array, for use with supporting access to a database. A database driver (e.g., a Java Database Connectivity (JDBC) driver) provides access by software application clients to a database. When a result set (e.g., ResultSet) is returned for storage in a dynamic byte array (DBA), in response to a database query (e.g., a SELECT), the database driver determines if the DBA is underfilled and, if so, calculates the data size of the DBA, creates a static byte array (SBA) in a cache at the client, compacts the returned data into the SBA, and stores the data size as part of the metadata associated with the cache. In accordance with an embodiment, the DBA and the SBA can use a same interface for access by client applications. | 03-31-2016 |
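The compaction step in 20160092365 above can be sketched as follows. Representing the dynamic byte array as a list of fixed-size chunks, passing the useful data size explicitly, and the fill-ratio threshold are all illustrative assumptions:

```python
def compact(dba_chunks, data_size, fill_threshold=0.5):
    """Compact an underfilled dynamic byte array (a list of fixed-size
    chunks holding data_size useful bytes) into an exact-size static
    byte array, returning the array and the size metadata to store with
    the cache (sketch)."""
    capacity = sum(len(c) for c in dba_chunks)
    if capacity == 0 or data_size / capacity >= fill_threshold:
        return None, capacity          # DBA is filled enough; keep it as-is
    sba = b"".join(dba_chunks)[:data_size]  # exact-size static byte array
    return sba, data_size
```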
20160098195 | METHOD AND APPARATUS FOR DETERMINING A TIMING ADJUSTMENT OF OUTPUT TO A HOST MEMORY CONTROLLER - Provided are a method and apparatus for determining a timing adjustment of output to a host memory controller in a first memory module coupled to a host memory controller and a second memory module over a bus. A determination is made of a timing adjustment based on at least one component in at least one of the first memory module and the second memory module. A timing of output to the host memory controller is adjusted based on the determined timing adjustment to match a timing of output at the second memory module. | 04-07-2016 |
20160098205 | FILE LOAD TIMES WITH DYNAMIC STORAGE USAGE - Provided is a technique for improving file load times with dynamic storage usage. A file made up of data blocks is received. A list of storage devices is retrieved. In one or more iterations, the data blocks of the file are written by: updating the list of storage devices by removing any storage devices with insufficient space to store additional data blocks; generating a performance score for each of the storage devices in the updated list of storage devices; determining a portion of the data blocks to be written to each of the storage devices based on the generated performance score for each of the storage devices; writing, in parallel, the determined portion of the data blocks to each of the storage devices; and recording placement information indicating the storage devices to which each determined portion of the data blocks was written. | 04-07-2016 |
20160103612 | Approximation of Execution Events Using Memory Hierarchy Monitoring - Aspects include computing devices, systems, and methods for monitoring communications between components and a memory hierarchy of a computing device. The computing device may determine at least one identifying factor for identifying execution of processor-executable code. A communication between the components and the memory hierarchy of the computing device may be monitored for at least one communication factor of the same type as the at least one identifying factor. A determination whether a value of the at least one identifying factor matches a value of the at least one communication factor may be made. The computing device may determine that the processor-executable code is executed in response to determining that the value of the at least one identifying factor matches the value of the at least one communication factor. | 04-14-2016 |
20160103616 | CLUSTER FAMILIES FOR CLUSTER SELECTION AND COOPERATIVE REPLICATION - Cluster families for cluster selection and cooperative replication are created. The clusters are grouped into family members of a cluster family based on their relationships and roles. Members of the cluster family determine which family member is in the best position to obtain replicated information and become cumulatively consistent within their cluster family. Once the cluster family becomes cumulatively consistent, the data is shared within the cluster family so that all copies within the cluster family are consistent. | 04-14-2016 |
20160103767 | METHODS AND SYSTEMS FOR DYNAMIC HASHING IN CACHING SUB-SYSTEMS - Methods and systems for dynamic hashing in cache sub-systems are provided. The method includes analyzing a plurality of input/output (I/O) requests for determining a pattern indicating if the I/O requests are random or sequential; and using the pattern for dynamically changing a first input to a second input for computing a hash index value by a hashing function that is used to index into a hashing data structure to look up a cache block to cache an I/O request to read or write data, where for random I/O requests, a segment size is the first input to a hashing function to compute a first hash index value and for sequential I/O requests, a stripe size is used as the second input for computing a second hash index value. | 04-14-2016 |
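The input-switching idea in 20160103767 above can be sketched as follows. The pattern detector, the segment and stripe sizes, and the bucket count are illustrative assumptions:

```python
def is_sequential(lbas, window=4):
    """Classify a recent run of I/O block addresses (illustrative test)."""
    recent = lbas[-window:]
    return all(b - a == 1 for a, b in zip(recent, recent[1:]))

def hash_index(lba, lbas, segment_size=8, stripe_size=64, buckets=1024):
    """Pick the hashing-function input by access pattern: segment size
    for random I/O, stripe size for sequential I/O, then index into the
    hashing data structure (sketch)."""
    unit = stripe_size if is_sequential(lbas) else segment_size
    return (lba // unit) % buckets
```

Using the stripe size for sequential streams makes a whole stripe's worth of requests land on one cache block, while the smaller segment size keeps random requests spread across buckets.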
20160116969 | Memory Power Savings in Idle Display Case - In an embodiment, a system includes a memory controller that includes a memory cache and a display controller configured to control a display. The system may be configured to detect that the images being displayed are essentially static, and may be configured to cause the display controller to request allocation in the memory cache for source frame buffer data. In some embodiments, the system may also alter power management configuration in the memory cache to prevent the memory cache from shutting down or reducing its effective size during the idle screen case, so that the frame buffer data may remain cached. During times that the display is dynamically changing, the frame buffer data may not be cached in the memory cache and the power management configuration may permit the shutting down/size reduction in the memory cache. | 04-28-2016 |
20160117252 | Processing of Un-Map Commands to Enhance Performance and Endurance of a Storage Device - A storage device and method enable processing of un-map commands. In one aspect, the method includes (1) determining whether a size of an un-map command satisfies (e.g., is greater than or equal to) a size threshold, (2) if the size of the un-map command satisfies the size threshold, performing one or more operations of a first un-map process, wherein the first un-map process forgoes (does not include) saving a mapping table to non-volatile memory of a storage device, and (3) if the size of the un-map command does not satisfy the size threshold, performing one or more operations of a second un-map process, wherein the second un-map process forgoes (does not include) saving the mapping table to non-volatile memory of the storage device and forgoes (does not include) flushing a write cache to non-volatile memory of the storage device. | 04-28-2016 |
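The size-threshold dispatch in entry 20160117252 might look like the following minimal sketch; the threshold value and the returned flag names are assumptions for illustration only:

```python
# Illustrative sketch of the un-map dispatch in entry 20160117252: both paths
# skip saving the mapping table; only the small-un-map path also skips
# flushing the write cache. Threshold value is an assumption.

UNMAP_SIZE_THRESHOLD = 1 << 20   # assumed: 1 MiB

def handle_unmap(unmap_size_bytes):
    """Select the un-map process based on the command size."""
    if unmap_size_bytes >= UNMAP_SIZE_THRESHOLD:
        # First un-map process: forgoes saving the mapping table.
        return {"save_mapping_table": False, "flush_write_cache": True}
    # Second un-map process: also forgoes flushing the write cache.
    return {"save_mapping_table": False, "flush_write_cache": False}
```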
20160124853 | DIAGNOSTIC APPARATUS, CONTROL UNIT, INTEGRATED CIRCUIT, VEHICLE AND METHOD OF RECORDING DIAGNOSTIC DATA - A diagnostic apparatus comprises a diagnostic data buffer constituting a volatile memory, and a non-volatile memory capable of receiving data from the buffer. A data buffer controller is also provided and is operably coupled to the buffer and has an event alert input and a data channel monitoring input for receiving diagnostic data. The buffer receives, when the state of a buffer status memory indicates that the buffer is in an unprotected state, at least part of the diagnostic data received by the controller via the data channel monitoring input, and the controller sets the state of the buffer status memory to indicate a protected state in response to receipt of an event alert via the event alert input. A controller monitors the buffer status memory and copies a portion of the buffer to the non-volatile memory in response to the buffer status memory being set to be indicative of the protected state. | 05-05-2016 |
20160132433 | COMPUTER SYSTEM AND CONTROL METHOD - A computer system according to the present invention is composed of a server | 05-12-2016 |
20160139831 | DYNAMIC RELOCATION OF STORAGE - A computing device is provided and includes a first physical memory device, a second physical memory device and a hypervisor configured to assign resources of the first and second physical memory devices to a logical partition. The hypervisor configures a dynamic memory relocation (DMR) mechanism to move entire storage increments currently processed by the logical partition between the first and second physical memory devices in a manner that is substantially transparent to the logical partition. | 05-19-2016 |
20160147456 | MEMORY-ACCESS-RESOURCE MANAGEMENT - The present application is directed to a memory-access-multiplexing memory controller that can multiplex memory accesses from multiple hardware threads, cores, and processors according to externally specified policies or parameters, including policies or parameters set by management layers within a virtualized computer system. A memory-access-multiplexing memory controller provides, at the physical-hardware level, a basis for ensuring rational and policy-driven sharing of the memory-access resource among multiple hardware threads, cores, and/or processors. | 05-26-2016 |
20160147649 | INPUT/OUTPUT TRACE SAMPLING - Exemplary methods, apparatuses, and systems include a host computer selecting a first workload of a plurality of workloads running on the host computer to be subjected to an input/output (I/O) trace. The host computer determines whether to generate the I/O trace for the first workload for a first length of time or for a second length of time. The first length of time is shorter than the second length of time. The determination is based upon runtime history for the first workload, I/O trace history for the first workload, and/or workload type of the first workload. The host computer generates the I/O trace of the first workload for the selected length of time. | 05-26-2016 |
20160147657 | SYSTEM AND METHOD FOR OPTIMIZED DISK IO RAM CACHING FOR A VDI ENVIRONMENT - A system and method provides optimized caching to RAM of disk input/output operations in a virtual environment, such as a virtual desktop infrastructure environment (VDI), thereby reducing the I/O operations per second (IOPS). Generally, existing technologies allocate a fixed amount of RAM for caching based on static criteria and do not consider the actual RAM utilization at a particular point in time. The system and method provides a mechanism for lowering the costs associated with IOPS by utilizing relevant assessment techniques to determine actual RAM usage. The system and method provides an information handling system to dynamically allocate cache so as to optimize the allocation of RAM to cache for use by I/O operations in any virtual environment, for example, a VDI environment. By providing a dynamic allocation the disk requirements for implementing a VDI environment may be reduced. | 05-26-2016 |
20160147665 | WORKLOAD SELECTION AND CACHE CAPACITY PLANNING FOR A VIRTUAL STORAGE AREA NETWORK - Exemplary methods, apparatuses, and systems receive a first input/output (I/O) trace from a first workload and run the first I/O trace through a cache simulation to determine a first miss ratio curve (MRC) for the first workload. A second I/O trace from the first workload is received and run through the cache simulation to determine a second MRC for the first workload. First and second cache sizes corresponding to a target miss rate for the first workload are determined using the first and second MRCs. A fingerprint of each of the first and second I/O traces is generated. The first cache size, the second cache size, or a combination of the first and second cache sizes is selected as a cache size for the first workload based upon a comparison of the first and second fingerprints. A recommended cache size is generated based upon the selected cache size. | 05-26-2016 |
20160154590 | Memory Access Processing Method, Apparatus, and System | 06-02-2016 |
20160154739 | DISPLAY DRIVING APPARATUS AND CACHE MANAGING METHOD THEREOF | 06-02-2016 |
20160162217 | MEMORY ADDRESS REMAPPING SYSTEM, DEVICE AND METHOD OF PERFORMING ADDRESS REMAPPING OPERATION - A memory system includes an address remapping circuit and a first set of memory devices. The address remapping circuit includes a plurality of input terminals for receiving a plurality of chip selection signals and a plurality of chip identification signals. The address remapping circuit receives input signals corresponding to a portion of the plurality of chip selection signals and the plurality of chip identification signals through corresponding input terminals of the plurality of input terminals and generates a plurality of internal chip selection signals based on the input signals and a remapping control signal. Each of the first set of memory devices is configured to be selected in response to a corresponding internal chip selection signal of the plurality of internal chip selection signals. | 06-09-2016 |
20160162410 | DEMOTE INSTRUCTION FOR RELINQUISHING CACHE LINE OWNERSHIP - A computer system processor of a multi-processor computer system having a cache subsystem, the computer system having exclusive ownership of a cache line, executes a demote instruction to cause its own exclusively owned cache line to become shared or read-only in the computer processor cache. | 06-09-2016 |
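The demote semantics in entry 20160162410 can be illustrated with a toy state table; MESI-style state names and the function name are assumptions for illustration, not the patent's terminology:

```python
# Toy illustration of entry 20160162410: a core holding a cache line in an
# exclusively owned state demotes it to a shared/read-only state in its own
# cache. State names follow the common MESI convention (an assumption).

def demote(line_states, core, addr):
    """Demote the (core, addr) line to 'Shared' if it is held 'Exclusive'.

    line_states maps (core, address) -> coherence state.
    Lines not exclusively owned are left unchanged.
    """
    if line_states.get((core, addr)) == "Exclusive":
        line_states[(core, addr)] = "Shared"
    return line_states
```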
20160170677 | Combined Transparent/Non-Transparent Cache | 06-16-2016 |
20160170876 | MANAGED RUNTIME CACHE ANALYSIS | 06-16-2016 |
20160170878 | APPARATUS, SYSTEM AND METHOD FOR CACHING COMPRESSED DATA | 06-16-2016 |
20160170888 | INTERRUPTION OF A PAGE MISS HANDLER | 06-16-2016 |
20160170889 | Systems and Methods for Random Fill Caching and Prefetching for Secure Cache Memories | 06-16-2016 |
20160170893 | NEAR CACHE DISTRIBUTION IN IN-MEMORY DATA GRID (IMDG)(NO-SQL) ENVIRONMENTS | 06-16-2016 |
20160170895 | HIERARCHY MEMORY MANAGEMENT | 06-16-2016 |
20160170911 | SYSTEMS AND/OR METHODS FOR POLICY-BASED ACCESS TO DATA IN MEMORY TIERS | 06-16-2016 |
20160179685 | DATA PROCESSING SYSTEM AND OPERATING METHOD THEREOF | 06-23-2016 |
20160181318 | ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME | 06-23-2016 |
20160181520 | ELECTRONIC DEVICE AND METHOD FOR FABRICATING THE SAME | 06-23-2016 |
20160188209 | APPARATUS AND METHOD FOR ISSUING ACCESS REQUESTS TO A MEMORY CONTROLLER - An apparatus and method are provided for issuing access requests to a memory controller for a memory device whose memory structure consists of a plurality of sub-structures. The apparatus has a request interface for issuing access requests to the memory controller, each access request identifying a memory address. Within the apparatus static abstraction data is stored providing an indication of one or more of the sub-structures of the memory device, and the apparatus also stores an indication of outstanding access requests issued from the request interface. Next access request selection circuitry is then arranged to select from a plurality of candidate access requests a next access request to issue from the request interface. That selection is dependent on sub-structure indication data that is derived from application of an abstraction data function, using the static abstraction data, to the memory addresses of the candidate access requests and the outstanding access requests. Such an approach enables the apparatus to provide a series of access requests to the memory controller with the aim of enabling the memory controller to perform a more optimal access sequence with regard to the memory device. | 06-30-2016 |
20160188481 | Integrated Main Memory And Coprocessor With Low Latency - System, method, and apparatus for integrated main memory (MM) and configurable coprocessor (CP) chip for processing subset of network functions. Chip supports external accesses to MM without additional latency from on-chip CP. On-chip memory scheduler resolves all bank conflicts and configurably load balances MM accesses. Instruction set and data on which the CP executes instructions are all disposed on-chip with no on-chip cache memory, thereby avoiding latency and coherency issues. Multiple independent and orthogonal threading domains used: a FIFO-based scheduling domain (SD) for the I/O; a multi-threaded processing domain for the CP. The CP is an array of independent, autonomous, unsequenced processing engines processing on-chip data tracked by SD of external CMD and reordered per FIFO CMD sequence before transmission. Paired I/O ports tied to unique global on-chip SD allow multiple external processors to slave the chip and its resources independently and autonomously without scheduling between the external processors. | 06-30-2016 |
20160188486 | Cache Accessed Using Virtual Addresses - A computer architecture provides a memory cache that is accessed not by physical addresses but by virtual addresses directly from running processes. Ambiguities that can result from multiple virtual addresses mapping to a single physical address are handled by dynamically tracking synonyms and connecting a limited number of virtual synonyms mapping to the same physical address to a single key virtual address that is used exclusively for cache access. | 06-30-2016 |
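The synonym-tracking idea in entry 20160188486 — funneling multiple virtual addresses that map to one physical address through a single "key" virtual address used for cache access — might be sketched as follows; all class, method, and field names are invented for illustration:

```python
# Loose sketch of entry 20160188486: dynamically track virtual-address
# synonyms and resolve every synonym to a single key virtual address that
# alone indexes the cache. Names are assumptions, not from the patent.

class SynonymTracker:
    def __init__(self):
        self.key_va = {}     # physical address -> key virtual address
        self.va_to_pa = {}   # virtual address  -> physical address

    def map(self, va, pa):
        """Record a VA->PA mapping; the first VA seen for a PA becomes the key."""
        self.va_to_pa[va] = pa
        self.key_va.setdefault(pa, va)

    def cache_index_va(self, va):
        """Translate any synonym to the key VA used exclusively for cache access."""
        return self.key_va[self.va_to_pa[va]]
```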
20160196209 | MEMORY CONTROLLER, METHOD OF CONTROLLING THE SAME, AND SEMICONDUCTOR MEMORY DEVICE HAVING BOTH | 07-07-2016 |
20160202935 | DISTRIBUTED FILE SYSTEM WITH SPECULATIVE WRITING | 07-14-2016 |
20160253094 | INFORMATION PROCESSING DEVICE, DATA CACHE DEVICE, INFORMATION PROCESSING METHOD, AND DATA CACHING METHOD | 09-01-2016 |
20160253106 | DATA DEPLOYMENT DETERMINATION APPARATUS, DATA DEPLOYMENT DETERMINATION PROGRAM, AND DATA DEPLOYMENT DETERMINATION METHOD | 09-01-2016 |
20160378359 | STORAGE DEVICE INCLUDING NONVOLATILE MEMORY DEVICE - A storage device including a nonvolatile memory device is provided. The storage device may include: a nonvolatile memory device; and a controller configured to control a read operation of the nonvolatile memory device according to a read request from an external host device. The controller is configured to read map data including a segment, and to store different types of map data in an internal random access memory (RAM) based on determining whether the segment corresponds to sequential data. | 12-29-2016 |
20160378651 | APPLICATION DRIVEN HARDWARE CACHE MANAGEMENT - A processor includes a processing core to generate a memory request for an application data in an application. The processor also includes a virtual page group memory management (VPGMM) unit coupled to the processing core to specify a caching priority (CP) to the application data for the application. The caching priority identifies importance of the application data in a cache. | 12-29-2016 |
20160378664 | SUPPORTING FAULT INFORMATION DELIVERY - A processor implementing techniques to supporting fault information delivery is disclosed. In one embodiment, the processor includes a memory controller unit to access an enclave page cache (EPC) and a processor core coupled to the memory controller unit. The processor core to detect a fault associated with accessing the EPC and generate an error code associated with the fault. The error code reflects an EPC-related fault cause. The processor core is further to encode the error code into a data structure associated with the processor core. The data structure is for monitoring a hardware state related to the processor core. | 12-29-2016 |
20170235684 | METHOD AND DEVICE FOR CHECKING VALIDITY OF MEMORY ACCESS | 08-17-2017 |
20180024610 | APPARATUS AND METHOD FOR SETTING A CLOCK SPEED/VOLTAGE OF CACHE MEMORY BASED ON MEMORY REQUEST INFORMATION | 01-25-2018 |
20180024922 | JOIN OPERATIONS IN HYBRID MAIN MEMORY SYSTEMS | 01-25-2018 |
20180024928 | MODIFIED QUERY EXECUTION PLANS IN HYBRID MEMORY SYSTEMS FOR IN-MEMORY DATABASES | 01-25-2018 |
20180024934 | SCHEDULING INDEPENDENT AND DEPENDENT OPERATIONS FOR PROCESSING | 01-25-2018 |
20180024937 | CACHING AND TIERING FOR CLOUD STORAGE | 01-25-2018 |
20190146918 | MEMORY BASED CONFIGURATION STATE REGISTERS | 05-16-2019 |
20190146924 | METHOD AND SYSTEM FOR MATCHING MULTI-DIMENSIONAL DATA UNITS IN ELECTRONIC INFORMATION SYSTEM | 05-16-2019 |
20220137818 | UTILIZATION OF A DISTRIBUTED INDEX TO PROVIDE OBJECT MEMORY FABRIC COHERENCY - Embodiments of the invention provide systems and methods to implement an object memory fabric. Object memory modules may include object storage storing memory objects, memory object meta-data, and a memory module object directory. Each memory object and/or memory object portion may be created natively within the object memory module and may be managed at a memory layer. The memory module object directory may index all memory objects and/or portions within the object memory module. A hierarchy of object routers may communicatively couple the object memory modules. Each object router may maintain an object cache state for the memory objects and/or portions contained in object memory modules below the object router in the hierarchy. The hierarchy, based on the object cache state, may behave in aggregate as a single object directory communicatively coupled to all object memory modules and may process requests based on the object cache state. | 05-05-2022 |
20220138027 | METHOD FOR TRANSMITTING A MESSAGE IN A COMPUTING SYSTEM, AND COMPUTING SYSTEM - In a method for transmitting a message in a computing system, the message is transmitted by a transmitter and received by a receiver. The transmitter is granted access to a memory area for the purpose of transmitting using a first virtual address allocated to the transmitter by a memory management unit, whereas the access to the memory area by the transmitter is revoked after transmitting. Subsequently, the receiver is granted access to the memory area for the purpose of receiving using a second virtual address allocated to the receiver by a memory management unit. The first virtual address may be different from the second virtual address. | 05-05-2022 |
20220138102 | Intelligent Content Migration with Borrowed Memory - Systems, methods and apparatuses to intelligently migrate content involving borrowed memory are described. For example, after the prediction of a time period during which a network connection between computing devices having borrowed memory degrades, the computing devices can make a migration decision for content of a virtual memory address region, based at least in part on a predicted usage of content, a scheduled operation, a predicted operation, a battery level, etc. The migration decision can be made based on a memory usage history, a battery usage history, a location history, etc. using an artificial neural network; and the content migration can be performed by remapping virtual memory regions in the memory maps of the computing devices. | 05-05-2022 |
20220138107 | CACHE FOR STORING REGIONS OF DATA - Systems, apparatuses, and methods for efficiently performing memory accesses in a computing system are disclosed. A computing system includes one or more clients, a communication fabric and a last-level cache implemented with low latency, high bandwidth memory. The cache controller for the last-level cache determines a range of addresses corresponding to a first region of system memory with a copy of data stored in a second region of the last-level cache. The cache controller sends a selected memory access request to system memory when the cache controller determines a request address of the memory access request is not within the range of addresses. The cache controller services the selected memory request by accessing data from the last-level cache when the cache controller determines the request address is within the range of addresses. | 05-05-2022 |
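The request-routing decision in entry 20220138107 reduces to an address-range check, sketched below; the class and method names are illustrative assumptions:

```python
# Rough sketch of entry 20220138107: the cache controller serves a request
# from the last-level cache only when the request address falls within the
# tracked system-memory region; otherwise it goes to system memory.

class RegionCache:
    def __init__(self, region_start, region_end):
        # Half-open address range [region_start, region_end) whose data has
        # a copy in the last-level cache.
        self.region = range(region_start, region_end)

    def route(self, addr):
        """Return which memory services the request at addr."""
        return "last-level cache" if addr in self.region else "system memory"
```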